Patricio Sanhueza [email protected]
National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
School of Mathematical and Physical Sciences, University of Newcastle, University Drive, Callaghan NSW 2308, Australia
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
Departamento de Astronomía, Universidad de Chile, Camino el Observatorio 1515, Las Condes, Santiago, Chile
National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
European Southern Observatory (ESO) Headquarters, Karl-Schwarzschild-Str. 2, D-85748 Garching bei München, Germany
National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan

The Infrared Dark Cloud (IRDC) G028.23-00.19 hosts a massive (1,500 M_⊙), cold (12 K), and 3.6-70 μm IR dark clump (MM1) that has the potential to form high-mass stars. We observed this prestellar clump candidate with the SMA (∼3.5″ resolution) and JVLA (∼2.1″ resolution) in order to characterize the early stages of high-mass star formation and to constrain theoretical models. Dust emission at 1.3 mm reveals five cores with masses ≤15 M_⊙. None of the cores currently has the mass reservoir to form a high-mass star in the prestellar phase. If the MM1 clump is ultimately to form high-mass stars, its embedded cores must gather a significant amount of additional mass over time. No molecular outflows are detected in the CO (2-1) and SiO (5-4) transitions, suggesting that the SMA cores are starless. Using the NH_3 (1,1) line, the velocity dispersion of the gas is determined to be transonic or mildly supersonic (ΔV_nt/ΔV_th ∼ 1.1-1.8). The cores are not highly supersonic, as some theories of high-mass star formation predict. The embedded cores are 4 to 7 times more massive than the clump thermal Jeans mass, and the most massive core (SMA1) is 9 times less massive than the clump turbulent Jeans mass. These values indicate that neither thermal pressure nor turbulent pressure dominates the fragmentation of MM1. The low virial parameters of the cores (0.1-0.5) suggest that they are not in virial equilibrium, unless strong magnetic fields of ∼1-2 mG are present. We discuss high-mass star formation scenarios in the context of IRDC G028.23-00.19, a case study believed to represent the initial fragmentation of molecular clouds that will form high-mass stars.

§ INTRODUCTION

For many decades, the study of high-mass star formation has been biased toward the more evolved, brighter, and more easily detected protostellar phases. In recent years, the study of the elusive prestellar phase, before the existence of embedded heating sources, has been recognized as key in constraining models of high-mass star formation <cit.>. However, due to the small known sample of prestellar sources that have the potential to form high-mass stars (>8 M_⊙), the current observational evidence is inconclusive and sometimes both supports and refutes the same theoretical predictions. Seen as dark silhouettes against the Galactic mid-infrared background in Galactic plane surveys (ISO, MSX, Spitzer; <cit.>), infrared dark clouds (IRDCs) are believed to host the earliest stages of star formation.
Several studies have investigated the kinematic and filamentary structure of IRDCs <cit.>, as well as their chemistry <cit.>. Evidence of active high-mass star formation in IRDC clumps[Throughout this paper, we use the term "clump" to refer to a dense object within an IRDC with a size of order ∼0.2–1 pc, a mass of ∼10^2–10^3 M_⊙, and a volume density of ∼10^4–10^5 cm^-3. We use the term "core" to describe a compact, dense object within a clump with a size of ∼0.01–0.1 pc, a mass of ∼1–10^2 M_⊙, and a volume density ≳10^5 cm^-3.] is inferred from the presence of ultracompact (UC) HII regions <cit.>, thermal ionized jets <cit.>, hot cores <cit.>, embedded 24 μm sources <cit.>, molecular outflows <cit.>, or maser emission <cit.>. On the other hand, IRDC clumps with similar masses and densities that lack all of these star formation indicators are the prime candidates to be in the prestellar phase. The prestellar phase remains the least characterized and understood stage of the formation of high-mass stars.

Recently, several high-mass cluster-forming clumps that are candidates to be in the prestellar phase have been found, mainly using Herschel observations <cit.> in combination with line surveys <cit.>. However, only a few targets have been followed up in detail to confirm the lack of star formation. A prestellar, high-mass cluster-forming clump completely devoid of active high-mass star formation <cit.> and prestellar cores embedded in high-mass cluster-forming clumps <cit.> stand out as the best candidates for studying the prestellar phase in the high-mass star formation regime because they have been studied in depth using space-borne telescopes, single-dish ground-based radio telescopes, and radio interferometers. Prestellar core candidates that can form high-mass stars have so far been found exclusively toward active high-mass cluster-forming clumps. These prestellar cores have masses of a few tens of solar masses and volume densities larger than 10^5 cm^-3 <cit.>. In these cores, there is strong observational evidence that turbulence alone cannot provide sufficient support against gravity to avoid rapid collapse <cit.>. Magnetic fields, which are known to play an important role in the formation of dense cores in more evolved massive clumps <cit.>, may also be important in clumps at earlier stages of evolution.

§.§ High-Mass Star Formation Theories

Current theories of high-mass star formation can be primarily separated by the way in which stars acquire their mass from the environment: core accretion ("core-fed") and competitive accretion ("clump-fed"). The turbulent core accretion model <cit.> posits that all stars form by a top-down fragmentation process in which a cluster-forming clump fragments into cores under the combined effects of self-gravity, turbulence, and magnetic fields. These cores are gravitationally bound, and they are the entities that directly feed the central protostars ("core-fed"). The pressure support that maintains the cores close to internal virial equilibrium is provided by turbulence and/or magnetic fields. Cores have no significant further accumulation of gas from the surrounding medium, implying that the final stellar mass is smaller than the core mass. The core mass is set at early times and, thus, in order to form a high-mass star, a high-mass core must exist in the prestellar phase.
Therefore, the core accretion theory predicts a direct relationship between the distribution function of core masses, known as the core mass function (CMF), and the mass distribution function of newly formed stars, known as the initial mass function (IMF) <cit.>. However, it is not clear what prevents a high-mass core from fragmenting into several low-mass cores. According to <cit.>, the heat produced by accreting low-mass stars in regions with surface densities of at least 1 g cm^-2 can halt fragmentation of the high-mass core. However, this mechanism for preventing fragmentation has been questioned by <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. <cit.> and <cit.> suggest that magnetic fields, combined with radiative feedback, can strongly suppress core fragmentation. However, direct observations of magnetic fields in the very early stages of high-mass star formation remain difficult, although observations in regions with high-mass protostars appear to indicate that magnetic fields are important <cit.>.

Competitive accretion models posit that a cluster-forming clump fragments into cores with masses close to the thermal Jeans mass <cit.>, ∼2 M_⊙ at a volume density of 5×10^4 cm^-3 and a temperature of 12 K. None of these cores is massive enough to form a high-mass star. However, the cores that are located at the center of the clump's gravitational potential can accrete, via modified Bondi-Hoyle accretion, sufficient mass over time to grow and eventually form high-mass stars. One important distinction from the turbulent core accretion model is that the mass reservoir available to form the high-mass stars is accreted from material well beyond the original cores, and the gas is funneled down to the center of the clump by large-scale infall driven by the gravitational potential of the entire clump ("clump-fed"). Thus, the core mass is gathered during the star formation process itself and is not set in the prestellar stage. The final stellar mass is therefore predicted to be larger than the initial core mass. According to competitive accretion, there are no high-mass prestellar cores, in disagreement with the turbulent core accretion model. A consequence of competitive accretion is that high-mass stars would always form near the center of stellar clusters. <cit.> suggest that a subvirial state is required for competitive accretion to allow the formation of high-mass stars. However, <cit.> show that this is not the case for the simulations in <cit.>. On the other hand, in the simulations of <cit.>, the material in the vicinity of the newly formed stars is supported by neither turbulence nor magnetic fields, and it is typically in a state of rapid collapse (subvirial).

§.§ IRDC G028.23-00.19

In this work, we aim to characterize the prestellar phase in the high-mass regime in order to test predictions of high-mass star formation theories. <cit.> studied a prestellar, high-mass cluster-forming clump candidate, MM1, located in the IRDC G028.23-00.19 (∼5,000 M_⊙), that has the potential to form high-mass stars. The whole IRDC has recently been observed in IR polarization, which allowed the determination of magnetic field strengths of 10–165 μG in the low-density regions at pc scales <cit.>. The massive clump MM1 is dark at Spitzer/IRAC 3.6, 4.5, and 8.0 μm <cit.>, Spitzer/MIPS 24 μm <cit.>, and Herschel/PACS 70 μm <cit.>. Remarkably, MM1 has a 24 μm optical depth (τ_24μm) close to unity, which is the highest in the IRDC sample studied by <cit.>.
The total mass of the clump is 1,500 M_⊙, its radius ∼0.6 pc, its volume density 3×10^4 cm^-3, and its distance 5.1 kpc <cit.>. The clump is gravitationally unstable, with a virial parameter significantly below unity (α=0.3). The spectral energy distribution from 250 μm to 1.2 mm and the rotational diagram of low-excitation CH_3OH lines both reveal cold dust/gas emission at 12 K <cit.>. Observations at 1.3, 3.6, and 6 cm that searched for free-free emission <cit.> and for H_2O and CH_3OH maser emission <cit.> have resulted in null detections. The cold temperatures, coupled with the lack of several indicators of star formation, suggest that the massive clump MM1 is a pristine prestellar clump appropriate for the study of the earliest stages of high-mass star formation. Using high-angular resolution observations from the SMA (∼3.5″ resolution) and JVLA (∼2″ resolution), we have searched for the embedded cores and determined their dynamical state at <0.1 pc scales.

§ OBSERVATIONS

Observations of IRDC G028.23-00.19 were carried out with the Submillimeter Array[The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica.] (SMA) and the Karl G. Jansky Very Large Array (JVLA), operated by the National Radio Astronomy Observatory[The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.].

§.§ SMA Observations

SMA 1.3 mm line and continuum observations were taken during April 2012 and July 2013 in the compact configuration. The projected baselines range from 10 to 69 m. The IRDC was completely mapped by combining images in a mosaic of 5 separate pointings, using the same correlator setup, which covers 4 GHz in each of the lower and upper sidebands. A spectral resolution of 1.1 km s^-1 (0.812 MHz) was used. The continuum emission was produced by averaging the line-free channels in visibility space. Using natural weighting, the 1σ rms noise for the continuum emission is 0.75 mJy beam^-1. The system temperature typically varied from 150 to 220 K during the observations. At the center frequency of 224.6975 GHz (1.3 mm), the primary beam or field of view of the SMA is 56″. These SMA observations are sensitive to structures with angular scales smaller than ∼30″. The final synthesized beam has a size of 4.1″×3.0″ with a P.A. of -25°. The geometric mean of the major and minor axes is 3.5″, which corresponds to a physical size of ∼0.09 pc (∼18,000 AU) at a distance of 5.1 kpc <cit.>.

The data from different tracks were calibrated separately using the IDL-based MIR package and exported to CASA to be combined in the visibility domain for imaging. Typical SMA observations may be subject to up to 15% uncertainty in the absolute flux scale. The quasar J1743-038 was periodically observed for phase calibration. The quasars J2202+422 (BL Lac) and J0319+415 (3C84) were used for bandpass calibration. Uranus and the bright radio continuum source MWC349A were used for flux calibration.

§.§ JVLA Observations

The JVLA observations consisted of two pointings at K-band (1.3 cm) covering the whole IRDC, taken in February 2012 in the C configuration. The projected baselines ranged from 50 to 3,400 m. NH_3 (J, K)=(1, 1) at 23.6944955 GHz and NH_3 (J, K)=(2, 2) at 23.7226336 GHz were simultaneously observed with a spectral resolution of 0.4 km s^-1 (31.25 kHz) in dual polarization mode.
At these frequencies, the primary beam of the JVLA is 1.9′. These observations are sensitive to angular scales smaller than ∼1′. The data were calibrated and imaged using the CASA 4.2 software package. In order to improve the S/N ratio, a Gaussian "outer taper" of 1.6″ was applied during the clean deconvolution, yielding twice the original synthesized beam. Using natural weighting, the synthesized beam was 2.3″×2.0″ with a P.A. of 32° for both NH_3 lines, while the 1σ rms noise was 0.78 mJy beam^-1 per channel for NH_3 (1,1) and 0.75 mJy beam^-1 per channel for NH_3 (2,2). The conversion factor from mJy beam^-1 to brightness temperature in K is 0.47 (1 mJy beam^-1 = 0.47 K). The geometric mean of the major and minor axes is 2.1″, which corresponds to a physical size of ∼0.05 pc (∼11,000 AU) at a distance of 5.1 kpc. The bandpass and flux calibrations were performed using observations of the quasar J1331+305 (3C286). The phase calibration was done by periodically observing the quasar J1851+0035.

§ WILL THE CLUMP MM1 IN IRDC G028.23-00.19 FORM HIGH-MASS STARS?

Current observational evidence supports the idea that the clump MM1 will form high-mass stars. In addition, the clump properties are also consistent with those used in simulations that produce stellar clusters including high-mass stars. We assess the potential of IRDC G028.23-00.19 to form high-mass stars below:

i) Based on the observed relation between the maximum stellar mass in a cluster (m_max) and the total mass of the cluster (M_cluster), given by <cit.>

(m_max/M_⊙) = 1.2 (M_cluster/M_⊙)^0.45 ,

the mass of the most massive star that will be formed in a cluster-forming clump can be estimated. Assuming a cluster star formation efficiency of 30% <cit.>, the clump MM1 in IRDC G028.23-00.19, which has a mass of 1,500 M_⊙, should form a stellar cluster with a total mass of 450 M_⊙. According to the discussion in Section <ref>, the uncertainty in the mass is 50%. Larson's relationship then predicts that this clump will form one high-mass star of 19 ± 4 M_⊙. Even with only a 5% star formation efficiency, an 8 ± 2 M_⊙ star should be formed. Using the IMF from <cit.>, the maximum stellar mass of a clump can also be estimated and is given by (see Appendix <ref>)

m_max = [(0.3/ϵ_sfe)(17.3/M_clump) + 1.5×10^-3]^-0.77 .

For the clump MM1, assuming a 30% star formation efficiency (ϵ_sfe), a high-mass star of 28 ± 9 M_⊙ can be formed.

ii) <cit.> find an empirical high-mass star formation threshold, based on clouds with and without high-mass star formation. They suggest that IRDCs with masses larger than the mass limit given by m_lim = 580 M_⊙ (r/pc)^1.33, where r is the source radius, are forming high-mass stars or will likely form them in the future. To be consistent with calculations made in our work, we use the factor of 580 directly obtained by using the <cit.> dust opacities and not the decreased (by a factor of 1.5) dust opacities that lead to the original value of 870 <cit.>. Applying the <cit.> relationship to the clump MM1 (r ≈ 0.6 pc), its corresponding mass threshold is 290 M_⊙, well below the measured mass of 1,500 M_⊙. Thus, "the compactness" (M_dust/m_lim) of MM1 is 5.2, and it is highly likely that the clump will form high-mass stars (see the numerical sketch below).
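For concreteness, criteria (i) and (ii) can be checked with a few lines of code. The following Python sketch is our own illustration (the function of it is purely arithmetic; the input numbers are the published clump values quoted above):

    # Minimal sketch: maximum stellar mass and high-mass star formation
    # threshold for the clump MM1 (inputs are the published clump values).
    M_clump = 1500.0   # clump mass [Msun]
    r_clump = 0.6      # clump radius [pc]
    eps_sfe = 0.30     # assumed cluster star formation efficiency

    # (i) Larson-type relation: m_max = 1.2 (M_cluster)^0.45
    M_cluster = eps_sfe * M_clump                  # 450 Msun
    m_max_larson = 1.2 * M_cluster**0.45           # ~19 Msun

    # (i) IMF-based estimate (see Appendix)
    m_max_imf = (0.3 / eps_sfe * 17.3 / M_clump + 1.5e-3) ** -0.77   # ~28 Msun

    # (ii) Kauffmann & Pillai-type threshold: m_lim = 580 (r/pc)^1.33
    m_lim = 580.0 * r_clump**1.33                  # ~290 Msun
    compactness = M_clump / m_lim                  # ~5.2

    print(f"m_max (Larson) = {m_max_larson:.0f} Msun")
    print(f"m_max (IMF)    = {m_max_imf:.0f} Msun")
    print(f"m_lim = {m_lim:.0f} Msun, compactness = {compactness:.1f}")

Running this reproduces the 19 M_⊙, 28 M_⊙, and 290 M_⊙ values quoted above.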
Based on large samples of high-mass star-forming regions, <cit.> and <cit.> suggest that high-mass stars form in clumps with Σ_clump > 0.05 g cm^-2. <cit.> suggest a significantly larger surface density threshold (0.3 g cm^-2) for high-mass star formation, based on the detection of massive outflows (outflow mass > 10 M_⊙). MM1 has Σ_clump and Σ_peak of 0.3 and 0.4 g cm^-2, respectively. The surface density of MM1 was calculated as Σ = M/(π r^2), with r_clump = 0.6 pc and r_peak = 0.14 pc, using the IRAM 30 m dust continuum observations at 11″ angular resolution (see Figure <ref>). These surface densities are significantly larger than the lower limit suggested for high-mass star formation and are consistent with the highest threshold. With a low virial parameter <cit.>, IRDC G028.23-00.19 MM1 is unstable and subject to collapse.

Therefore, several pieces of evidence indicate that IRDC G028.23-00.19 will almost certainly form a stellar cluster including high-mass stars. The question that remains open is how. Since IRDC G028.23-00.19 MM1 appears to be in the prestellar stage and is likely to form high-mass stars, observations of this massive clump can be used to distinguish the early stage differences posited by high-mass star formation theories (see Section <ref>).

§ RESULTS

§.§ Dust Continuum Emission

Figure <ref> shows the 1.3 mm dust continuum emission from the SMA (∼3.5″ angular resolution) in gray scale and red contours. White contours correspond to the 1.2 mm dust continuum emission from the single-dish IRAM telescope (11″ angular resolution). In the whole cloud, five SMA cores are detected above 5σ, which corresponds to 4.5 M_⊙ (following the procedure described in Section <ref>). The 5σ threshold was used because the sidelobe pattern produces negative artifacts with absolute values as large as 4σ, suggesting that some of the positive detections at 3σ and 4σ may be spurious sources due to sidelobes rather than real sources. The cores are named SMA1, SMA2, SMA3, SMA4, and SMA5 in order of decreasing peak flux (see Figure <ref> and Table <ref>).

Except for SMA4, all cores were fitted with 2-D Gaussians in the CASA software package. The fitted parameters are listed in Table <ref>. The deconvolved size was adopted to determine the physical size of the cores. No 2-D Gaussian fit succeeded for SMA4; the flux inside the contour defined at the 4σ level was used to estimate its integrated flux. To estimate the physical parameters of SMA4, we adopted the synthesized beam as the physical size. The parameters for SMA4 should be treated with caution because this core could be composed of a few unresolved condensations. SMA4 is not centrally peaked like the other SMA cores and has approximately constant brightness above the 5σ contour.

From <cit.>, the high-mass clump MM1 has a 1.2 mm single-dish integrated flux of 1.63 Jy. Comparing this integrated flux with the 1.3 mm SMA integrated flux of ∼80 mJy, ∼8% of the single-dish flux is recovered by the interferometer (assuming β=1.8 to compare the somewhat different frequencies).

Table <ref>: SMA Core Parameters

Core   α(J2000)      δ(J2000)      Peak Flux       Integrated Flux   Angular Size   Deconvolved Size
Name                               (mJy beam^-1)   (mJy)             (″ × ″)        (″ × ″)
SMA1   18:43:30.81   -04.13.19.6   8.18            12.2              4.8 × 3.8      2.8 × 2.1
SMA2   18:43:28.55   -04.12.17.6   8.14            10.2              4.5 × 3.4      2.2 × 1.1
SMA3   18:43:32.36   -04.13.34.3   5.35            7.04              4.5 × 3.6      2.0 × 1.9
SMA4   18:43:31.31   -04.13.16.4   5.29            9.28              …              …
SMA5   18:43:30.64   -04.13.33.1   4.80            6.97              5.0 × 3.6      3.1 × 1.5

Notes. Fitting uncertainties are <1% for the peak flux, integrated flux, and angular size, and <3% for the deconvolved size. For the calculation of physical properties, as discussed in Section <ref>, the uncertainties of the flux and size measurements are dominated by the absolute flux scale of the SMA (15%) and the distance to the source (10%). No 2-D Gaussian fit was reliable for SMA4.
In order to estimate its total flux, the flux inside the contour defined at 4σ was integrated. Its adopted size is the SMA synthesized beam.

§.§ SiO and CO Emission

High-velocity gas in the SiO and CO lines is frequently interpreted as molecular outflows, and thus these lines can reveal deeply embedded active star formation that may be undetected at IR wavelengths. We searched for emission from the SiO (5-4) transition and found no detection in IRDC G028.23-00.19 MM1 at a sensitivity of 38 mJy beam^-1 per 1.1 km s^-1 channel. Emission from the CO (2-1) transition is detected, but the line profiles are heavily affected by self-absorption and/or missing flux. At a sensitivity of 40 mJy beam^-1 per 1.1 km s^-1 channel, there is no evidence of wing emission that would indicate protostellar outflows. Therefore, at the sensitivity level of these observations, we confirm that the cores embedded in the IRDC G028.23-00.19 MM1 clump are starless. <cit.> found SiO (2-1) emission to the north and south of MM1. With the SMA observations, we confirm the absence of molecular outflows; the more likely mechanism for releasing SiO to the gas phase is large-scale shocks rather than active star formation.

§.§ NH_3 Emission

§.§.§ Images

Figure <ref> shows, in color scale, the moment 0 (integrated intensity) map of the five NH_3 hyperfine lines overlaid with the 1.3 mm dust continuum emission from the SMA. Data above 2.5σ per channel were used for making the moment map. NH_3 emission is generally associated with dust emission, although the molecular emission is more spatially extended than the dust emission and is sometimes detected in regions without a dust counterpart. The global, large-scale kinematics of the IRDC will be studied in detail in a following paper, in which we will recover the missing flux by combining the NH_3 JVLA interferometric observations with single-dish Green Bank Telescope (GBT) observations. In this paper, we focus on the compact cores detected in the prestellar, high-mass clump at the center of the IRDC.

Figure <ref> shows an image of the central region containing four of the five SMA dust cores. This region corresponds to the central part of the massive clump MM1. The four panels show: (a) the moment 0 map of the 5 NH_3 (1,1) hyperfines, (b) the NH_3 (2,2) line, (c) the 4 NH_3 (1,1) satellites, and (d) the NH_3 (1,1) main component, in color scale overlaid with black contours that correspond to the 1.3 mm dust continuum emission from the SMA. The NH_3 emission peaks do not overlap with the dust peaks, and remarkably, the NH_3 emission seems to avoid the dust cores (except in SMA5). This is more evident toward the SMA1 and SMA4 cores in panel (d). As can be seen in Figure <ref>, the NH_3 emission weakens toward the centers of SMA1 and SMA4, independently of the transition used to make the moment map. This is likely produced by the combination of optical depth effects and depletion. The satellites could be self-absorbed and become weak toward the densest parts of the cores. The NH_3 (2,2) line may not be excited at the low temperatures near the core centers. In addition, at the low temperatures and high densities of the core centers, NH_3 could be frozen out onto dust grains.

§.§.§ Spectra

Figure <ref> shows the NH_3 (1,1) and (2,2) spectra toward selected positions in the clump MM1.
At the position of SMA5, the NH_3 (1,1) spectrum shows the normal relative intensity between the main component and the four satellites, i.e., the main component is brighter than the satellites. At the positions of the SMA1 and SMA4 cores, the main component is weaker than the satellites, while at the intermediate position labeled "A" in Figure <ref>, the relative intensity among transitions is approximately unity.

The relative optical depths between the main NH_3 (1,1) component and the satellites are determined by quantum mechanics according to their statistical weights. However, the observed relative intensity can be modified by optical depth effects: in the optically thin limit, the relative intensity will equal the ratio of the statistical weights, whereas in the optically thick limit, the relative intensity will approach unity. Toward some positions in the clump MM1, the relative intensity of ∼1 likely indicates high optical depths. However, an additional explanation is required for why in some places the main hyperfine component is weaker than the satellites. This unique feature is likely produced by self-absorption by the cold gas in SMA1 and SMA4, as explained below.

The critical density of NH_3 (1,1) is a few times 10^4 cm^-3. As determined in Section <ref>, the gas in the cores has densities of ∼10^6 cm^-3. At these high densities, the gas in the cores should be thermalized and LTE should hold. As will be discussed in Section <ref>, the gas temperature derived using single-dish telescopes and interferometers points to a common value of ∼12 K. Unfortunately, a direct estimation of the temperature at the positions of the SMA dust cores with the JVLA NH_3 data cannot be made because the main (1,1) component is less bright than its satellites. The emission from the main hyperfine is weaker than the satellites only at the positions of the dust cores. This localized self-absorption rules out missing flux as a plausible explanation: missing extended NH_3 emission would affect a much larger area and would have no reason to affect only the dust cores. If a point-like warmer source were deeply embedded in the SMA cores, the surrounding, colder medium could absorb the warmer emission. As a result, one would observe a spectrum with a shape similar to the observed SMA spectrum at the core positions, except that the emission lines should generally be brighter than in the surrounding, colder medium (especially the satellites). In the clump MM1, we see that the satellite lines decrease in intensity toward the dust cores, which is the opposite of what is expected if there is an embedded warmer source. On the other hand, to explain the abnormal NH_3 profile, we suggest that the more diffuse (3×10^4 cm^-3) gas/dust in the IRDC is at ∼12 K, whereas the material in the dense cores is colder (<12 K). Thus, along our line of sight, the cold core gas absorbs the warmer, more diffuse background IRDC gas. In this picture, we assume that the surrounding medium is not symmetrically distributed around the cores and that there is an excess of background IRDC gas with respect to foreground IRDC gas. The peculiarity of the NH_3 spectra at the positions of SMA1 and SMA4 would then be the result of cold gas without internal heating sources, supporting the starless status of the cores.

§.§.§ Line Widths[We use the term line width (ΔV) to refer to the full width at half maximum (FWHM) of molecular line emission. The line width relates to the velocity dispersion (σ) by ΔV = 2√(2 ln 2) σ ≈ 2.35σ.]
Line widths are required to determine the turbulence of the gas, the virial mass, and the virial parameter. A multi-Gaussian function with fixed frequency separations between the hyperfine transitions was fitted to an averaged spectrum of 25 pixels (approximately the NH_3 beam size) centered on each SMA core position. The line widths of the outer left hyperfine (JKF_1 = 111→110) associated with each SMA core are displayed in Table <ref>. When line widths approach the thermal line width (∼0.2 km s^-1 at 12 K), the magnetic hyperfine splitting can be resolved <cit.>, except for the outer left hyperfine. The two magnetic hyperfine transitions that form the outer left hyperfine are separated by only 0.14 km s^-1 (11 kHz) and may be barely resolved in cold, low-mass star-forming regions. In IRDC G028.23-00.19, all other hyperfines show line widths 20% or more larger than the one measured for the outer left hyperfine, indicating that the magnetic hyperfine splitting is becoming relevant at the line widths observed in this IRDC (although the splittings are not resolved).

The observed line widths (ΔV_obs) of the outer left hyperfine displayed in Table <ref> have an average of 0.9 km s^-1, which is roughly twice the channel width of the observations (0.4 km s^-1). The deconvolved line width (ΔV_dec), in km s^-1, is determined following ΔV_dec^2 = ΔV_obs^2 - 0.4^2. Calculated values are displayed in Table <ref>. The deconvolved line width is 10% to 20% lower than the observed line width and will be used for the determination of the cores' properties.

Table <ref>: Measured and Derived Line Widths Associated with the SMA Cores

Core   ΔV_obs        ΔV_dec        ΔV_int      ΔV_nt       Mach
Name   (km s^-1)     (km s^-1)     (km s^-1)   (km s^-1)   Number
(1)    (2)           (3)           (4)         (5)         (6)
SMA1   1.02 ± 0.15   0.91 ± 0.16   <0.65       0.89        1.8
SMA2   0.68 ± 0.18   0.55 ± 0.22   <0.39       0.52        1.1
SMA3   0.75 ± 0.25   0.63 ± 0.29   <0.45       0.61        1.3
SMA4   0.94 ± 0.12   0.85 ± 0.13   <0.60       0.83        1.7
SMA5   0.95 ± 0.17   0.86 ± 0.19   <0.61       0.84        1.8

Notes. Columns (2), (3), and (4) correspond to line widths of the NH_3 outer left hyperfine. ΔV_obs is the observed line width. ΔV_dec is the deconvolved line width given by ΔV_dec^2 = ΔV_obs^2 - 0.4^2, where 0.4 km s^-1 is the spectral resolution. ΔV_dec is used to calculate the physical parameters of the SMA cores. ΔV_int is the intrinsic line width given by Equation <ref>. Column (5) is the non-thermal component of the NH_3 line determined from ΔV_dec, assumed to be the same as the non-thermal component of H_2. In column (6), the Mach number is calculated as ΔV_nt/ΔV_th, where ΔV_th = 0.48 km s^-1 corresponds to the H_2 thermal line width.

The intrinsic line width (ΔV_int) of a line can be broadened by the line optical depth. As will be discussed in Section <ref>, the optical depth of the outer left hyperfine is at least ∼2.4 at the SMA core positions. Fitting a Gaussian profile to an optically thick or moderately thick line will result in an overestimation of the real velocity dispersion of the gas. To correct for this effect, the following expression can be used <cit.>:

ΔV_dec/ΔV_int = (1/√(ln 2)) √( -ln[ -(1/τ) ln((1 + e^-τ)/2) ] ) ,

where the ΔV_int profile is assumed to be Gaussian. Adopting τ = 2.4, the correction for optical depth produces intrinsic line widths ∼30% smaller (see Table <ref>).
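As a quick numerical check of this correction (a minimal sketch; the function name is ours), the broadening factor and the resulting upper limits on ΔV_int can be computed as:

    import numpy as np

    def opacity_broadening(tau):
        """dV_obs / dV_int for a Gaussian profile with peak optical depth tau
        (the opacity-broadening correction quoted in the text)."""
        inner = -np.log((1.0 + np.exp(-tau)) / 2.0) / tau
        return np.sqrt(-np.log(inner) / np.log(2.0))

    factor = opacity_broadening(2.4)   # ~1.41, i.e., intrinsic widths ~30% smaller
    for core, dv_dec in [("SMA1", 0.91), ("SMA2", 0.55), ("SMA3", 0.63),
                         ("SMA4", 0.85), ("SMA5", 0.86)]:
        print(f"{core}: dV_int < {dv_dec / factor:.2f} km/s")

With τ = 2.4, this reproduces the ΔV_int upper limits listed in column (4) of the table above.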
We prefer to use ΔV_dec over ΔV_int in the following calculations because the uncertainty in τ is large (and it propagates to ΔV_int, the virial masses, and the virial parameters). However, we stress that large optical depths will make ΔV_int smaller, which produces virial masses and virial parameters lower than those calculated with ΔV_dec. If ΔV_int is used, our final conclusions do not change; rather, they are reinforced.

§ ANALYSIS

§.§ NH_3 Optical Depth and Rotational Temperature

The calculation of several physical parameters depends on the temperature of the medium. In this section, we define the temperature that will be used for the calculation of physical parameters in the following sections. The optical depth of the NH_3 hyperfine lines can be derived by taking the ratio between the main and the satellite hyperfine components (assuming they have the same filling factor), following, for example, <cit.>:

(1 - e^-τ_(1,1,m)) / (1 - e^-γτ_(1,1,m)) = T_b_(1,1,m)/T_b_(1,1,s) ,

where T_b_(1,1,m) and T_b_(1,1,s) are the brightness temperatures of the main and satellite components, respectively. The factor γ is the relative strength determined by the statistical weights. The value of γ is 1/3.6 for the two inner satellites and 1/4.5 for the two outer satellites <cit.>. The rotational temperature, T_R, that characterizes the population distribution between the (1,1) and (2,2) states can be determined by <cit.>

T_R = -41.5 / ln[0.282(τ_(2,2,m)/τ_(1,1,m))] K ,

where τ_(2,2,m) can be determined from

(1 - e^-τ_(1,1,m)) / (1 - e^-τ_(2,2,m)) = T_b_(1,1,m)/T_b_(2,2,m) .

The determination of the optical depth and rotational temperature becomes unreliable when the intensity ratio between T_b_(1,1,m) and T_b_(1,1,s) approaches unity, and impossible when the main NH_3 component is weaker than the satellites. As discussed earlier, NH_3 profiles with these odd characteristics are repeatedly seen in IRDC G028.23-00.19, especially in the clump MM1. Thus, optical depths and rotational temperatures were determined only at a few positions, and the mean values associated with the SMA cores are reported here. These mean values were determined inside the contour defined by the 5σ level of the dust continuum emission. The mean optical depth of the NH_3 (1,1) main component is 11, demonstrating that the ammonia emission is optically thick. The mean optical depths of the NH_3 (1,1) satellites and the NH_3 (2,2) main component are moderately optically thick, with values of 2.4 and 1.2, respectively. All these optical depth values should be treated as lower limits for the SMA cores since, at the centers of the cores, exact values cannot be determined due to the extremely high optical depths. The mean rotational temperature is 13 K. This value corresponds to an upper limit because at higher optical depths, the temperature is lower. <cit.> also estimate ∼13 K using lower angular resolution observations of NH_3. This temperature is within the uncertainties quoted by the other two methods used for temperature determination: both the Herschel dust temperature and the temperature derived by the rotational technique using CH_3OH are 12 ± 2 K <cit.>. This latter value, 12 ± 2 K, will be adopted for the rest of this work.
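For illustration, the two-step procedure above can be implemented in a few lines. This is a sketch with hypothetical brightness temperatures (placeholder inputs, not our measured values; function names are ours):

    import numpy as np
    from scipy.optimize import brentq

    def tau_main(Tb_main, Tb_sat, gamma=1 / 4.5):
        """Solve (1 - e^-tau) / (1 - e^-gamma*tau) = Tb_main / Tb_sat for tau_(1,1,m).
        gamma = 1/3.6 for the inner satellites, 1/4.5 for the outer ones."""
        ratio = Tb_main / Tb_sat
        f = lambda tau: (1 - np.exp(-tau)) / (1 - np.exp(-gamma * tau)) - ratio
        return brentq(f, 1e-3, 100.0)   # ratio must lie between 1 and 1/gamma

    def tau_22(tau11, Tb11, Tb22):
        """Solve (1 - e^-tau11) / (1 - e^-tau22) = Tb_(1,1,m) / Tb_(2,2,m)."""
        f = lambda t: (1 - np.exp(-tau11)) / (1 - np.exp(-t)) - Tb11 / Tb22
        return brentq(f, 1e-3, 100.0)

    def t_rot(tau11, tau22):
        """Rotational temperature [K] from the (1,1) and (2,2) optical depths."""
        return -41.5 / np.log(0.282 * tau22 / tau11)

    # Placeholder brightness temperatures [K], for illustration only:
    t11 = tau_main(Tb_main=2.0, Tb_sat=1.2)   # tau_(1,1,m) ~ 4
    t22 = tau_22(t11, Tb11=2.0, Tb22=0.9)     # tau_(2,2,m) ~ 0.6
    print(f"tau11 = {t11:.1f}, tau22 = {t22:.1f}, T_R = {t_rot(t11, t22):.0f} K")

With these placeholder inputs the sketch returns T_R ≈ 13 K, of the same order as the mean value quoted above.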
§.§ Jeans Mass, Free-Fall Time, and Dynamical Crossing Time of the MM1 Clump

Here we calculate the Jeans mass to compare with the measured core masses in the MM1 clump. The comparison between the free-fall time and the dynamical crossing time can give additional information on the dynamical state of the clump. If the fragmentation of a clump is governed by the Jeans instability, the initially homogeneous gas will fragment into smaller pieces defined by the Jeans length (λ_J) and the Jeans mass (M_J):

λ_J = σ_th √(π/(Gρ)) , and

M_J = (4πρ/3)(λ_J/2)^3 = (π^(5/2)/6) σ_th^3/√(G^3 ρ) ,

where ρ is the mass density and σ_th is the thermal velocity dispersion (or isothermal sound speed, c_s) given by

σ_th = (k_B T/(μ m_H))^(1/2) .

The thermal velocity dispersion is mostly dominated by H_2 and He, and it should be derived using the mean molecular weight per free particle, μ = 2.37 <cit.>. The thermal line width of the total gas (ΔV_th = 2√(2 ln 2) σ_th) is 0.48 km s^-1. Assuming a mass density given by a spherical clump of mass 1,500 M_⊙ and radius 0.6 pc <cit.>, the thermal Jeans length and mass of MM1 are 0.14 pc and 2.2 M_⊙, respectively. If we replace σ_th by the observed velocity dispersion (σ_obs) in the MM1 clump, we obtain the turbulent Jeans mass. Using the NH_2D (1-1) line width of 1.9 km s^-1 (σ_obs = 0.81 km s^-1) observed on clump scales by <cit.>, we obtain a turbulent Jeans mass of 130 M_⊙ for the clump.

The characteristic time for gravitational collapse (ignoring thermal pressure, turbulence, and magnetic fields), known as the free-fall time, is given by

t_ff = √(3π/(32Gρ)) ,

and the dynamical crossing time, which depends on the radius of the clump (R) and its gas velocity dispersion (σ_obs), is given by t_dyn = R/σ_obs. The free-fall time and the dynamical crossing time of MM1 are 2.0×10^5 yr and 7.3×10^5 yr, respectively. A clump becomes gravitationally unstable if t_ff < t_dyn <cit.>. The t_ff/t_dyn ratio is 0.3, indicating gravitational contraction.

§.§ Non-thermal Component of the SMA Cores

The level of turbulence in the cores can be compared with the turbulence suggested by some high-mass star formation theories. Assuming that the NH_3 emission traces the velocity dispersion in the interior of the SMA cores, the non-thermal component (ΔV_nt) can be estimated using ΔV_dec from the outer left hyperfine and the relation ΔV_dec^2 = ΔV_th^2 + ΔV_nt^2. The thermal line width (ΔV_th) for NH_3 is 0.18 km s^-1 at 12 K. Because the non-thermal component is independent of the observed line from which it is determined (except for molecular outflow/shock tracers), the ΔV_nt determined from NH_3 represents the turbulent motions of the total gas in the cores (mostly H_2). ΔV_nt ranges from 0.52 to 0.89 km s^-1. The thermal line width of the total gas (for μ = 2.37) is 0.48 km s^-1, leading to Mach numbers of ΔV_nt/ΔV_th ∼ 1.1-1.8. If ΔV_int is used instead of ΔV_dec, a lower non-thermal component is derived, just 0.7-1.3 times the thermal line width. ΔV_nt and the Mach numbers (ΔV_nt/ΔV_th) are given in Table <ref>.

§.§ SMA Cores' Properties using Dust Continuum Emission

The core masses are determined to search for the existence of high-mass cores and to compare with the thermal and turbulent Jeans masses. The mass of the cores was calculated using the following expression:

M_core = ℝ F_ν D^2/(κ_ν B_ν(T)) ,

where F_ν is the measured integrated source flux, ℝ is the gas-to-dust mass ratio, D is the distance to the source, κ_ν is the dust opacity per gram of dust, and B_ν is the Planck function at the dust temperature T. A value of 0.9 cm^2 g^-1 is adopted for κ_1.3mm, which corresponds to the opacity of dust grains with thin ice mantles at gas densities of 10^6 cm^-3 <cit.>. A gas-to-dust mass ratio of 100 was assumed in this work. The number density was calculated by assuming a spherical core and using the molecular mass per hydrogen molecule (μ_H_2) of 2.8 <cit.>.
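A minimal numerical sketch of this mass estimate, using astropy (helper names are ours; the flux is that of SMA1 from Table <ref>):

    import numpy as np
    from astropy import units as u
    from astropy import constants as const

    def planck_nu(nu, T):
        """Planck function B_nu(T), as it enters the mass formula."""
        x = (const.h * nu / (const.k_B * T)).decompose().value
        return (2 * const.h * nu**3 / const.c**2) / np.expm1(x)

    def core_mass(F_nu, D, T, kappa=0.9 * u.cm**2 / u.g, R_gd=100.0,
                  nu=224.6975 * u.GHz):
        """M_core = R_gd * F_nu * D^2 / (kappa * B_nu(T))."""
        return (R_gd * F_nu * D**2 / (kappa * planck_nu(nu, T))).to(u.Msun)

    # SMA1: integrated 1.3 mm flux of 12.2 mJy at D = 5.1 kpc and T = 12 K
    print(core_mass(12.2 * u.mJy, 5.1 * u.kpc, 12 * u.K))   # ~15 Msun

    # Thermal Jeans mass of the clump (T = 12 K, mu = 2.37; m_H ~ m_p):
    rho = (1500 * u.Msun / ((4 / 3) * np.pi * (0.6 * u.pc) ** 3)).cgs
    sigma_th = np.sqrt(const.k_B * 12 * u.K / (2.37 * const.m_p))
    M_J = (np.pi ** 2.5 / 6) * sigma_th ** 3 / np.sqrt(const.G ** 3 * rho)
    print(M_J.to(u.Msun))                                    # ~2.2 Msun

This reproduces the SMA1 mass of ∼15 M_⊙ and the clump thermal Jeans mass of ∼2.2 M_⊙ quoted in this paper.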
Masses, number densities, and surface densities for all cores are listed in Table <ref>.

§.§ Dynamical State of Embedded Cores

The dynamical state of the cores is assessed by determining the virial mass and the virial parameter in order to compare with model predictions. The virial mass was evaluated according to the prescription of <cit.> (neglecting magnetic fields and external pressure):

M_vir = 3 [(5-2n)/(3-n)] Rσ^2/G ,

where R is the radius of the core, σ is the velocity dispersion along the line of sight, G is the gravitational constant, and n is a constant whose exact value depends on the density profile, ρ(r), as a function of the distance from the core center, ρ(r) ∝ r^-n. Equation <ref> can be written in more useful units as

M_vir/M_⊙ = k (R/[pc])(ΔV/[km s^-1])^2 ,

where ΔV is the line width and the value of k depends on the density profile <cit.>. For a uniform density profile, k is equal to 210. However, a uniform density profile is unlikely and yields an upper limit for the virial mass. In fact, <cit.> and <cit.> find, on average, a radial profile index of 1.8 in high-mass star-forming regions. The same average value for the radial profile index is also found in IRDC cores by <cit.>. A density profile with n = 1.8, resulting in k = 147, will be used in this work. The virial parameter (α) was determined by taking the ratio between Equation <ref> and Equation <ref>: α = M_vir/M_core. The calculated M_vir and α are listed in Table <ref>.

Table <ref>: Measured Properties of the SMA Cores

Core   R^a     M_core   M_core/M_J^b   n(H_2)            Σ           ΔV_dec      M_vir^c       α
Name   (pc)    (M_⊙)                   (×10^6 cm^-3)     (g cm^-2)   (km s^-1)   (M_⊙)
SMA1   0.030   15       6.8            1.9               1.1         0.91        3.7 ± 1.4     0.25 ± 0.15
SMA2   0.019   12       ...            6.0               2.2         0.55        0.85 ± 0.70   0.07 ± 0.06
SMA3   0.024   8.5      3.9            2.1               1.0         0.63        1.4 ± 1.3     0.2 ± 0.2
SMA4   0.043   11       5.0            0.5               0.40        0.85        4.6 ± 1.5     0.41 ± 0.23
SMA5   0.027   8.4      3.8            1.5               0.78        0.86        2.9 ± 1.3     0.35 ± 0.22

Notes. Uncertainties for the radius (R), core mass (M_core), volume density (n(H_2)), and surface density (Σ) are 10%, 49%, 48%, and 47%, respectively (see discussion in Section <ref>). Uncertainties for ΔV_dec are given in Table <ref>. M_vir and α correspond to the virial mass and virial parameter, respectively.
a: This radius (R) corresponds to half of the deconvolved size quoted in Table <ref>. For SMA4, the radius is half of the synthesized beam.
b: The Jeans mass of the clump MM1 is 2.2 M_⊙ (see Section <ref>). SMA2 is not embedded in MM1.
c: Virial mass estimated by assuming a density distribution ∝ r^-1.8.
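The virial masses and parameters in Table <ref> can be cross-checked with a short script (a sketch under the k(n) prescription above; function names are ours):

    def virial_mass(R_pc, dV_kms, n=1.8):
        """M_vir = 3 (5-2n)/(3-n) R sigma^2 / G, expressed as k * R * dV^2
        with R in pc, dV (FWHM) in km/s, and the result in Msun.
        k = 147 for n = 1.8; k = 210 for a uniform profile (n = 0)."""
        k = 210.0 * ((5 - 2 * n) / (3 - n)) / (5.0 / 3.0)  # rescale the n=0 value
        return k * R_pc * dV_kms**2

    # (R [pc], M_core [Msun], dV_dec [km/s]) from the tables above
    cores = {"SMA1": (0.030, 15.0, 0.91), "SMA2": (0.019, 12.0, 0.55),
             "SMA3": (0.024, 8.5, 0.63), "SMA4": (0.043, 11.0, 0.85),
             "SMA5": (0.027, 8.4, 0.86)}

    for name, (R, M_core, dV) in cores.items():
        M_vir = virial_mass(R, dV)
        print(f"{name}: M_vir = {M_vir:.1f} Msun, alpha = {M_vir / M_core:.2f}")

For SMA1, for example, this returns M_vir ≈ 3.7 M_⊙ and α ≈ 0.25, matching Table <ref>.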
§.§ Uncertainties in the Determination of Physical Parameters

There are several sources of uncertainty in the mass determination of star-forming regions. This section discusses the uncertainties associated with the parameters involved in the mass determination.

The difficulty of characterizing interstellar dust makes κ_ν the least well known parameter in determining the core mass. In the literature, the models of <cit.> are broadly used and have been favored in multi-wavelength observations of star-forming regions <cit.>. The value of κ_ν used in this work (0.9 cm^2 g^-1) corresponds to the so-called OH5 grains, which are covered by a thin layer of ice mantle and coagulated at gas densities of 10^6 cm^-3 <cit.>. The value of κ_ν ranges from 0.7 to 1.05 if the values for thick ice layers and densities of 10^5 and 10^7 cm^-3 are used instead. Assuming that this range of values is uniformly distributed between 0.7 and 1.05, the standard deviation can be determined by taking the size of the range (1.05-0.7 = 0.35) divided by √12; the value obtained is 0.1. Then, assuming a 1-σ uncertainty, κ_ν would be 0.9 ± 0.1 (11% uncertainty). <cit.> constrain theoretical models of dust opacity at 450 and 850 μm. The OH5 model is one of three supported by the observations. <cit.> determine several values for κ_450μm and κ_850μm. Assuming that the range of values they obtain is uniformly distributed between the extreme values, the standard deviation is 1.8 (28% of κ_450μm = 6.4 from the OH5 model) and 0.34 (19% of κ_850μm = 1.8 from the OH5 model). To be conservative, an overall 1-σ uncertainty of 28% will be adopted for κ_ν = 0.9 at 1.3 mm.

Another difficulty in determining the core mass is the conversion factor that relates the dust mass to the gas mass. The canonical value of the gas-to-dust mass ratio (ℝ) widely used is 100. Depending on the grain size, shape, and composition, determinations of the Galactic gas-to-dust mass ratio range between 70 and 150 <cit.>. In this work, the canonical value of 100 has been adopted. Assuming that the range of ℝ is uniformly distributed between 70 and 150, the standard deviation is 23. Thus, the 1-σ uncertainty for the gas-to-dust mass ratio is 23 (23% of ℝ = 100).

The dust temperature and measured continuum flux have uncertainties of 17% (see Section <ref>) and 15% (see Section <ref>), respectively. The major source of error in the kinematic distance method is the assumption of circular motions. Non-circular motions, e.g., cloud-cloud velocity dispersion (random motions), will lead to velocity perturbations of about 5 km s^-1. Using the rotation curve of <cit.>, IRDC G028.23-00.19 is located at 5.1 kpc. Varying the velocity of G028.23-00.19 by ±5 km s^-1, an uncertainty in the distance of 10% is estimated. Another rotation curve places the IRDC at a distance of 4.6 kpc <cit.>. In this work, the rotation curve of <cit.> will be adopted; the distance derived using the rotation curve of <cit.> agrees within the uncertainties.

Due to their poor characterization, κ_ν and ℝ add an "intrinsic" uncertainty of 32% to the mass determination of cores. Depending on how well the flux, distance, and temperature of the sources are determined, the uncertainty in the mass can be even higher than a factor of 2. Because the SMA cores are observed with the same instrument, are at the same distance, and have the same temperature, all the SMA cores have the same mass uncertainty of 49%. Due to the lower dependence on distance, the uncertainties for the volume density and surface density are 48% and 47%, respectively. The uncertainty in the NH_3 line widths, and its effect on the virial mass and the virial parameter, is different for each core. On average, line widths, virial masses, and virial parameters have 30%, 60%, and 75% uncertainties. The uncertainties for each individual core are quoted in Table <ref>.

§ DISCUSSION

§.§ Implications for High-Mass Star Formation in IRDC G028.23-00.19

§.§.§ High-Mass Prestellar Cores

Recent works have focused on determining the masses of prestellar cores embedded in massive cluster-forming clumps in order to test theoretical models <cit.>. However, the definition of a bona fide "prestellar, high-mass core" is rather vague. <cit.> suggest that in order to form an O-type star through the direct collapse of a core, the core should have of the order of 100 M_⊙. <cit.> suggest that prestellar, high-mass cores should have ∼100 Jeans masses. <cit.> simulate the formation of a high-mass star of 9 M_⊙ from a turbulent, virialized core of 100 M_⊙ and 0.1 pc. It seems clear that a prestellar, high-mass core should have several tens of solar masses.
So far, no prestellar cores have been detected with such a large mass. Indeed, follow-up observations of many suggested high-mass starless core candidates have revealed molecular outflows or maser emission, irrefutable signs of star formation <cit.>. To be conservative, in this work we define a high-mass core as a core with a mass larger than ∼30 M_⊙. This definition is consistent with the star formation efficiency of 30% derived by <cit.> in the Pipe dark cloud <cit.>, assuming that the initial mass function is a direct product of the core mass function, as stated in the turbulent core accretion model, e.g., <cit.>. Interestingly, the prestellar core candidate MM2 embedded in the active high-mass star-forming region G11.92-0.61 <cit.> satisfies this condition and stands out as a good candidate for a high-mass prestellar core. We note that our definition of a high-mass core does not consider that ∼80% of high-mass stars are found in binary systems <cit.> and that the majority of these systems contain pairs of similar mass.

The cores found in IRDC G028.23-00.19 have gas masses ranging from 8 to 15 M_⊙. We therefore find no high-mass cores that currently have the mass reservoir to form a high-mass star, in disagreement with the core accretion model.

§.§.§ Fragmentation

Dust and gas in IRDC G028.23-00.19 MM1 appear to be cold and quiescent. <cit.> find that massive clumps are more susceptible to gravitational instabilities and evolve faster than low-mass clumps, based on the low t_ff/t_dyn ratio. The low virial parameter and low t_ff/t_dyn ratio indicate that MM1 has started gravitational contraction and is not a transient object. Indeed, at high angular resolution, the first members of the future stellar cluster are revealed through dust continuum emission. At the sensitivity observed with the SMA, no molecular outflows are detected, and the cores embedded in this massive clump seem to be in the prestellar phase. Considering that the current observational evidence supports the idea that the IRDC will form high-mass stars, the lack of high-mass prestellar cores (>30 M_⊙) has important implications for the formation of high-mass stars.

<cit.> suggest that the heating produced by accreting low-mass stars in regions with surface densities ≥1 g cm^-2 can halt fragmentation by increasing the Jeans mass. Although surface densities of the order of 1 g cm^-2 are found at core scales in IRDC G028.23-00.19, only low temperatures are measured and there are no signs of active star formation. A similar conclusion was also reported by <cit.> and <cit.>, who studied a massive clump in IRDC G28.34+0.06 and found low gas temperatures of ∼14 K toward dense cores. In order to have a Jeans mass of 30 M_⊙ in IRDC G028.23-00.19 MM1, a temperature of 70 K would be needed.
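This temperature follows directly from the scaling of the Jeans mass with temperature at fixed density (a short check using the values derived above):

M_J ∝ σ_th^3/√ρ ∝ T^(3/2)  ⟹  T = 12 K × (30 M_⊙/2.2 M_⊙)^(2/3) ≈ 70 K .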
We note that heating from protostars seems unimportant even in G11.92-0.61, the high-mass star-forming region hosting a high-mass prestellar core candidate with a hot core nearby <cit.>; the measured temperature in the high-mass prestellar core candidate is 17-19 K. Magnetic fields have also been suggested as important in suppressing fragmentation <cit.>. However, to date there are no measurements of the magnetic field in high-mass prestellar clump candidates, including IRDC G028.23-00.19.

The Jeans mass in IRDC G028.23-00.19 MM1 is 2.2 M_⊙. The observed SMA cores are 4–7 times more massive. The observational fact that the core masses are larger than the Jeans mass is inconsistent with competitive accretion models, unless (i) we are not witnessing the initial fragmentation of the clump and the initial Jeans cores have had sufficient time to accrete and grow to reach their current masses, or (ii) these cores could still fragment into smaller objects if higher angular resolution observations were available. We note that the SMA observations are not sufficiently sensitive (5σ = 4.5 M_⊙) to detect Jeans cores, and we cannot exclude the possibility of a core population with lower masses separated by the Jeans length (λ_J). The Jeans length in MM1 is 0.14 pc, and the separation among the SMA cores is larger than 2λ_J. With the largest core mass being 15 M_⊙, our observations disagree with the predictions of the turbulent core accretion model: high-mass prestellar cores are not found.

The turbulent Jeans mass in IRDC G028.23-00.19 MM1 (130 M_⊙) is much larger than the core masses (9 times larger than the most massive core). Therefore, contrary to other slightly more evolved IRDCs <cit.>, turbulence-supported fragmentation does not seem to be the dominant process controlling the early stages of high-mass star and cluster formation. Both the thermal Jeans mass and the turbulent Jeans mass may be too simplistic descriptions to explain the fragmentation of massive clumps. A larger sample will be key to confirming whether this is a general trend at the very early stages of high-mass star formation, or whether IRDC G028.23-00.19 is a unique case in which neither thermal pressure nor turbulent pressure dominates the fragmentation of a massive cluster-forming clump.

§.§.§ Turbulence

The importance of turbulence can be further investigated by calculating the Mach number. As asserted by <cit.>, one of the most important premises of the turbulent core accretion model is that cores that will form high-mass stars are highly supersonically turbulent, leading to virial equilibrium (α ∼ 1). NH_3 lines have narrow line widths in IRDC G028.23-00.19 (≲1.0 km s^-1). With Mach numbers (ΔV_nt/ΔV_th) of ∼1.1-1.8, the total gas is transonic to mildly supersonic. Although the gas may be slightly affected by turbulence, it is not highly supersonic (Mach numbers >5) as suggested by <cit.>, <cit.>, and <cit.>. If optical depth is taken into consideration for the outer left NH_3 hyperfine, ΔV_int yields an even lower non-thermal component that would be just 0.7-1.3 times the thermal line width (subsonic to transonic).

The simple analytic models developed by <cit.> describe how a protostar gains mass from the collapse of a thermally supported core and from accretion of a turbulent clump. These models, based on statistical arguments, have a combination of "core-fed" and "clump-fed" components, which represent isothermal collapse and reduced Bondi accretion. The duration of the accretion is more important than the initial core mass in setting the final mass of stars. <cit.> suggest that the cores and protostars that will become high-mass stars at the end of cluster formation are born earlier than their low-mass counterparts from low-mass thermal cores. Stars become massive after accreting both thermal core gas and turbulent clump gas. The relatively low Mach numbers in the SMA cores in IRDC G028.23-00.19 may hint at some similarity to the work of <cit.>. <cit.> suggests that by the time high-mass stars are identified, the core gas is turbulent because of (i) its clump origin and (ii) the energy that the star itself injects into the core (by heating, winds, and/or ionization).
The SMA cores may have already accreted a substantial amount of material from the clump, increasing their mass and turbulence, in agreement with <cit.>. However, we have no concrete evidence to support or refute that this scenario is occurring in IRDC G028.23-00.19.

§.§.§ Dynamical State of the Cores

The low turbulence level strongly affects the dynamics of the cores. Turbulent gas motions, as well as magnetic fields, can provide additional support against self-gravity. Considering only turbulence, all SMA cores consistently show α < 0.5 and are hence subvirial. These values suggest that, in the absence of magnetic fields, the SMA cores are strongly subvirial and collapsing simultaneously with the whole clump, which has α = 0.3. This scenario of global collapse dominated by subvirialized structures is consistent with some models of competitive accretion <cit.>, and inconsistent with the turbulent core accretion model <cit.>. According to competitive accretion scenarios, the low- to intermediate-mass cores in IRDC G028.23-00.19 could grow in mass by accreting gas from a reservoir of material in the molecular cloud that may not be bound to any core. SMA1 and SMA4 would be the primary candidates to form high-mass stars in the future due to their positions inside the clump, apparently near the center of the gravitational potential. On the other hand, the assumptions initially made in the turbulent core accretion model may not be applicable and may need to be reconsidered to better represent the observations.

However, so far in the discussion, magnetic fields have been ignored, and they can add additional support against collapse. Recently, <cit.> obtained dust polarization information toward a sample of 14 massive star-forming regions. They found that magnetic fields in dense cores tend to follow the field orientation of their parental clumps. Therefore, they suggest that the magnetic field plays an important role in the fragmentation of clumps and the formation of dense cores. If magnetic fields are included in the virial equation, the following expression holds:

M_B,vir = 3 (R/G) [(5-2n)/(3-n)] (σ^2 + σ_A^2/6) ,

where σ_A is the Alfvén velocity and n depends on the density profile (as in Equation <ref>). The Alfvén velocity can be determined from

σ_A = B/√(4πρ) ,

where B is the magnitude of the magnetic field and ρ is the mass density of each core, ρ = μ_H_2 m_H n(H_2). To maintain virial equilibrium including magnetic fields (M_B,vir/M_core = 1), field strengths of 1.7, 1.6, 0.56, and 1.2 mG are needed for the dust cores embedded in MM1 (SMA1, SMA3, SMA4, and SMA5, respectively). If a uniform density is assumed (n = 0 instead of n = 1.8), the magnetic field magnitudes are ∼25% lower. On average, magnetic fields of ∼1-2 mG would be needed to maintain the SMA cores in virial equilibrium.

Magnetic fields of these strengths are apparently consistent with observations of more evolved high-mass cores. <cit.> suggest that at densities of ∼10^6 cm^-3 the most probable maximum strength of the magnetic field is ∼1 mG. <cit.> indeed measure the magnitude of the magnetic field toward the high-mass star-forming core DR21(OH) and determine a value of 2.1 mG at a density of 10^7 cm^-3. Although it seems possible to obtain magnetic field magnitudes of ∼1 mG in high-mass star-forming cores <cit.>, so far all estimates of field strengths are for cores with current evidence of star formation. No measurements have been made in prestellar sources.
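For reference, the field strengths quoted above follow from inverting the magnetized virial equation for B. A minimal cgs sketch (function names are ours; core values from Table <ref>):

    import numpy as np

    G = 6.674e-8        # cm^3 g^-1 s^-2
    PC = 3.086e18       # cm
    MSUN = 1.989e33     # g
    MH = 1.6726e-24     # g
    FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

    def b_virial(R_pc, M_msun, dV_kms, nH2_cm3, n=1.8, mu_H2=2.8):
        """B [mG] such that M_B,vir = 3 (R/G) (5-2n)/(3-n) (sigma^2 + sigma_A^2/6)
        equals the core mass M, with sigma_A = B / sqrt(4 pi rho)."""
        R = R_pc * PC
        M = M_msun * MSUN
        sigma = dV_kms * 1e5 * FWHM2SIG          # FWHM -> dispersion [cm/s]
        rho = mu_H2 * MH * nH2_cm3               # g cm^-3
        a = 3.0 * (5 - 2 * n) / (3 - n)
        sigma_A2 = 6.0 * (M * G / (a * R) - sigma**2)   # from M_B,vir = M
        if sigma_A2 <= 0:
            return 0.0                           # virialized without a field
        return np.sqrt(sigma_A2 * 4 * np.pi * rho) * 1e3   # G -> mG

    # (R [pc], M_core [Msun], dV_dec [km/s], n(H2) [cm^-3]) for the MM1 cores
    for name, pars in {"SMA1": (0.030, 15.0, 0.91, 1.9e6),
                       "SMA3": (0.024, 8.5, 0.63, 2.1e6),
                       "SMA4": (0.043, 11.0, 0.85, 0.5e6),
                       "SMA5": (0.027, 8.4, 0.86, 1.5e6)}.items():
        print(f"{name}: B ~ {b_virial(*pars):.2f} mG")

This returns ∼1.8, 1.6, 0.57, and 1.1 mG, consistent with the values quoted above.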
Therefore, although the non-magnetized version of the turbulent core accretion model is not consistent with the observed properties of the SMA cores in IRDC G028.23-00.19, which are candidates to form high-mass stars, the magnetized picture still needs to be tested. If observations indeed prove that the magnetized picture is feasible, it would imply that (i) star formation efficiencies much larger than 30% would be needed to form high-mass stars in IRDC G028.23-00.19 <cit.>, or (ii) IRDC G028.23-00.19 may never form high-mass stars, which would create a new puzzle, and it would be necessary to understand why.

The largest possible mass that can be supported by a magnetic field is given by <cit.>

M_B = 16.2 M_⊙ (R_e/Z_e)^2 (n(H_2)/10^6 cm^-3)^-2 (B/mG)^3 ,

where 2Z_e is the length of the symmetry axis and R_e is the radius normal to the axis of an ellipsoidal core. Assuming a spherical core (Z_e = R_e) and setting M_B equal to the mass of SMA1 (15 M_⊙), we find the minimum magnetic field (1.5 mG) that would suppress fragmentation. This field strength is practically the same as the strength necessary to maintain the SMA1 core in virial equilibrium (as is also the case for the other cores embedded in the clump MM1). If lower magnetic field strengths are eventually measured, the cores may be prone to fragment.

§ CONCLUSIONS

We have imaged the IRDC G028.23-00.19 with the SMA (∼3.5″ at 224 GHz) and the JVLA (∼2.1″ at 23 GHz). This IRDC hosts a massive (1,500 M_⊙), cold (12 K), and IR dark (at Spitzer 3.6, 4.5, 8.0, and 24 μm and at Herschel 70 μm) clump, which is one of the most massive quiescent clumps known (MM1). After examining the dust continuum and the spectral line emission, we draw the following conclusions:

1. Using the SMA dust continuum emission, 5 dense cores are detected: 4 of them are embedded in MM1 (SMA1, SMA3, SMA4, and SMA5) and one is located in the northern part of the IRDC (SMA2). There are no cores with a mass larger than 15 M_⊙. The lack of high-mass prestellar cores is in disagreement with the turbulent core accretion model. In order to form a high-mass star, the SMA1 core needs at least to double its mass while the central star accretes material. The idea that high-mass stars can form without passing through a high-mass stage in the prestellar phase is consistent with competitive accretion scenarios <cit.>.

2. The Jeans mass (2.2 M_⊙) is 4 to 7 times smaller than the core masses. This is in disagreement with the prediction of competitive accretion models that clumps fragment into objects with masses similar to the Jeans mass, unless the SMA cores have had sufficient time to accrete and significantly increase their mass.

3. Neither CO wing emission nor SiO emission indicative of molecular outflows was detected, confirming that, at the sensitivity of these observations, the SMA cores are starless. To our knowledge, IRDC G028.23-00.19 MM1 is the most massive cold clump that maintains the status of "prestellar candidate" after interferometric observations.

4. At core scales, the NH_3 line widths have some contribution from turbulence, with Mach numbers ranging from 1.1 to 1.8. The gas in the SMA cores is not highly supersonic as the turbulent core accretion model suggests.

5. By comparing the thermal and turbulent Jeans masses with the SMA core masses, we find that the global fragmentation of the clump MM1 is dominated by neither thermal nor turbulent pressure.
Unless magnetic field strengths are about 1-2 mG, the cores are strongly subvirialized (α < 0.5). The SMA cores are significantly below equilibrium and likely undergoing fast collapse, which is consistent with cores that can grow in mass.

7. We finally conclude that in IRDC G028.23-00.19 we are witnessing the initial fragmentation of a massive, prestellar clump that will form high-mass stars. Whether the properties observed in IRDC G028.23-00.19 are unique or typical of the very early stages of high-mass star formation needs to be confirmed with a larger, well-defined sample.

P.S. gratefully acknowledges Jonathan B. Foster, Satoshi Ohashi, and Fumitaka Nakamura for helpful discussions. P.S. thanks the anonymous referee for their comments. A.E.G. thanks FONDECYT No. 3150570. K.W. is supported by grant WA3628-1/1 of the German Research Foundation (DFG) through the priority program 1573 (“Physics of the Interstellar Medium”). Data analysis was in part carried out on the open use data analysis computer system at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan.

Facilities: SMA, JVLA. Software: IDL, MIR, CASA.

§ DERIVATION OF THE MAXIMUM STELLAR MASS USING THE IMF

Adopting the IMF from <cit.>, we have

ξ(m) ∝ m^-1.3 for 0.08 ≤ m < 0.5, and ξ(m) ∝ m^-2.3 for m ≥ 0.5 ,

where m corresponds to the star's mass and ξ(m) dm is the number of stars in the mass interval m to m + dm. Assuming a range of stellar masses between 0.08 and 150 , we can impose the total number of stars with m ≥ m_max to be unity (in order to ensure the formation of one high-mass star, the lowest value for m_max should be 8 ),

1 = ∫_m_max^150 ξ(m) dm .

The total mass in a stellar cluster, M_cluster, is given by

M_cluster = ∫_0.08^150 ξ(m) m dm .

Combining equations <ref> and <ref>,

M_cluster = ∫_0.08^150 ξ(m) m dm / ∫_m_max^150 ξ(m) dm ,

and assuming a star formation efficiency, ϵ_sfe, of 30% (M_cluster = 0.3 × M_clump), we can relate m_max with the clump mass as

m_max = [(0.3/ϵ_sfe)(17.3/M_clump) + 1.5×10^-3]^-0.77 .

To estimate the necessary mass in a clump to form a high-mass star, we can use the following relationship:

M_clump = (0.3/ϵ_sfe) 17.3/(m_max^-1.3 - 1.5×10^-3) .

Making m_max = 8 we obtain 260 .

[Alves et al.(2007)]Alves07 Alves, J., Lombardi, M., & Lada, C. J. 2007, , 462, L17
[Avison et al.(2015)]Avison15 Avison, A., Peretto, N., Fuller, G. A., et al. 2015, , 577, A30
[Battersby et al.(2010)]Battersby10 Battersby, C., Bally, J., Jackson, J. M., et al. 2010, , 721, 222
[Beltrán et al.(2005)]Beltran05 Beltrán, M. T., Cesaroni, R., Neri, R., et al. 2005, , 435, 901
[Beuther et al.(2013)]Beuther13 Beuther, H., Linz, H., Tackenberg, J., et al. 2013, , 553, A115
[Benjamin et al.(2003)]Benjamin03 Benjamin, R. A., Churchwell, E., Babler, B. L., et al. 2003, , 115, 953
[Bertoldi & McKee(1992)]Bertoldi92 Bertoldi, F., & McKee, C. F. 1992, , 395, 140
[Bondi(1952)]Bondi52 Bondi, H. 1952, , 112, 195
[Bonnell & Bate(2006)]Bonnell06 Bonnell, I. A., & Bate, M. R. 2006, , 370, 488
[Bonnell et al.(2004)]Bonnell04 Bonnell, I. A., Vine, S. G., & Bate, M. R. 2004, , 349, 735
[Bonnell et al.(2003)]Bonnell03 Bonnell, I. A., Bate, M. R., & Vine, S. G. 2003, , 343, 413
[Bonnell et al.(2001)]Bonnell01 Bonnell, I. A., Bate, M. R., Clarke, C. J., & Pringle, J. E. 2001, , 323, 785
[Bontemps et al.(2010)]Bontemps10 Bontemps, S., Motte, F., Csengeri, T., & Schneider, N. 2010, , 524, A18
[Carey et al.(2009)]Carey09 Carey, S. J., Noriega-Crespo, A., Mizuno, D. R., et al. 2009, , 121, 76
[Chambers et al.(2009)]Chambers09 Chambers, E. T., Jackson, J. M., Rathborne, J. M., & Simon, R.
2009, , 181, 360
[Chini et al.(2012)]Chini12 Chini, R., Hoffmeister, V. H., Nasseri, A., Stahl, O., & Zinnecker, H. 2012, , 424, 1925
[Clemens(1985)]Clemens85 Clemens, D. P. 1985, , 295, 422
[Commerçon et al.(2011)]Commercon11 Commerçon, B., Hennebelle, P., & Henning, T. 2011, , 742, L9
[Contreras et al.(2017)]Contreras17 Contreras, Y., Rathborne, J. M., Guzman, A., et al. 2017, , 466, 340
[Contreras et al.(2016)]Contreras16 Contreras, Y., Garay, G., Rathborne, J. M., & Sanhueza, P. 2016, , 456, 2041
[Crutcher et al.(2010)]Crutcher10 Crutcher, R. M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T. H. 2010, , 725, 466
[Cyganowski et al.(2017)]Cyganowski17 Cyganowski, C. J., Brogan, C. L., Hunter, T. R., et al. 2017, arXiv:1701.02802
[Cyganowski et al.(2014)]Cyganowski14 Cyganowski, C. J., Brogan, C. L., Hunter, T. R., et al. 2014, , 796, L2
[Devereux & Young(1990)]Devereux90 Devereux, N. A., & Young, J. S. 1990, , 359, 42
[Dirienzo et al.(2015)]Dirienzo15 Dirienzo, W. J., Brogan, C., Indebetouw, R., et al. 2015, , 150, 159
[Duarte-Cabral et al.(2013)]Duarte13 Duarte-Cabral, A., Bontemps, S., Motte, F., et al. 2013, , 558, A125
[Egan et al.(1998)]Egan98 Egan, M. P., Shipman, R. F., Price, S. D., et al. 1998, , 494, L199
[Feng et al.(2016a)]Feng16a Feng, S., Beuther, H., Zhang, Q., et al. 2016a, , 592, A21
[Feng et al.(2016b)]Feng16b Feng, S., Beuther, H., Zhang, Q., et al. 2016b, , 828, 100
[Foster et al.(2014)]Foster14 Foster, J. B., Arce, H. G., Kassis, M., et al. 2014, , 791, 108
[Foster et al.(2011)]Foster11 Foster, J. B., Jackson, J. M., Barnes, P. J., et al. 2011, , 197, 25
[Garay et al.(2007)]Garay07 Garay, G., Mardones, D., Brooks, K. J., Videla, L., & Contreras, Y. 2007, , 666, 309
[Girart et al.(2013)]Girart13 Girart, J. M., Frau, P., Zhang, Q., et al. 2013, , 772, 69
[Girart et al.(2009)]Girart09 Girart, J. M., Beltrán, M. T., Zhang, Q., Rao, R., & Estalella, R. 2009, Science, 324, 1408
[Guzmán et al.(2015)]Guzman15 Guzmán, A. E., Sanhueza, P., Contreras, Y., et al. 2015, , 815, 130
[He et al.(2015)]He15 He, Y.-X., Zhou, J.-J., Esimbek, J., et al. 2015, , 450, 1926
[Henshaw et al.(2016)]Henshaw16 Henshaw, J. D., Caselli, P., Fontani, F., et al. 2016, , 463, 146
[Henshaw et al.(2014)]Henshaw14 Henshaw, J. D., Caselli, P., Fontani, F., Jiménez-Serra, I., & Tan, J. C. 2014, , 440, 2860
[Ho & Townes(1983)]Ho83 Ho, P. T. P., & Townes, C. H. 1983, , 21, 239
[Hoq et al.(2017)]Hoq17 Hoq, S., Clemens, D. P., Guzmán, A. E., & Cashman, L. R. 2017, , 836, 199
[Hoq et al.(2013)]Hoq13 Hoq, S., Jackson, J. M., Foster, J. B., et al. 2013, , 777, 157
[Jackson et al.(2013)]Jackson13 Jackson, J. M., Rathborne, J. M., Foster, J. B., et al. 2013, , 30, e057
[Kauffmann et al.(2013)]Kauffmann13 Kauffmann, J., Pillai, T., & Goldsmith, P. F. 2013, , 779, 185
[Kauffmann & Pillai(2010)]Kauffmann10 Kauffmann, J., & Pillai, T. 2010, , 723, L7
[Kauffmann et al.(2008)]Kauffmann08 Kauffmann, J., Bertoldi, F., Bourke, T. L., Evans, N. J., II, & Lee, C. W. 2008, , 487, 993
[Kim et al.(2010)]Kim10 Kim, G., Lee, C. W., Kim, J., et al. 2010, Journal of Korean Astronomical Society, 43, 9
[Kong et al.(2016)]Kong16 Kong, S., Tan, J. C., Caselli, P., et al. 2016, , 821, 94
[Kouwenhoven et al.(2005)]Kouwenhoven05 Kouwenhoven, M. B. N., Brown, A. G. A., Zinnecker, H., Kaper, L., & Portegies Zwart, S. F. 2005, , 430, 137
[Kroupa(2001)]Kroupa01 Kroupa, P. 2001, , 322, 231
[Krumholz & McKee(2008)]Krumholz08 Krumholz, M. R., & McKee, C. F. 2008, , 451, 1082
[Krumholz et al.(2007b)]Krumholz07b Krumholz, M. R., Klein, R. I., & McKee, C. F.
2007, , 665, 478
[Krumholz et al.(2007a)]Krumholz07a Krumholz, M. R., Klein, R. I., & McKee, C. F. 2007, , 656, 959
[Krumholz et al.(2006)]Krumholz06 Krumholz, M. R., McKee, C. F., & Klein, R. I. 2006, , 638, 369
[Krumholz et al.(2005)]Krumholz05 Krumholz, M. R., McKee, C. F., & Klein, R. I. 2005, , 438, 332
[Lada & Lada(2003)]Lada03 Lada, C. J., & Lada, E. A. 2003, , 41, 57
[Larson(2003)]Larson03 Larson, R. B. 2003, Galactic Star Formation Across the Stellar Mass Spectrum, 287, 65
[Li et al.(2015)]Li15 Li, H.-B., Yuen, K. H., Otto, F., et al. 2015, , 520, 518
[Liu et al.(2014)]Liu14 Liu, X.-L., Wang, J.-J., & Xu, J.-L. 2014, , 443, 2264
[Longmore et al.(2011)]Longmore11 Longmore, S. N., Pillai, T., Keto, E., Zhang, Q., & Qiu, K. 2011, , 726, 97
[López-Sepulcre et al.(2010)]Lopez10 López-Sepulcre, A., Cesaroni, R., & Walmsley, C. M. 2010, , 517, A66
[Lu et al.(2015)]Lu15 Lu, X., Zhang, Q., Wang, K., & Gu, Q. 2015, , 805, 171
[MacLaren et al.(1988)]MacLaren88 MacLaren, I., Richardson, K. M., & Wolfendale, A. W. 1988, , 333, 821
[Matzner & McKee(2000)]Matzner00 Matzner, C. D., & McKee, C. F. 2000, , 545, 364
[McKee & Tan(2003)]McKee03 McKee, C. F., & Tan, J. C. 2003, , 585, 850
[Miettinen(2014)]Miettinen14 Miettinen, O. 2014, , 562, A3
[Molinari et al.(2010)]Molinari10 Molinari, S., Swinyard, B., Bally, J., et al. 2010, , 518, L100
[Mueller et al.(2002)]Muller02 Mueller, K. E., Shirley, Y. L., Evans, N. J., II, & Jacobson, H. R. 2002, , 143, 469
[Myers et al.(2013)]Myers13 Myers, A. T., McKee, C. F., Cunningham, A. J., Klein, R. I., & Krumholz, M. R. 2013, , 766, 97
[Myers(2014)]Myers14 Myers, P. C. 2014, , 781, 33
[Myers(2011)]Myers11 Myers, P. C. 2011, , 743, 98
[Ohashi et al.(2016)]Ohashi16 Ohashi, S., Sanhueza, P., Chen, H.-R. V., et al. 2016, , 833, 209
[Ossenkopf & Henning(1994)]Ossenkopf94 Ossenkopf, V., & Henning, T. 1994, , 291, 943
[Perault et al.(1996)]Perault96 Perault, M., Omont, A., Simon, G., et al. 1996, , 315, L165
[Peretto & Fuller(2009)]Peretto09 Peretto, N., & Fuller, G. A. 2009, , 505, 405
[Pillai et al.(2011)]Pillai11 Pillai, T., Kauffmann, J., Wyrowski, F., et al. 2011, , 530, A118
[Pillai et al.(2006)]Pillai06 Pillai, T., Wyrowski, F., Menten, K. M., & Krügel, E. 2006, , 447, 929
[Qiu et al.(2014)]Qiu14 Qiu, K., Zhang, Q., Menten, K. M., et al. 2014, , 794, L18
[Ragan et al.(2015)]Ragan15 Ragan, S. E., Henning, T., Beuther, H., Linz, H., & Zahorecz, S. 2015, , 573, A119
[Rathborne et al.(2016)]Rathborne16 Rathborne, J. M., Whitaker, J. S., Jackson, J. M., et al. 2016, , 33, e030
[Rathborne et al.(2010)]Rathborne10 Rathborne, J. M., Jackson, J. M., Chambers, E. T., Stojimirovic, I., Simon, R., Shipman, R., & Frieswijk, W. 2010, , 715, 310
[Rathborne et al.(2008)]Rathborne08 Rathborne, J. M., Jackson, J. M., Zhang, Q., & Simon, R. 2008, , 689, 1141
[Reid et al.(2009)]Reid09 Reid, M. J., Menten, K. M., Zheng, X. W., et al. 2009, , 700, 137
[Rosero et al.(2016)]Rosero16 Rosero, V., Hofner, P., Claussen, M., et al. 2016, , 227, 25
[Rosero et al.(2014)]Rosero14 Rosero, V., Hofner, P., McCoy, M., et al. 2014, , 796, 130
[Rydbeck et al.(1977)]Rydbeck77 Rydbeck, O. E. H., Sume, A., Hjalmarson, A., et al. 1977, , 215, L35
[Sakai et al.(2015)]Sakai15 Sakai, T., Sakai, N., Furuya, K., et al. 2015, , 803, 70
[Sakai et al.(2013)]Sakai13 Sakai, T., Sakai, N., Foster, J. B., et al. 2013, , 775, L31
[Sakai et al.(2012)]Sakai12 Sakai, T., Sakai, N., Furuya, K., et al. 2012, , 747, 140
[Sakai et al.(2008)]Sakai08 Sakai, T., Sakai, N., Kamegai, K., et al.
2008, , 678, 1049-1069
[Sanhueza et al.(2013)]Sanhueza13 Sanhueza, P., Jackson, J. M., Foster, J. B., et al. 2013, , 773, 123
[Sanhueza et al.(2012)]Sanhueza12 Sanhueza, P., Jackson, J. M., Foster, J. B., et al. 2012, , 756, 60
[Sanhueza et al.(2010)]Sanhueza10 Sanhueza, P., Garay, G., Bronfman, L., et al. 2010, , 715, 18
[Shipman et al.(2014)]Shipman14 Shipman, R. F., van der Tak, F. F. S., Wyrowski, F., Herpin, F., & Frieswijk, W. 2014, , 570, A51
[Shirley et al.(2013)]Shirley13 Shirley, Y. L., Ellsworth-Bowers, T. P., Svoboda, B., et al. 2013, , 209, 2
[Shirley et al.(2011)]Shirley11 Shirley, Y. L., Huard, T. L., Pontoppidan, K. M., et al. 2011, , 728, 143
[Shu(1977)]Shu77 Shu, F. H. 1977, , 214, 488
[Simon et al.(2006)]Simon06 Simon, R., Jackson, J. M., Rathborne, J. M., & Chambers, E. T. 2006, , 639, 227
[Smith et al.(2009)]Smith09 Smith, R. J., Longmore, S., & Bonnell, I. 2009, , 400, 1775
[Tan et al.(2016)]Tan16 Tan, J. C., Kong, S., Zhang, Y., et al. 2016, , 821, L3
[Tan et al.(2014)]Tan14 Tan, J. C., Beltrán, M. T., Caselli, P., et al. 2014, Protostars and Planets VI, 149
[Tan et al.(2013)]Tan13 Tan, J. C., Kong, S., Butler, M. J., Caselli, P., & Fontani, F. 2013, , 779, 96
[Traficante et al.(2015)]Traficante15 Traficante, A., Fuller, G. A., Peretto, N., Pineda, J. E., & Molinari, S. 2015, , 451, 3089
[Urquhart et al.(2014)]Urquhart14 Urquhart, J. S., Moore, T. J. T., Csengeri, T., et al. 2014, , 443, 1555
[Vasyunina et al.(2014)]Vasyunina14 Vasyunina, T., Vasyunin, A. I., Herbst, E., et al. 2014, , 780, 85
[Vuong et al.(2003)]Vuong03 Vuong, M. H., Montmerle, T., Grosso, N., et al. 2003, , 408, 581
[Wang et al.(2014)]Wang14 Wang, K., Zhang, Q., Testi, L., et al. 2014, , 439, 3275
[Wang et al.(2012)]Wang12 Wang, K., Zhang, Q., Wu, Y., Li, H.-b., & Zhang, H. 2012, , 745, L30
[Wang et al.(2011)]Wang11 Wang, K., Zhang, Q., Wu, Y., & Zhang, H. 2011, , 735, 64
[Wang et al.(2010)]Wang10 Wang, P., Li, Z.-Y., Abel, T., & Nakamura, F. 2010, , 709, 27
[Wang et al.(2006)]Wang06 Wang, Y., Zhang, Q., Rathborne, J. M., Jackson, J., & Wu, Y. 2006, , 651, L125
[Yanagida et al.(2014)]Yanagida14 Yanagida, T., Sakai, T., Hirota, T., et al. 2014, , 794, L10
[Zhang et al.(2015)]Zhang15 Zhang, Q., Wang, K., Lu, X., & Jiménez-Serra, I. 2015, , 804, 141
[Zhang et al.(2014)]Zhang14 Zhang, Q., Qiu, K., Girart, J. M., et al. 2014, , 792, 116
[Zhang et al.(2009)]Zhang09 Zhang, Q., Wang, Y., Pillai, T., & Rathborne, J. 2009, , 696, 268
http://arxiv.org/abs/1704.08264v1
{ "authors": [ "Patricio Sanhueza", "James M. Jackson", "Qizhou Zhang", "Andres E. Guzman", "Xing Lu", "Ian W. Stephens", "Ke Wang", "Ken'ichi Tatematsu" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170426180100", "title": "A Massive Prestellar Clump Hosting no High-Mass Cores" }
Corresponding author email: [email protected]

Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA.
Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA.
Department of Electrical and Computer Engineering and the Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, Maryland 20742-3511, USA.
Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA.
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, 04510, DF, México.
Department of Electrical and Computer Engineering and the Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, Maryland 20742-3511, USA.
Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA.
Joint Quantum Institute, NIST and University of Maryland, Gaithersburg, Maryland 20899, USA
Joint Quantum Institute and Department of Physics, University of Maryland, College Park, MD 20742, USA.

We study the modification of the atomic spontaneous emission rate, i.e. the Purcell effect, of ^87Rb in the vicinity of an optical nanofiber (∼500 nm diameter). We observe enhancement and inhibition of the atomic decay rate depending on the alignment of the induced atomic dipole relative to the nanofiber. Finite-difference time-domain simulations are in quantitative agreement with the measurements when considering the atoms as simple oscillating linear dipoles. This is surprising since the multi-level nature of the atoms should produce a different radiation pattern, predicting a smaller modification of the lifetime than the measured one. This work is a step towards characterizing and controlling atomic properties near optical waveguides, fundamental tools for the development of quantum photonics.

Alignment-dependent decay rate of an atomic dipole near an optical nanofiber
S. L. Rolston
December 30, 2023
============================================================================

§ INTRODUCTION

Neutral atoms coupled to optical waveguides are a growing field of research <cit.>. Atom-waveguide systems enable atom-light interaction for propagating light modes. This makes them promising tools for forthcoming optical technologies in the quantum regime, such as quantum switches <cit.>, diodes <cit.>, transistors <cit.>, and electromagnetically induced transparency and quantum memories <cit.>. In order to further any of these applications it is necessary to understand and control the effects of such waveguides on nearby atoms. Two important features result from having a waveguide with a preferential optical mode: the spatial variation of the electromagnetic field, and the change of its density of modes per unit frequency. One of the key atomic properties affected by both is the spontaneous emission rate <cit.>. Its modification is due to the change in the local vacuum field felt by the atom under the boundary condition imposed by the adjacent object, a phenomenon known as the Purcell effect <cit.>. When the symmetry of the free-space vacuum field is broken in the presence of an object, the alignment of the atomic dipole relative to the object also plays an important role in the atomic lifetime. For a given alignment the atom can couple more strongly (weakly) to the vacuum modes, producing an increase (decrease) of the spontaneous emission rate.
The effect of waveguides on the spontaneous emission of nearby emitters has been a productive field of research <cit.>. Optical nanofiber (ONF) waveguides <cit.> are optical fibers with a diameter smaller than the wavelength of the guided field. Most of the electromagnetic field propagates outside the dielectric body of the ONF (in vacuum) in the form of an evanescent field, and its strong transverse confinement enables interactions with adjacent atoms. The nanofiber is adiabatically connected, through a tapered section, to a conventional single-mode optical fiber, facilitating light coupling and readout. ONFs are up-and-coming platforms for photonic-based quantum technologies due to the coupling efficiency of light, high surface quality at the nanometer scale, and the simplicity and robustness of the fabrication procedure <cit.>. The electromagnetic mode confinement in an ONF provides a large atom-light coupling <cit.>, a feature that has been used for spectroscopy <cit.>, atomic cloud characterizations <cit.>, and atom trapping <cit.>. It also allows the operation and control of memories <cit.> and light reflectors <cit.> at the level of single photons. The presence of three polarization components of the propagating field gives rise to chiral effects and new possibilities for atom-light directional coupling, including optical isolators <cit.>.

The Purcell effect experienced by an emitter near an ONF has been studied in the past <cit.>. However, there are disagreements between predicted values for the decay rates (e.g. Refs. <cit.> and <cit.> differ by approximately 30% for atoms at the ONF surface), without direct experimental evidence that allows one to validate one calculation over the other. Moreover, the possibility of controlling the atomic lifetime in the vicinity of an ONF through the position and alignment of the emitter has not been emphasized or shown experimentally.

We measure the modification of the spontaneous emission decay rate of a ^87Rb atom placed near an ONF, in the time domain, for different alignments of the induced atomic dipole, showing that the atomic lifetime can increase or decrease by properly preparing the atom. We present a theoretical description of the system, and perform both finite-difference time-domain (FDTD) and electromagnetic mode-expansion calculations of the modification of the atomic decay rate. The FDTD numerical calculations considering a simple two-level atom show quantitative agreement with our experimental result. However, given the multi-level structure of the atoms, their radiation patterns should differ from that of a linear dipole. The more isotropic pattern of our multilevel atom raises a puzzling question about the interpretation of the measured effects. Nonetheless, this study offers insight into the possibility of controlling atomic properties near surfaces for photonics, quantum optics, and quantum information applications.

This paper is organized as follows: Sec. <ref> explains the platform under study. The details of the experimental apparatus and the measurement procedure are in Sec. <ref>, and the results are discussed in Sec. <ref>. We present numerical calculations for the atomic decay rate under the experimental conditions in Sec. <ref> and a theoretical modeling of the system in Sec. <ref>. We comment on the role of the multilevel structure of real atoms in our experiment in Sec. <ref>. Sec. <ref> presents a quantitative comparison of the results to numerical simulations. Finally, we discuss the implications of this result in Sec.
<ref>, and conclude in Sec. <ref>.

§ DESCRIPTION OF THE EXPERIMENT

We consider an ONF that only allows the propagation of the fundamental mode HE_11. Excited atoms that are close to the nanofiber can spontaneously emit not only into free space, but also into the ONF mode, as sketched in Fig. <ref> (a). Our goal is to measure the modified spontaneous emission rate γ of an atom placed near it, compared to the free-space decay rate γ_0. γ is the sum of the spontaneous emission rates of photons radiated into free space (in the presence of the ONF) and into the ONF waveguide, i.e. γ(r) = γ_fs(r) + γ_wg(r), where all the quantities are a function of the atom position r. When the atom is placed far away from the ONF, γ_wg→0 and γ→γ_0, recovering the free-space scenario.

The atomic decay rate can be calculated from Fermi's golden rule <cit.>. It states that the decay rate from an initial state |i⟩ to a final state |f⟩ is given by the strength of the interaction that mediates the transition, related to H_int(r), and the density of final states per unit energy ρ(ϵ) as

γ_i→f (r) = (2π/ħ) ρ(ϵ) |⟨f| H_int(r) |i⟩|^2 ,

where ħ is the reduced Planck constant. In our particular case H_int(r) = d·E(r), given by the transition dipole moment d and the electric field operator E(r).

The effect of the ONF dielectric body on the decay rate of a nearby atom can be thought of in two analogous ways <cit.>: it modifies the structure of the vacuum electric field, or it reflects the emitted field back to the atom. In both cases the electric field E(r) at the position of the atom is modified. This changes the interaction Hamiltonian H_int(r) along with the decay rate. The dot product between the atomic dipole moment and the electric field in Eq. (<ref>) depends upon their relative alignments, leading to an alignment dependence of the atomic decay rate, because the ONF breaks the isotropy of the free-space field.

A linearly polarized optical field will drive a two-level atom along the direction of light polarization. After a scattering event, the light will leave the atom with the polarization and radiation pattern of a classical dipole aligned in such a direction. By choosing the direction of light polarization we can align the radiating dipole relative to the ONF (see Fig. <ref> (b) and (c)). This allows us to observe the dependence of the atomic decay rate on the dipole orientation. Due to the tight transverse confinement of the light propagating through the ONF, the electric field has a significant vector component along the propagation axis, as well as perpendicular to it <cit.>. This enables an atomic dipole oscillating along the ONF to couple light into the guided mode. This is not the case for radiation in free space, where there is no radiated power along the dipole axis <cit.>.

Modifications in the spontaneous emission rate change the atomic spectral width and can be measured in frequency space by doing precision spectroscopy <cit.>. However, the atomic spectrum is highly susceptible to broadening mechanisms such as Stark, Zeeman (DC and AC), and Doppler shifts, and van der Waals effects from the ONF dielectric surface. These broadenings increase systematic errors, making the measurement more challenging. Considering this, we perform a direct atomic lifetime measurement, i.e. in the time domain, to study the atomic decay rates. Atoms have to be relatively close to the ONF surface (less than λ/2π) when we probe them to see a significant effect.
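The statement that the probe polarization sets the dipole alignment relative to the fiber can be made concrete with a short sketch. Below, the ONF axis is taken along z and a vertically polarized probe drives dipoles along x; the projections onto the cylindrical unit vectors then depend on the atom's azimuthal position around the fiber. The sampled angles are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch: projections of a fixed linear dipole (along x, i.e.,
# vertical probe polarization) onto the cylindrical unit vectors of an ONF
# whose axis is along z, as a function of the atom's azimuthal angle phi.
phi = np.linspace(0.0, 2.0 * np.pi, 9)             # 45-degree steps
d = np.array([1.0, 0.0, 0.0])                      # dipole along x
for p in phi:
    r_hat   = np.array([np.cos(p),  np.sin(p), 0.0])
    phi_hat = np.array([-np.sin(p), np.cos(p), 0.0])
    print(f"phi = {np.degrees(p):5.1f} deg: |d.r|^2 = {np.dot(d, r_hat)**2:.2f}, "
          f"|d.phi|^2 = {np.dot(d, phi_hat)**2:.2f}")
# Atoms at phi = 0 or 180 deg behave as purely radial dipoles, atoms at
# phi = 90 or 270 deg as purely azimuthal ones, with a continuum in between.
```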
Two-color dipole traps, created by the evanescent field of an ONF, are a useful tool for trapping a large number of atoms close to the nanofiber <cit.>. However, the created potential minimum is usually too far from the ONF surface (typically ∼200 nm) to observe changes in the atomic radiative lifetime. Cold atoms that are free to move can get much closer to the ONF and spend sufficient time around it to be properly measured.

To measure γ/γ_0 we overlap a cold cloud of atoms with a single-mode ONF (see Fig. <ref> (a)). The atoms in the cloud are excited by a resonant probe pulse propagating perpendicularly to the nanofiber. After the pulse is suddenly turned off, spontaneously emitted light is collected and the photon-triggered signals are counted and histogrammed to get their temporal distribution, a technique known as time-correlated single photon counting (TCSPC) <cit.>. From the exponential decay of the temporal distribution of photons we measure the atomic lifetime τ = 1/γ, directly related to the spontaneous emission rate. By detecting the spontaneously emitted light coupled into the ONF mode we are measuring only those atoms that are close enough to the nanofiber to couple light in. This allows us to obtain the modified spontaneous emission rate of atoms near the ONF surface. Note that the measured decay is the total decay rate γ, regardless of the mode used for the detection. The decay rates into different channels, in our case γ_fs and γ_wg, only determine the branching ratio of the total decay.

We are interested in the effect of the atomic dipole alignment relative to the ONF. For this we externally drive the atomic dipole in a particular direction set by the polarization of the probe pulse. That polarization can be set to be linear in the direction along the ONF (horizontally polarized) or perpendicular to it (vertically polarized). When probing with horizontally polarized light the atomic dipoles for two-level atoms are oriented along z (see Fig. <ref> (b)). For the case of a vertically polarized probe, the atomic dipoles are oriented along r on top and bottom, but along ϕ on each side, relative to the direction of propagation of the probe. In the vertical polarization case, we have a continuous distribution of dipole alignments, from dipoles along r to dipoles along ϕ (see Fig. <ref> (c)).

§ APPARATUS AND MEASUREMENT PROCEDURE

Figure <ref> (a) shows a schematic of the experimental apparatus. The ONF waist is 7 mm in length with an approximately 240 ± 20 nm radius, where the uncertainty represents the variation in any given fabrication of an ONF, as destructively measured by a scanning electron microscope and independently confirmed with non-destructive techniques <cit.>. Any given ONF is uniform to within 1% through its full length. We placed the ONF inside an ultrahigh vacuum (UHV) chamber. Inside the chamber, the ONF is overlapped with a cloud of cold ^87Rb atoms created from a magneto-optical trap (MOT), loaded from a background gas of atoms released from a dispenser. The atoms are excited by pulses of a probe beam incident perpendicularly to the nanofiber and retroreflected to reduce photon-to-atom momentum transfer. These pulses are resonant with the F=2 → F'=3 transition of the D2 line and created with a Pockels cell (Conoptics 250-160) for a fast turn off, with a pulse extinction ratio of 1:170 in 20 ns. The on-off timing of the pulses is controlled with an electronic pulse generator (Stanford Research Systems DG645).
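As an aside, the TCSPC analysis described above can be sketched with synthetic data: photon time stamps are histogrammed and the exponential tail is fit with an amplitude-plus-offset model, matching the fitting function used in the lifetime analysis below. The lifetime, count levels, and fit window assumed here are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal TCSPC sketch with synthetic data (assumed 26 ns lifetime).
rng = np.random.default_rng(0)
t_stamps = rng.exponential(26.0, size=50_000)                      # decay photons [ns]
t_stamps = np.concatenate([t_stamps, rng.uniform(0, 200, 5_000)])  # flat background

counts, edges = np.histogram(t_stamps, bins=np.arange(0, 201, 1.0))  # 1 ns bins
t = 0.5 * (edges[:-1] + edges[1:])

def decay(t, A, gamma, O):            # A * exp(-gamma * t) + offset
    return A * np.exp(-gamma * t) + O

sel = (t > 5) & (t < 80)              # fit the tail, roughly 1-3 lifetimes
popt, _ = curve_fit(decay, t[sel], counts[sel], p0=(counts.max(), 1 / 25.0, 1.0))
print(f"fitted tau = {1.0 / popt[1]:.1f} ns")   # recovers ~26 ns
```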
The probe beam is a 7 mm 1/e^2 full-width collimated beam and is kept at a saturation parameter s < 0.05 to reduce atomic excitations during the off period (where s = I/I_sat = 2(Ω/γ_0)^2, with I_sat = 3.58 mW cm^-2 the average saturation intensity for a uniform sub-level population distribution over all m_F in F=2, and Ω the on-resonance excitation Rabi frequency). A linear polarizer with an extinction ratio of 10^5:1 sets the probe polarization for driving the atomic dipoles along a particular direction. Any atoms in the cloud, close or far from the ONF, can be excited. The photons emitted into the nanofiber and those emitted into free space are independently collected with avalanche photodiodes (APDs, Laser Components COUNT-250C-FC, with less than 250 dark counts per second). The TTL pulses created from photons detected by the APDs are processed with a PC time-stamp card (Becker and Hickl DPC-230) and time stamped relative to a trigger signal coming from the pulse generator. We detect of the order of 10^-3 photons per probe pulse, consistent with estimates that account for the atomic excitation probability, the coupling into the ONF, power losses through band-pass filters and other optical elements, and detection efficiencies.

The experimental cycle is described in Fig. <ref> (b). Acousto-optic modulators (AOMs) control the amplitude and frequencies of the MOT and repump beams. After the atomic cloud reaches steady state, the MOT cooling and repump beams are turned off with a fall time of less than 0.5 μs. The repump turns off 5 μs after the cooling beams to end with the maximum number of atoms in the F=2 ground state. We wait 300 μs until the AOMs reach maximum extinction. The atomic cloud constitutes a cold thermal gas around the ONF. An atom that interacts significantly with the nanofiber mode does so for approximately 1.5 μs (see atomic transit measurements in <cit.>). Because the atomic cloud expansion reduces the density of atoms, we limit the probing time to 1.7 ms. During this time we send a train of 200-ns probe pulses every 1.5 μs (approx. 1100 pulses). The probe beam is then turned off and the MOT beams turned on. We reload the MOT for 20 ms and repeat the cycle. The average acquisition time for an experimental realization is around 5 hours, for a total of about 1 × 10^9 probe pulses.

When atoms are around the nanofiber, they tend to adhere to it due to the van der Waals attraction. After a few seconds of exposing the ONF to rubidium atoms, it becomes coated with rubidium and light cannot propagate through it. In order to prevent this, we use approximately 500 μW of a 750 nm laser (Coherent Ti:Saph 899) during the MOT-on stage of the experimental cycle to create a repulsive potential that keeps the atoms away from the nanofiber surface. When the MOT beams turn off so does the blue beam, allowing the probed atoms to get closer to the ONF. We have also seen that 500 μW of the blue-detuned beam is intense enough to heat the nanofiber and accelerate atomic desorption from the surface.

Regarding the reduction of systematic errors, all the components of the magnetic field at the position of the MOT are carefully minimized. Using three sets of Helmholtz coils we reduce all residual field components to the level of 10 mG. This reduces low-frequency quantum beats among different Zeeman sub-levels (with different m_F) that would shorten the apparent lifetime, as well as effects of atomic precession during the decay, i.e. the Hanle effect.
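For reference, the saturation condition quoted above can be checked numerically. In the sketch below only the beam size and I_sat come from the text; the probe power is an assumed illustrative value.

```python
import numpy as np

# Saturation parameter s = I/I_sat = 2*(Omega/gamma0)^2 for a Gaussian probe.
I_sat   = 3.58e-3            # W/cm^2 (value quoted in the text)
w       = 0.35               # cm, 1/e^2 radius of the 7 mm 1/e^2 full-width beam
P_probe = 30e-6              # W, assumed illustrative probe power

I_peak = 2.0 * P_probe / (np.pi * w**2)     # peak intensity of a Gaussian beam
s = I_peak / I_sat
gamma0 = 2.0 * np.pi * 6.07e6               # D2-line natural linewidth [s^-1]
Omega = gamma0 * np.sqrt(s / 2.0)           # on-resonance Rabi frequency
print(f"s = {s:.3f} (< 0.05), Omega/2pi = {Omega / (2 * np.pi) / 1e6:.2f} MHz")
```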
The intensity of the probe pulse is kept much lower than the saturation intensity in order to reduce the atomic excitation when the pulse is nominally off. Another systematic error is the lengthening of the measured lifetime due to radiation trapping, which is the multiple scattering of a photon between different atoms <cit.>. Light trapped in the sample can re-excite atoms near the ONF, creating the appearance of a longer atomic lifetime. We confirm that the atomic density is low enough by measuring the lifetime of atoms emitting into free space as a control measurement, similar to the approach followed in <cit.>. The photons collected from emission into free space come mainly from atoms far away from the ONF surface, so their time distribution should give us the well-known atomic lifetime τ_0 = 26.24(4) ns <cit.> in the absence of significant systematic error.

We also consider the modification of the probe polarization after being scattered by the nanofiber. However, given the symmetry of the problem, a horizontally polarized incoming beam does not change polarization after interacting with the ONF. On the other hand, vertically polarized light changes polarization in the transverse plane of the nanofiber. This leads to a different arrangement of dipoles aligned along r and ϕ compared to a probe beam propagating unaltered, but it does not change the overall distribution of dipoles aligned along both directions.

§ LIFETIME MEASUREMENTS

We show the normalized time distribution of photons collected through the ONF mode in Fig. <ref>. The red circles correspond to the data obtained for vertically polarized probe light, and the blue squares to the data for the horizontal one. The curves are horizontally shifted apart for clarity. The bin size is 1 ns and we typically have a thousand counts per bin at the peak. The error bars come from the statistical error of the data collection. The solid black lines are the fits to an exponential decay, and the plot underneath shows the corresponding normalized residuals. The fitting function is A e^-γt + O, where the amplitude A and the decay rate γ are the only fitting parameters, and the offset parameter O comes from the average value of the background at long times.

We vary the starting and ending points of the fitting curves and verify that, as long as we are in a region within one to three natural lifetimes after the pulse turns off, there is no significant dependence on the chosen data points. Varying the end points did not change the obtained decay rate by more than 0.1%. We consider only the fits with reduced χ^2 between 0.9 and 1.5. The averaged decay rates extracted from these fits are ⟨γ⟩_v/γ_0 = 1.088 ± 0.015 and ⟨γ⟩_h/γ_0 = 0.943 ± 0.014 for the atoms driven by vertically and horizontally polarized probe light, respectively. For these two data sets, the average of the measured free-space decay rates is ⟨γ_0⟩/γ_0 = 0.989 ± 0.012, corresponding to atoms far away from the ONF. The uncertainties represent the amount that the fitting parameter γ has to be varied to change the χ^2 by plus or minus one.

To study the systematic errors, we vary the magnetic field by around 60 mG without observing a significant change in the decay rate. We also change the atomic density, and effects of radiation trapping larger than the statistical errors appear when the density increases by a factor of three. The polarization of the probe pulse might also contribute an error from a possible tilt of the ONF.
We estimate this uncertainty to be smaller than 10 mrad, and its effect on the total decay rate to be smaller than 0.1%. Even though our signal-to-background ratio should in principle be limited by the extinction ratio of the probe pulse (better than 1:170 after 20 ns), the signal is small enough that dark counts from the APDs become important and are our ultimate limiting factor. In our case the dark counts are around 500 counts per second, a factor of two higher than the specifications. However, the obtained signal is enough to measure a difference in the modified spontaneous emission decay rate for the two probe polarizations of almost 10 standard deviations.

§ NUMERICAL SIMULATIONS FOR A TWO-LEVEL ATOM

Most of the literature about modified spontaneous emission rates considers two-level atoms, i.e. classical dipoles, and we will follow that in this section. We will discuss the ramifications of our multi-level atoms in a later section.

The radiative decay rate of an atom can be modified by the boundary conditions of the electromagnetic vacuum. We consider calculating this modification by two different approaches. Each is presumably equally valid and provides a different perspective on and intuition of the problem <cit.>. In the first one, when the mode expansion of the electromagnetic field of the full space is known, the contribution of each mode to the spontaneous emission rate can be calculated using Fermi's golden rule (see Eq. (<ref>)). In particular, the mode expansion of the vacuum electromagnetic field for an ONF has an analytical expression <cit.>. A second strategy, useful when the modes are unknown or too complicated to compute analytically, is to solve the problem from classical electrodynamics. We calculate the modification of the radiated power of a classical dipole under equivalent boundary conditions, and take that to be the modification of the radiative decay rate of the atom <cit.>,

γ/γ_0 = P/P_0 ,

where γ and γ_0 are the modified and unmodified atomic decay rates, respectively, and P and P_0 are the classically calculated modified and unmodified radiated powers. The modification of the radiative spontaneous emission is explained by the effect of the electric field reflected from the boundaries back to the dipole position.

The latter approach allows us to develop an intuitive picture based on the idea that a two-level atom radiates as a linear dipole oscillating along the direction of the excitation field: when the atomic dipole is aligned along z or ϕ, parallel to the ONF surface, the radiated light can be reflected from the front and back interfaces created by the dielectric. These multiple reflections add at the position of the dipole, affecting its emission. Because there is interference between reflections, the dipole radiation is sensitive to changes in the ONF radius. For these cases, the effect of the nanofiber can lead to enhancement or inhibition of the radiative spontaneous decay rate. On the other hand, for a dipole aligned along r we can expect little radiation reflected from the back surface of the ONF to the dipole, given the radiation pattern. For this case, the decay rate depends mainly on the distance between the atom and the ONF surface and only slightly on the nanofiber radius. An alternative viewpoint is to consider image charges. The atomic dipole induces an image dipole inside the ONF aligned along the same axis and in phase.
Together they radiate more power than the atomic dipole alone, producing an enhancement of the decay rate for the distances we consider.

Using the second strategy, based on a classical dipole, we calculate the modification of the atomic decay rate near an optical nanofiber as a function of the ONF radius and the distance from the atom to the nanofiber surface for different atomic dipole orientations. The calculation is performed numerically with a finite-difference time-domain (FDTD) algorithm <cit.>. It considers the wavelength of the emitted light and the nanofiber index of refraction to be λ = 780.241 nm and n = 1.45367, respectively. The results of these calculations are shown in Fig. <ref> (a)-(c). It shows the modification of the atomic decay rate as a function of the distance to the nanofiber and the radius of the nanofiber for dipoles aligned (in cylindrical coordinates) in the (a) z-direction, (b) ϕ-direction, and (c) r-direction relative to the ONF (as sketched in Fig. <ref> (b) and (c)). We identify these rates as γ_z, γ_ϕ, and γ_r, respectively. The values of γ/γ_0 for these three cases are normalized so they are equal to one at large atom-surface distance.

The atomic decay rate of an atomic dipole aligned along z, Fig. <ref> (a), is mostly inhibited close to the ONF surface compared to the free-space decay rate, and it is highly dependent on the nanofiber radius. This is also true for dipoles aligned along ϕ, Fig. <ref> (b). For a dipole aligned along r, Fig. <ref> (c), the decay rate is enhanced and depends mostly on the distance between the atom and the ONF surface and not on the nanofiber radius.

These results are compared with the calculations of the radiative lifetime using the electromagnetic field mode expansion (taken from Ref. <cit.>) in Fig. <ref> (d)-(f). We are interested in the limit where only the fundamental mode of the ONF can propagate, which is valid when the ONF radius is smaller than 284 nm for a wavelength of 780 nm. We observe that both calculations are qualitatively similar, but quantitatively different. The main discrepancy occurs at the fiber surface, where the mode expansion calculation seems to give a larger enhancement of the decay rate. The reason for the disagreement between both results is not understood. We have verified that other calculations based on finding the electric field at the position of the atom are in agreement with the mode expansion approach (compare Refs. <cit.> and <cit.>). On the other hand, our FDTD calculations are in agreement with previous results using the same method <cit.>.

§ THEORETICAL MODEL

The modification of the atomic spontaneous emission is a function of the position of the atom. Because the atoms are not trapped at a particular position, the measured decay time is a spatial average over the atomic distribution around the ONF. The main factor that determines such a distribution is the van der Waals interaction between the atoms and the ONF. Moreover, atoms emit into the ONF mode with different probabilities, depending on their relative orientation and proximity to the nanofiber, altering the average of the decay time. We describe the necessary physical considerations to model the spatial average of the atomic distribution and the dipole orientation average corresponding to a multilevel atom.

§.§ Van der Waals potential

At short distances from the ONF the atoms feel an attractive force due to the van der Waals and Casimir-Polder (vdW-CP) potentials.
These two potentials can be smoothly connected in a simple equation written as <cit.>

U_g,e(r) = -C_4^(g,e) / [r^3 (r + C_4^(g,e)/C_3^(g,e))] ,

valid for the atomic ground (g) and excited (e) states, where C_3 and C_4 are the van der Waals and Casimir-Polder coefficients of the atom interacting with the nanofiber. Using the procedure described in Ref. <cit.> we can obtain the values of these coefficients. For a ^87Rb atom in the 5S_1/2 ground level in front of an infinite half-space fused silica medium, with index of refraction n = 1.45, the van der Waals and Casimir-Polder coefficients are C_3^(g) = 4.94 × 10^-49 J·m^3 and C_4^(g) = 4.47 × 10^-56 J·m^4, respectively. For the 5P_3/2 excited state, C_3^(e) = 7.05 × 10^-49 J·m^3 and C_4^(e) = 12.2 × 10^-56 J·m^4. The vdW-CP potential affects the experimental measurement in two different ways: by reducing the local density of atoms and by shifting the atomic levels.

By sending probe pulses to the entire atomic cloud, we actually measure a spatial average over an ensemble of atoms with a density distribution ρ(r) at a radius r from the ONF surface. The vdW-CP attraction accelerates the atoms, reducing the local density around the nanofiber, all of them initially in the ground state. Assuming only the radial degree of freedom and thermal equilibrium, a simple steady-state density distribution can be obtained from the ideal gas law and energy conservation <cit.> as

ρ(r) ≈ ρ_0 / [1 - U_g(r)/E] ,

where ρ_0 and E = (3/2) k_B T are the atomic density and the average (kinetic) energy of the atoms far away from the fiber, with atoms typically at T ≈ 150 μK for our atomic cloud. By only considering U_g, we neglect the small fraction of atoms in the excited state. Fig. <ref> shows an example of this distribution (blue dotted line). This approximation agrees with previous analytical results <cit.>, and differs by at most 30% from Monte Carlo simulations of atomic trajectories <cit.>. The vdW-CP potentials also shift the atomic energy levels, affecting the probability of absorbing the otherwise resonant probe beam as <cit.>

p_abs(r) = N / [1 + s + 4(δ(r)/γ_0)^2] ,

where N is just a probability normalization factor and δ(r) = [U_e(r) - U_g(r)]/2πħ is the detuning induced by the ONF, which for us is always redshifted. This distribution is plotted with a green dashed line in Fig. <ref>, neglecting s since we work in the low saturation limit.

§.§ Coupling into the waveguide

Another factor to consider when measuring the spontaneously emitted light into the ONF is the fact that atoms that are closer to the nanofiber surface contribute more to the measured signal than those farther away. This effect is characterized by the emission enhancement parameter

α(r) = γ_wg(r)/γ_0 .

This factor is different from the more commonly used coupling efficiency β(r) = γ_wg(r)/γ(r) <cit.>. α(r) is proportional to the total number of photons emitted into the guided mode, and β(r) is the fraction of photons emitted into the mode relative to the total number of emitted photons. The difference between α and β becomes clear with the following example: when the coupling efficiency β(r) is very large, close to one, most of the emitted photons couple to the waveguide. However, the total number of photons emitted into the waveguide can still be close to zero if the total spontaneous emission were to be greatly inhibited, γ ≪ γ_0 (which is not our particular case).
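Returning to the two vdW-CP weighting factors defined above, they can be evaluated with a few lines of code. The sketch below (SI units) uses the quoted flat-surface coefficients and T = 150 μK; the sampled distance is an arbitrary illustrative choice.

```python
import numpy as np

# vdW-CP weights: density depletion rho(r)/rho_0 and absorption probability.
kB, hbar = 1.380649e-23, 1.054571817e-34
C3g, C4g = 4.94e-49, 4.47e-56       # ground state [J m^3], [J m^4]
C3e, C4e = 7.05e-49, 12.2e-56       # excited state
gamma0 = 2.0 * np.pi * 6.07e6       # D2 natural linewidth [s^-1]

def U(r, C3, C4):                   # smoothly connected vdW-CP potential
    return -(C4 / r**3) / (r + C4 / C3)

r = np.linspace(5e-9, 300e-9, 500)  # distance from the ONF surface [m]
E = 1.5 * kB * 150e-6               # mean kinetic energy far from the fiber
rho_ratio = 1.0 / (1.0 - U(r, C3g, C4g) / E)
delta = (U(r, C3e, C4e) - U(r, C3g, C4g)) / (2.0 * np.pi * hbar)  # detuning [Hz]
p_abs = 1.0 / (1.0 + 4.0 * (2.0 * np.pi * delta / gamma0)**2)     # s -> 0 limit

i = np.argmin(np.abs(r - 50e-9))
print(f"r = 50 nm: rho/rho0 = {rho_ratio[i]:.2f}, "
      f"delta = {delta[i] / 1e6:.1f} MHz, p_abs = {p_abs[i]:.2f}")
```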
The amplitude of the signal measured through the guided mode is then represented by the emission enhancement parameter α(r). An analytical expression for γ_wg(r) can be found in the literature <cit.>, and it is proportional to the norm squared of the evanescent electric field, as expressed in Eq. (<ref>). For a single-mode ONF, the spatial dependence of each component of the evanescent electric field is given by the sum of one or two modified Bessel functions of the second kind K_i(qr') of order i = 0, 1, 2, where r' = r_0 + r, and r_0 and r are the ONF radius and the radial position from the ONF surface; q = √(β^2 - k^2) is the transverse component of the wave vector, β is the field propagation constant in the ONF, and k = 2π/λ is the free-space wavenumber. For our particular case of an ONF with radius r_0 = 230 nm propagating a field of wavelength λ = 780 nm, q = 0.56k. Provided that qr' > 1 away from the ONF surface, we can simplify the calculation with the asymptotic expansion K_i(qr') ≈ √(π/2qr') e^-qr', and approximate the spatial dependence of γ_wg(r) as <cit.>

α(r) = γ_wg(r)/γ_0 ∝ [1/(r_0+r)] e^-2(0.56 k r) ,

an approximation that has been tested against exact numerical results with excellent agreement. Any other constant pre-factor will not contribute to the final average after the appropriate normalization. Its spatial distribution is plotted as the red dotted and dashed line in Fig. <ref>. For our experimental parameters, α ≈ 0.2 at the ONF surface.

Atomic dipoles aligned along different directions will couple to the guided mode with different strengths. We denote the emission enhancement parameter with a subindex to specify the alignment of the emitting dipole as α_i(r) with i ∈ {z, ϕ, r}. It can be shown, from the calculation in Ref. <cit.>, that to a good approximation α_z ≈ α_ϕ ≈ α_r/3 for our ONF, independent of the radial position of the atom. The different coupling strength for atomic dipoles aligned along r comes from the fact that the radial component of the guided field is discontinuous and larger than the others due to the dielectric boundary conditions.

Figure <ref> shows the spatial dependence of each one of the described distributions that affect the measured average decay rate. The black solid line in the plot represents their direct multiplication. This effectively describes the probability of observing a photon emitted from an atom at a position r into the ONF guided mode. Notice that for a given ONF the only experimentally tunable parameter for the final distribution is the temperature of the atomic cloud. The atomic distribution is weakly dependent on the temperature in Eq. (<ref>).

§.§ Averaged signal

The measured signal is an average over atoms decaying at different rates. If these decay rates are close enough to each other, the measured decay rate is approximately equal to the spatially averaged decay rate ⟨γ⟩. As a proof, let us consider that the decay rates differ by a small quantity ϵ with a distribution g(ϵ); then the measured signal is given by

∫ dϵ g(ϵ) e^-γ(1+ϵ)t = e^-γt ∫ dϵ g(ϵ) e^-γϵt .

For small ϵ (and short times) the exponential of order ϵ can be expanded in series, averaged, and exponentiated again, to obtain

e^-γ(1+⟨ϵ⟩)t = e^-⟨γ⟩t .

The measured decay rate ⟨γ⟩ is a spatial average of the actual decay rate weighted by the atomic density distribution ρ(r), the excitation probability p_abs(r), and the emission enhancement parameter α(r):

⟨γ⟩ = ∫ γ(r) α(r) ρ(r) p_abs(r) r dr / ∫ α(r) ρ(r) p_abs(r) r dr .
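A minimal numerical version of this weighted average is sketched below. The decay-rate profile γ(r)/γ_0 is a placeholder to be replaced by the simulated curves of the previous section, and the radial weight r dr of the equation above is implemented here as the cylindrical shell measure (r_0 + r) dr, with r measured from the fiber surface; all other ingredients follow the distributions defined in this section.

```python
import numpy as np

# Sketch of the spatial average <gamma>/gamma0 (SI units, T = 150 uK, r0 = 230 nm).
kB, hbar = 1.380649e-23, 1.054571817e-34
C3g, C4g, C3e, C4e = 4.94e-49, 4.47e-56, 7.05e-49, 12.2e-56
gamma0, lam, r0 = 2.0 * np.pi * 6.07e6, 780.241e-9, 230e-9
k = 2.0 * np.pi / lam
E = 1.5 * kB * 150e-6

U = lambda r, C3, C4: -(C4 / r**3) / (r + C4 / C3)
rho   = lambda r: 1.0 / (1.0 - U(r, C3g, C4g) / E)
delta = lambda r: (U(r, C3e, C4e) - U(r, C3g, C4g)) / (2.0 * np.pi * hbar)
p_abs = lambda r: 1.0 / (1.0 + 4.0 * (2.0 * np.pi * delta(r) / gamma0)**2)
alpha = lambda r: np.exp(-2.0 * 0.56 * k * r) / (r0 + r)   # guided-mode coupling

def gamma_ratio(r):
    # Placeholder profile; substitute the FDTD or mode-expansion gamma(r)/gamma0.
    return 1.0 + 0.3 * np.exp(-r / 100e-9)

r = np.linspace(2e-9, 400e-9, 4000)        # uniform grid, so dr cancels below
w = alpha(r) * rho(r) * p_abs(r) * (r0 + r)                # weight per shell
g_avg = np.sum(gamma_ratio(r) * w) / np.sum(w)
print(f"<gamma>/gamma0 = {g_avg:.3f}  (illustrative, placeholder profile)")
```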
In the particular case of driving the atoms with light polarized vertically, there are atomic dipoles oriented along r and ϕ (see Fig. <ref> (c)). In this case the proper α_i has to be taken into account to obtain the averaged signal.

§ CONSIDERATION OF THE MULTI-LEVEL ATOMIC STRUCTURE

The radiation pattern of a real atom differs from that of an ideal linear dipole. A multi-level atom (with more than one Zeeman sub-level m_F in the ground state) can decay to a ground state through a π- or σ-transition, when Δm_F = 0 or Δm_F = ±1, respectively. In our case, we consider the quantization axis to be along the direction of the linear polarization of the probe, and π and σ are with respect to this quantization axis.

We model the decay rate of a multi-level atom by an incoherent superposition of linear dipoles <cit.> that describes the real radiation pattern. An atom decaying through a π-transition is described by a linear dipole aligned along the probe polarization axis. An atom decaying through a σ-transition is considered as a linear dipole rotating in the plane perpendicular to the probe polarization axis. The atomic decay rate depends on the norm squared of the dot product of the electric field and the dipole polarization (see Eq. (<ref>)). This implies that the decay rate of a rotating dipole (σ-transition) is the incoherent sum of the decay rates of two orthogonal linear dipoles oscillating in the rotation plane. All the information necessary to calculate the decay rates of the different transitions of a real atom is then obtained from the decay rates of classical linear dipoles, in the spirit of Fig. <ref>.

To calculate the total decay rate we need to know the branching ratios of the transitions. We denote the probabilities of decay through a π- or σ-transition as P_π and P_σ, respectively. These depend on the state preparation of the atoms. The measured signal will be a spatial average over the contributions of such classical dipoles, weighted according to the coupling efficiency of each dipole into the waveguide. Considering this, the spatially averaged decay rate can be obtained using Eqs. (<ref>), (<ref>) and (<ref>) as

⟨γ⟩_π = ∫ [γ_π(r) P_π α_π(r) + γ_σ(r) P_σ α_σ(r)] ρ(r) p_abs(r) r dr / ∫ [P_π α_π(r) + P_σ α_σ(r)] ρ(r) p_abs(r) r dr ,

where γ_i(r) with i ∈ {π, σ} are obtained from the numerical simulations displayed in Fig. <ref>. The subscript π in the spatial average denotes the polarization of the probe beam that drives the atomic transition.

§ COMPARISON BETWEEN THEORY AND EXPERIMENT

We can compare the measurements with the theoretical simulations by calculating the average ⟨γ⟩/γ_0, introducing the numerical values of γ(r)/γ_0, displayed in Fig. <ref>, into Eq. (<ref>) for a particular ONF radius. It is important to notice that when we perform the experiment probing the atoms with horizontally polarized light, we are measuring the spatially averaged decay rate for an atomic dipole aligned along z. For atoms driven by a vertically polarized probe pulse we measure a decay rate that is averaged over the different dipole alignments in addition to the spatial average (along ϕ and r, as is shown in Fig. <ref> (c)). This means that we cannot separately measure the decay rate for dipoles aligned along r and ϕ.

Figure <ref> shows a comparison between the measurements and the numerical simulations for a two-level atom. It shows the extracted atomic decay rates for both experimental configurations normalized by the free-space one.
The blue lines are the calculated values of the modified decay rate ⟨γ⟩_h/γ_0, corresponding to the probe beam horizontally polarized, to be compared with the experimental value (blue square). The blue dashed line corresponds to the FDTD calculation and the dotted one to the mode-expansion calculation. The red lines are the calculated values of the modified decay rate ⟨γ⟩_v/γ_0, corresponding to the probe beam vertically polarized, to be compared with the experimental value (red circle). The red dashed line corresponds to the FDTD calculation and the dotted one to the mode-expansion calculation.

For each experimental realization we simultaneously measure the modified atomic decay rate of atoms close to the ONF and the free-space decay rate from atoms in the MOT, where the great majority of them are away from the nanofiber. The black diamonds in Fig. <ref> are the measurements of the decay time into free space for the two different polarizations. When the measured decay rate into free space was off by more than a few percent of the expected value, because of, for example, an unexpected fluctuation of the atom density, the data collected through the ONF mode were discarded.

For a multilevel atom we have to consider its initial state. During the period the probe beam is on, the atoms get pumped into a particular ground state. The steady-state solution for optical pumping on the F=2 → F'=3 transition of ^87Rb with linearly polarized light is biased towards the m_F=0 state (the fractional populations are approx. 0.04, 0.24, and 0.43 for |m_F| equal to 2, 1, and 0, respectively). A π excitation (linearly polarized) of such an initial state will lead to a probability P_π = 0.55 of emitting π radiation and P_σ = 0.45 of emitting σ radiation (circularly polarized). This effect has to be taken into account when calculating the averaged decay rate using Eq. (<ref>). Considering this, neither of our calculations, the FDTD nor the mode expansion, predicts the measured values. The multi-level predictions tend to be approximately equal to the natural decay rate for both polarizations. Even if the population distribution is different from the calculated optical pumping values, almost all population distributions tend towards producing more isotropic radiation distributions than a linear dipole, as any amount of sigma polarization reduces the angular contrast.
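The branching probabilities quoted above follow from the squared Clebsch-Gordan coefficients of the F'=3 → F=2 decay. The short sketch below reproduces P_π ≈ 0.55 from the rounded steady-state populations given in the text, using the standard relative line strengths for a J'=3 → J=2 transition.

```python
from fractions import Fraction

# Relative decay strengths from |F'=3, m> to F=2 (proportional to squared
# Clebsch-Gordan coefficients): pi ~ 9 - m^2, sigma ~ (3 -+ m)(2 -+ m)/2.
def branching_pi(m):
    s_pi = Fraction(9 - m * m)
    s_s1 = Fraction((3 - m) * (2 - m), 2)   # one sigma branch
    s_s2 = Fraction((3 + m) * (2 + m), 2)   # the other sigma branch
    return s_pi / (s_pi + s_s1 + s_s2)

# Rounded steady-state populations quoted above for |m_F| = 0, 1, 2
pop = {0: 0.43, 1: 0.24, -1: 0.24, 2: 0.04, -2: 0.04}
P_pi = sum(p * float(branching_pi(m)) for m, p in pop.items()) / sum(pop.values())
print(f"P_pi ~ {P_pi:.2f}, P_sigma ~ {1.0 - P_pi:.2f}")   # ~0.55 and ~0.45
```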
From the error bars in the collected data, and considering the dependence of the simulations on the nanofiber radius, we determine the radius to be 235 ± 5 nm, based on the two-level atom FDTD calculation. This value is close to the estimated ONF radius from the fabrication (240 ± 20 nm). Having fixed the nanofiber radius, there are no free parameters in the calculation of ⟨γ⟩_v/γ_0 (red dotted line in Fig. <ref>) other than the MOT temperature, set to be 150 μK.

When we calculate the spatial average, as is explained in Sec. <ref>, we make a series of approximations. These include the van der Waals and Casimir-Polder coefficients calculated for a dielectric plane instead of a cylinder in Eq. (<ref>), an equilibrium distribution of the atomic density in Eq. (<ref>), and the asymptotic expansion of the guided mode for determining the shape of α(r) in Eq. (<ref>). However, when we vary all these quantities (C_3, C_4, ρ, and α) by 20% of their values, the final averaged decay rate does not change by more than the estimated error bars of the measurements.

We can use the physical model presented in this paper to design other experiments. From Fig. <ref> (a)-(c) we can see that by positioning the atoms, for example at 50 nm from a 230 nm radius ONF, we can create atomic states that go from an approximately 40% enhancement of the spontaneous emission to a 20% inhibition. This is possible by using only the atomic dipole alignment as a tuning knob for its coupling to the mode of the nanofiber and the environment. We can also use ONFs that support higher-order modes, allowing better control of the probe polarization by using the evanescent field of the guided light, which can drive different atomic dipole orientations, such as purely radial or azimuthal ones.

The discrepancy between the different calculations has yet to be understood. It is also necessary to develop a better physical picture that explains the measured behavior of a real multi-level atom near an ONF. This problem brings up fundamental questions that need to be revisited and systematically studied in the future, which is crucial for any future application of multi-level atoms coupled to optical waveguides.

§ CONCLUSION

We have experimentally observed the modification of the rate of spontaneous emission of atoms near an optical nanofiber and its dependence on the atomic dipole alignment. The experiment is implemented by placing an ONF at the center of a cold atomic cloud. A linearly polarized resonant probe pulse drives the atomic dipoles in a particular alignment. We measured the time distribution of photons spontaneously emitted into the ONF to obtain the atomic lifetime. The modification of the atomic lifetime is measured for different probe polarizations in order to show the dependence of the spontaneous emission rate on the atomic dipole alignment. A physical model of the experiment is also presented and used to perform a numerical calculation of the modification of the spontaneous emission rate. This shows good quantitative agreement with the experimental measurements considering two-level atoms. Some basic physical aspects remain elusive regarding the multi-level structure of a real atom. This work clearly demonstrates that there are open problems that need further investigation, a perhaps surprising conclusion given the fundamental nature of the simple problem of an atom radiating near a dielectric. A better understanding of the problem will allow us to extend this knowledge to more general cases.
With this knowledge of how atomic properties change under different conditions, we can start implementing a new toolbox for precisely manipulating and controlling atoms coupled to optical waveguides. § DISCLAIMERAny mention of commercial products in a publication having NIST authors is for information only; it does not imply recommendation or endorsement by NIST.§ ACKNOWLEDGMENTSWe would like to thank H. J. Charmichael and A. Asenjo-Garcia for their valuable contributions to the discussion. This research was supported by the National Science Foundation of the United States (NSF) (PHY-1307416);NSF (CBET-1335857); NSF (PHY-1430094); and the USDOC, NIST, Joint Quantum Institute (70NANB16H168).
http://arxiv.org/abs/1704.08741v1
{ "authors": [ "Pablo Solano", "Jeffrey A. Grover", "Yunlu Xu", "Pablo Barberis-Blostein", "Jeremy N. Munday", "Luis A. Orozco", "William D. Phillips", "Steven L. Rolston" ], "categories": [ "quant-ph", "physics.atom-ph", "physics.optics" ], "primary_category": "quant-ph", "published": "20170427205206", "title": "Alignment-dependent decay rate of an atomic dipole near an optical nanofiber" }
empty Multiwavelength Studies of Young OB Associations Eric D. Feigelson – Dedicated to John Butcher, on the occasion of his 84-th birthday – ======================================================================== Millimeter wave (mmWave) communications have been considered as a key technology for future 5G wireless networks. In order to overcome the severe propagation loss of mmWave channel,mmWave multiple-input multiple-output (MIMO) systems with analog/digital hybrid precoding and combining transceiver architecture have been widely considered. However, physical layer security (PLS) in mmWave MIMO systems and the secure hybrid beamformer design have not been well investigated. In this paper, we consider the problem ofhybrid precoder and combiner design for secure transmission in mmWave MIMO systems in order to protect the legitimate transmission from eavesdropping. When eavesdropper's channel state information (CSI) is known, we first propose a joint analog precoder and combiner design algorithm which can prevent the information leakage to the eavesdropper. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further maximize the secrecy rate. Next, if prior knowledge of the eavesdropper's CSI is unavailable, we develop an artificial noise (AN)-based hybrid beamforming approach, which can jam eavesdropper's reception while maintaining the quality-of-service (QoS) of intended receiver at the pre-specified level. Simulation results demonstrate that our proposed algorithms offer significant secrecy performance improvement compared with other hybrid beamforming algorithms. Millimeter wave (mmWave) communications, multi-input multi-output (MIMO), physical layer security (PLS), hybrid precoding, artificial noise (AN). § INTRODUCTION emptyMillimeter wave (mmWave) communications, which can provide orders-of-magnitude wider bandwidth than current cellular bands, has been considered as a key technology for future 5G wireless networks <cit.>. The smaller wavelength of mmWave signals enables a large antenna array to be packed in a small physical dimension at the transceiver ends. However, conventional full-digital precoder and combiner are realized using a large number of expensive radio frequency (RF) chains and energy-intensive analog-to-digital converters (ADCs), which are impractical in the mmWave communication systems. Recently, economic and energy-efficient analog/digital hybrid precoding and combining transceiver architecture has emerged as a promising solution in mmWave multiple-input multiple-output (MIMO) systems. The hybrid beamforming structure applies a large number of analog phase shifters (PSs) to implement high-dimensional analog beamformer and a small number of RF chains for low-dimensional digital beamformer to provide the necessary flexibility to perform multiplexing/multiuser transmission <cit.>. The existing hybrid beamforming designs can be categorized into i) codebook-based scheme in which the analog beamformer is selected from certain candidate vectors, such as array response vectors of the channel and discrete fourier transform (DFT) beamformers <cit.>-<cit.>; ii) codebook-free scheme in which the infinite resolution of PSs is assumed <cit.>, <cit.>. 
Currently, the codebook-based beamforming designs are more popular because of the less complexity and satisfactory performance due to the special structure of hybrid beamformer and the characteristic of mmWave MIMO channels.While existing hybrid beamforming designs focus on improving spectral efficiency of a point-to-point mmWave MIMO channel, however, the secrecy in a mmWave MIMO wiretap channel and the beamforming design for the secure transmission have not been well investigated. In recent years, physical layer security (PLS) has been identified as a promising strategy for secure wireless communications. Especially, beamforming technology becomes a powerful tool for enhancing the physical layer security in conventional MIMO systems <cit.>, <cit.>. With the spatial degrees of freedom (DoF) provided by multiple antennas, the transmitter can adjust its beamforming orientationto reduce/prevent the information leakage to eavesdroppers or generate artificial noise (AN) to jam potential eavesdroppers. However, the obtained results cannot be directly applied to mmWave MIMO systems due to the different propagation characteristics and the special hybrid beamforming architecture. Therefore, secure transmission in mmWave MIMO systems attracts new research interests <cit.>-<cit.>.In <cit.>, the network-wide PLS performance of a mmWave cellular network was investigated under a stochastic geometry framework. In <cit.>, the authors considered a mmWave system with the multi-input single-output (MISO) channel and presented two simple beamformer designs for the secure transmission. Based on this system model, in <cit.> the authors further introduced a new form of AN generation method depending on the propagation characteristics of the mmWave channel. Unfortunately, all those mentioned works focus on the comprehensive secrecy performance analysis rather than the beamformer design. More importantly, the presented simple beamformer designs are based on the conventional full-digital beamforming architecture, which are not practical in the mmWave MIMO systems comparing with the hybrid beamforming structure. To the best of our knowledge, the hybrid beamformer design for the secure transmission in the mmWave MIMO systems has not been studied yet.In this paper, we investigate hybrid beamformer design for the secure transmission and propose a novel codebook-based hybrid precoder and combiner design algorithm in order to protect the legitimate transmission from eavesdropping. In the case that eavesdropper's channel state information (CSI) is available, we first develop a joint analog precoder and combiner design algorithm which can prevent the information leakage to the eavesdropper. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further maximize the secrecy rate. Then, when prior knowledge of the eavesdropper's CSI is unavailable, we introduce an AN-based hybrid beamforming approach, which can generate disturbance to the eavesdropper while maintaining the quality-of-service (QoS) of the intended receiver at the pre-specified level. Simulation results demonstrate the significant secrecy performance improvement of our proposed algorithms compared with other hybrid beamforming algorithms.§ SYSTEM MODEL AND PROBLEM FORMULATION §.§ System ModelWe consider a mmWave MIMO wiretap system as illustrated in Fig. 
<ref>, in which the legitimate transmitter Alice is equipped with N_a antennas and N^RF_a RF chains to simultaneously transmit N_s data streams to the legitimate receiver Bob, who is equipped with N_b antennas and N^RF_b RF chains. To ensure the efficiency of the communication with the limited number of RF chains, we assume N_RF = N_a^RF = N_b^RF andthe number of data streams is constrained as N_s ≤ N_RF. There also exists an eavesdropper Eve, who is equipped with N_e antennas and N^RF_e RF chains, N_s ≤ N^RF_e, and attempt to overhear the data transmission from Alice to Bob. The transmitted symbols are firstly processed by an N_RF× N_s baseband digital precoder 𝐅_BB, then up-converted to the RF domain via N_RF RF chains before being precoded with an N_a × N_RF analog precoder 𝐅_RF. While the baseband precoder 𝐅_BB enables both amplitude and phase modifications, the analog precoder 𝐅_RF is implemented by analog components like phase shifters and its elements are constrained to satisfy with constant magnitude. In the codebook-based precoder scheme, the analog beamformer is selected from a pre-specified codebook ℱ, i.e. a set of N_a × 1 vector with constant-magnitude entries. The normalized power constraint is given by 𝐅_RF𝐅_BB^2_F = N_s. Therefore, the transmit signal has a form of𝐱 = √(P)𝐅_RF𝐅_BB𝐬where P is the transmitted power and 𝐬 is the N_s × 1 symbol vector such that 𝔼{𝐬𝐬^H}= 1/N_s𝐈_N_s.Denote 𝐇_b∈ℂ^N_b× N_a as the channel matrix of Alice-to-Bob channel. Let 𝐧_b∈ℂ^N_b× 1 represent the noise vector of independent and identically distributed (i.i.d.) 𝒞𝒩(0,σ^2_b) elements. At Bob, the received signal𝐫_b = 𝐇_b 𝐱 + 𝐧_bare first processed by the analog combiner of Bob 𝐖_RF,b, then the digital combiner 𝐖_BB,b∈ℂ^N_RF× N_s. Let 𝐖_b ≜𝐖_RF,b𝐖_BB,b and 𝐅≜𝐅_RF 𝐅_BB for simplicity. Thus,the processed receive signal at Bob can be expressed as ŝ_b = √(P)𝐖^H_b 𝐇_b 𝐅𝐬 + 𝐖^H_b 𝐧_b,where the subscript “b” indicates Bob.The signal processing at Eve have the same procedure. Let 𝐇_e∈ℂ^N_e× N_a denote the Alice-to-Eve eavesdropping channel and let 𝐧_e∈ℂ^N_e× 1 represent the noise vectors of i.i.d.𝒞𝒩(0,σ^2_e) elements. Then, we have the processed receive signal at Eveŝ_e = √(P)𝐖^H_e 𝐇_e 𝐅𝐬 + 𝐖^H_e 𝐧_e,where the subscript “e” indicates Eve. §.§ MmWave MIMO Channel ModelThe mmWave MIMO channel can be described with the widely used limited scattering channel, in which the number of scatters is L, and each scatter is further assumed to contribute a single propagation path between the transmitter and the receiver. Under this model, the Alice-to-Bob channel matrix 𝐇_b can be expressed as 𝐇_b = √(N_a N_b/ρ_b)∑_l_b=1^L_bα_l_b𝐚_b(θ_l_b)𝐚^H_a(ϕ_l_b),where ρ_b denotes the average path-loss between Alice and Bob, and α_l_b is the complex gain of the l_b-th path and assumed to be Rayleigh distributed. The variables θ_l_b∈ [0,2π] and ϕ_l_b∈ [0,2π] are the l_b-th path's azimuth angles of departure or arrival (AoDs/AoAs) of the transmitter and the receiver, respectively. 𝐚_a(ϕ_l_b) and 𝐚_b(θ_l_b) are the antenna array response vectors at the transmitter and the receiver, respectively. In this paper, we assume the transmitter and the receivers adopt uniform linear arrays (ULA) for simplicity and then 𝐚_a(ϕ_l_b) and 𝐚_b(ϕ_l_b) are given by𝐚_a(ϕ_l_b)=1/N_a [1,e^j(2π/λ)dsin(ϕ_l_b),...,e^j(N_a-1)(2π/λ)dsin(ϕ_l_b)]^T,𝐚_b(ϕ_l_b)=1/N_b [1,e^j(2π/λ)dsin(θ_l_b),...,e^j(N_b-1)(2π/λ)dsin(θ_l_b)]^T, where λ is the signal wavelength and d is the distance between antenna elements. 
The Alice-to-Eve channel matrix can be written in a similar fashion as 𝐇_e = √(N_a N_e/ρ_e)∑_l_e=1^L_eα_l_e𝐚_e(θ_l_e)𝐚^H_a(ϕ_l_e)with different AoAs θ_l_e and AoDs ϕ_l_e. Due to the sparse property of mmWave MIMO channel, mmWave communication is usually considered as more secure than conventional MIMO systems since the generated beamformer is too narrow to be eavesdropped if the Eve is not close to Bob. However, it has been verified that rough surface and tiny building cracks can cause diffuse scattering in mmWave channel and the diffuse range increases as the wavelength shrinks <cit.>. Therefore, it is highly possible that different receivers share some common scatterers, as shown in Fig. <ref>. In other words, when Bob and Eve have similar AoDs with some common scatterers and Alice use those AoDs to transmit the secret information to Bob, Eve will have chance to receive very strong signal from Alice, resulting in severe information leakage. Therefore, it is easier for Eve to eavesdrop secret information in this scatterer-sharing model and we aim to investigate the physical layer security for the mmWave MIMO systems with the scatterer-sharing model. §.§ Problem FormulationIn the context of physical layer security, the secrecy rate is usually used as the performance metric:R_s = [log_2(𝐈_N_s+𝐒_b)-log_2(𝐈_N_s+𝐒_e)]^+where 𝐒_b= P/N_s𝐑^-1_n,b (𝐖_BB,b)^H (𝐖_RF,b)^H 𝐇_b 𝐅_RF𝐅_BB ×𝐅^H_BB𝐅^H_RF𝐇^H_b 𝐖_RF,b𝐖_BB,b, 𝐒_e= P/N_s𝐑^-1_n,e (𝐖_BB,e)^H (𝐖_RF,e)^H 𝐇_e 𝐅_RF𝐅_BB ×𝐅^H_BB𝐅^H_RF𝐇^H_e 𝐖_RF,e𝐖_BB,e, 𝐑_n,b =σ^2_b (𝐖_BB,b)^H (𝐖_RF,b)^H 𝐖_RF,b𝐖_BB,b, 𝐑_n,e =σ^2_e (𝐖_BB,e)^H (𝐖_RF,e)^H 𝐖_RF,e𝐖_BB,e.In the following, we carry out simulations to illustrate the security threaten in the mmWave MIMO systems. We assume both 𝐇_b and 𝐇_e are known to Alice and Bob and first consider the full-digital precoder and combiner scheme.Fig. <ref> shows the secrecy rate under beamforming designs with: 1) no PLS effort; 2) generalized singular value decomposition (GSVD)-based PLS approach <cit.>; 3) generalized eigen decomposition (GED)-based PLS approach <cit.>. It can be verified that the mmWave MIMO systems with ordinary (no PLS effort) beamforming design have notable information leakage in the high SNR range. Therefore, the legitimate transceiver needs to adopt beamforming with PLS efforts. In addition, unlike the conventional MIMO system,the GSVD approach is not as good as the GED approach under mmWave MIMO systems, which is because of the sparsity of the mmWave channel. Therefore, we will use GED effort with full-digital beamforming as our secrecy performance benchmark in the following simulation studies.In order to spotlight the impact of hybrid precoding and combining architecture considered in this paper, in Fig. <ref> we conduct the simulation using two representative hybrid beamforming algorithms: 1) codebook-based Spatially Sparse Precoding (SSP) <cit.>; 2) codebook-free PE-AltMin (PE) <cit.>. For the PLS effort, we use these two approaches to find secure hybrid beamformers by minimizing the Euclidean distance between hybrid beamformers and the GED-based full-digital secure beamformers. Fig. <ref> illustrates that the secrecy rate decreases dramatically when mmWave MIMO systems employ hybrid precoder and combiner, resulting in severe information leakage to the eavesdropper. Moreover, the gap between the hybrid beamforming algorithms and the full-digital benchmark is quite large, which awaits researchers' investigation. Inspired by the phenomenon illustrated in Figs. 
<ref> and <ref>, in this paper we aim to develop a PLS-based hybrid precoder and combiner design for the secure transmission. Specifically, let ℱ and 𝒲 denote the beamsteering codebooks for the analog precoder and combiner, respectively. If B^RF_t (B^RF_r) bits are used to quantize the AoD (AoA), ℱ and 𝒲 will consist of all possible analog precoding and combining vectors, which can be presented asℱ={𝐚_t(2 π i /2^B_t^RF) : i=1, …, 2^B_t^RF}, 𝒲={𝐚_r(2 π i /2^B_r^RF) :i=1, …, 2^B_r^RF}.The PLS-based hybrid precoder and combiner design problem can be formulated as follows:{𝐅^*_RF, 𝐅^*_BB,𝐖_RF,b^*, 𝐖_BB,b^* } = max R_s s.t.    𝐅_RF(:,l)∈ ℱ, ∀ l =1, …, N_RF , 𝐖_RF,b(:,l)∈ 𝒲, ∀ l =1, …, N_RF ,𝐅_RF𝐅_BB^2_F=N_s.In the next section, we divide the problem according to the availability of the eavesdropper's CSI. We first consider the scenario under which Alice and Bob know eavesdropper's channel 𝐇_e and develop our codebook-based hybrid beamformer design algorithm. Then, we study the case where the eavesdropper's CSI is not available and AN-aided methods are adopted in the hybrid beamforming design problem. § SECURE JOINT HYBRID BEAMFORMER AND COMBINER DESIGN §.§ Known Eavesdropper's Channel Under this condition, our algorithm starts with performing singular value decomposition (SVD) of 𝐇_e as𝐇_e = 𝐔_e Σ_e 𝐕_e^Hwhere 𝐔_e and 𝐕_e are unitary matrices, Σ_e is an N_e × N_adiagonal matrix of singular values arranged in a decreasing order. Due to the sparsity of the mmWave MIMO channel, 𝐇_e can be represented as𝐇_e = 𝐔_e Σ_e 𝐕_e^Hwhere Σ_e is a diagonal matrix whose elements are the first L_b nonzero singular values, 𝐔_e and 𝐕_e contain the most L_b left columns of𝐔_e and 𝐕_e, respectively. In an effort to prevent eavesdropping from Eve, Alice should elaborately design her precoder to avoid the AoD components of 𝐇_e in order to minimize Eve's reception. Thus, to implement the secure transmission, we propose to remove the AoDcomponents of 𝐇_e from 𝐇_b by𝐇_1 ≜𝐇_b (𝐈 - 𝐕_e 𝐕_e^H).By this operation, 𝐇_1 contains only the AoD componentsof 𝐇_b but almost no AoD component of 𝐇_e. After this initial processing, we successivelyselect the i-th (i=1,…, N_RF) analog precoder and combiner pair to maximize the corresponding channel gain while suppressing the co-channel interference. The joint design problem can be successively solved by the following optimization problem:{𝐰^*_i, 𝐟^*_i } =𝐰_i∈𝒲 𝐟_i∈ℱmax | 𝐰^H_i 𝐇_i 𝐟_i | , i=1,…, N_RF,and then assign them to the analog procoder and combiner matrices𝐅^*_RF(:,i)= 𝐟^*_i, 𝐖_RF,b^*(:,i)= 𝐰^*_i. Particularly, before executing the next iteration, we need to remove the components of previous determined precoders and combiners from the other data streams' channels such that similar analog precoders and combiners will not be selected by two different data streams. To achieve this goal, we let 𝐩_i and 𝐪_ibe the components of the determined analog precoder and combiner for the i-th data stream, respectively. When i=1, 𝐩_1=𝐟^*_1 and 𝐪_1=𝐰^*_1; when i>1, the orthogonormal component 𝐩_i and 𝐪_i can be obtained by a Gram-Schmidt procedure:𝐩_i =𝐟_i^* - ∑_j=1^i-1𝐩^H_j𝐟_i^* 𝐩_j,𝐩_i =𝐩_i/𝐩_i, i=2,…,N_RF; 𝐪_i =𝐰_i^* - ∑_j=1^i-1𝐪^H_j𝐰_i^⋆𝐪_j ,𝐪_i =𝐪_i/𝐪_i, i=2,…,N_RF.Then 𝐇_i+1 is updated for the next iteration by:𝐇_i+1 = (𝐈_N_b - 𝐪_i 𝐪^H_i) 𝐇_i (𝐈_N_a - 𝐩_i 𝐩^H_i). After determining the analog precoder 𝐅_RF^* and combiner 𝐖_RF,b^*, we can obtain the effective channel 𝐇_eff≜ (𝐖_RF,b^*)^H 𝐇_b 𝐅_RF^*. 
Then, an SVD-based baseband digital precoder is employed to further suppress the interference and maximize the sum-rate:𝐅^*_BB=𝐕̅(:,1:N_s),𝐖^*_BB,b=𝐔̅(:,1:N_s),where 𝐇_eff = 𝐔̅Σ̅𝐕̅^H. Finally, we normalize the baseband precoder 𝐅^*_BB by𝐅^⋆_BB=√(N_s)𝐅^*_BB/𝐅^*_RF𝐅^*_BB_F. This secure hybrid precoder and combiner design algorithm is summarized in Table <ref>.§.§ Unknown Eavesdropper's Channel Under this condition, by common intuition, low-power Alice-to-Bob transmission can improve the security by making the signal interception of Eve more difficult. Assume the Alice-to-Bob transmission needs to satisfy the QoS threshold R_γ, i.e. R_b ≥ R_γ, R_b=log_2(𝐈_N_s+𝐒_b). Thus, we can utilize the proposed algorithm in Table I with initialization 𝐇_1 = 𝐇_b to find the optimal precoder 𝐅^*_RF, 𝐅^*_BB and combiner 𝐖^*_RF,b, 𝐖^*_BB,b. Then, we can find the minimum transmit power P_s such that R_b ≥ R_γ can be satisfied. To further increase the security, the residual power P_AN≜ (P - P_s)^+ is utilized for generating AN.In this AN-based secure transmission, we assume N_RF>N_s and the transmit signal becomes𝐱 = 𝐅_RF(√(P_s)𝐅_BB𝐬 + √(P_AN)𝐅_BB,w𝐰 ),where 𝐰 represents the (N_RF-N_s) × 1 artificial noise vector, 𝔼{𝐰𝐰^H } = 1/N_RF-N_s 𝐈_N_RF-N_s, 𝐅_BB,w is the digital precoder associated with the AN 𝐰. The idea behind the AN-based PLS approach is that the generated AN should not degrade the reception of Bob. To ensure this principle with the obtained optimal combiners 𝐖_BB,b^* and 𝐖_RF,b^*, we should have𝐖_BB,b^*H𝐖_RF,b^*H𝐇_b 𝐅_RF^* 𝐅_BB,w= 0 ⇒𝐖_BB,b^*H𝐇_eff𝐅_BB,w=0where 𝐇_eff≜𝐖_RF,b^*H𝐇_b 𝐅_RF^* and 𝐇_eff = 𝐔̅Σ̅𝐕̅^H by SVD. If N_s < N_RF, with the optimal𝐖_BB,b^* obtained by 𝐖^*_BB,b=𝐔(:,1:N_s) as in (<ref>), the digital precoder 𝐅_BB,w for the AN should have a form of𝐅_BB,w^* = 𝐕(:, N_s+1: N_RF).Finally, we normalize the AN digital precoder as𝐅^⋆_BB,w=𝐅^*_BB,w/𝐅^*_RF𝐅^*_BB,w_F.This AN-based secure hybrid precoder and combiner design algorithm is summarized in Table <ref>.§ SIMULATION STUDIESIn this section, we illustrate the simulation results of the proposedhybrid precoder and combiner design for secure transmission in mmWave MIMO systems. Consider a mmWave MIMO wiretap system in which Alice, Bob and Eve are all equipped with N_a=N_b=N_e=192 ULA antennas. The antenna spacing of all ULAs is d=λ/2. The AoA/AoD is assumed to be uniformly distributed in [0,2π]. We assume there exists 20 scatterers from which Bob and Eve randomly select 3∼8 scatterers as propagation paths with a certain probability that Bob and Eve may share some common scatterers. For simplicity, the noise variance σ^2_b and σ^2_e are set to 1.The codebooks consisting of array response vectors as (<ref>) and (<ref>) with 128 angle resolutions are uniformly quantized in [0,2π]. We first consider the case that Eve's CSI is known and evaluate the proposed secure hybrid beamforming design algorithm in Table I. For the comparison purpose, the representative codebook-based Spatially Sparse Precoding (SSP) algorithm with PLS effort and without PLS effort is also studies. For the fairness, we only focus the codebook-based algorithm and will not consider the codebook-free algorithms. The full-digital beamforming design is also included as the performance benchmark. Fig. <ref> shows secrecy rate versus SNR for different beamforming design algorithms. The number of RF chains is set as N_RF=2, and the number of data streams is also N_s=2. It can be observed from Fig. 
<ref> that our proposed algorithm can significantly outperform SSP algorithm in terms of secrecy rate. This result illustrates that our proposed algorithm can more efficiently prevent eavesdropping in the mmWave MIMO systems. However, we also notice that our algorithm still has a notable gap between full-digital benchmark, which inspires us to pursue a more efficient hybrid algorithm in the future. In Fig. <ref>, a similar simulation is carried out with larger number of RF chainsN_RF=4 and larger number of data streamsN_s=4. The similar conclusion can be drawn. Next, we focus on the case that the eavesdropper's CSI is unknown. The number of RF chains is set as N_RF=8, and the number of data streams is N_s=4. Fig. <ref> shows the spectrum efficiency of Bob R_b, spectrum efficiency of Eve R_e, and the resulting secrecy rate R_s = (R_b-R_e)^+ with the required legitimate transmission QoS R_γ. It can be clearly observed that our proposed AN-based algorithm can dramatically reduce R_e by adding AN in the transmit signaling while maintaining the required QoS R_b. Therefore,our proposed AN-based hybrid beamforming design can implement secure transmission even when the CSI of the passive eavesdropper is unknown. In Fig. <ref>, we repeat the similar simulation by increasing the number of RF chains to N_RF=16. The secrecy performance becomes better since more spatial DoF can be utilized for generating AN to interrupt Eve's reception.§ CONCLUSION In this paper, we proposed a hybrid precoder and combiner design for secure transmission in a mmWave MIMO wiretap system. With the eavesdropper's CSI, a joint analog precoder and combiner design algorithm was proposed to prevent the information leakage to the eavesdropper. When eavesdropper's CSI is unknown, we developed an AN-based hybrid beamforming approach, which can jam the eavesdropper's reception while maintaining the required QoS of the intended receiver. Simulation results demonstrated the significant secrecy performance improvement of our propose algorithms compared with other hybrid beamforming algorithms. 99 Rappaport IA 13 T. S. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi and F. Gutierrez “Millimeter wave mobile communications for 5G cellular: It will work!” IEEE Access, vol. 1, pp. 335-349, May 2013. Wang CL Z. Wang, M. Li, X. Tian, and Q. Liu, “Iterative hybrid precoder and combiner design for mmWave multiuser MIMO systems,” IEEE Commun. Lett., accepted to appear, 2017. SSP O. E. Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath Jr., “Spatially sparse precoding in millimeter wave MIMO systems,” IEEE Trans. Wireless Commun., vol. 13, no. 3, pp. 1499-1513, Mar. 2014.Alkhateeb TCOM 16 A. Alkhateeb and R. W. Heath Jr., “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” IEEE Trans. Commun., vol. 64, no. 5, pp. 1801-1818, May 2016. Gao TVT 16 X. Gao, L. Dai, C. Yuen, and Z. Wang, “Turbo-like beamforming based on Tabu search algorithm for millimeter-wave massive MIMO systems,” IEEE Trans. Veh. Technol., vol. 65, no. 7, pp. 5731-5737, July 2016. PE AltMin X. Yu, J.-C. Shen, J. Zhang, and K. B. Letaief, “Alternation minimization algorithms for hybrid precoding in millimeter wave MIMO systems,” IEEE J. Sel. Topics Signal Process., vol. 10, no. 3, pp. 485-500, April 2016. Gao JSAC 16 X. Gao, L. Dai, S. Han, C.-L. I, and R. W. Heath Jr., “Energy-efficient hybrid analog and digital precoding for mmWave MIMO systems with large antenna arrays,” IEEE J. Sel. 
Areas Commun., vol. 34, no. 4, pp. 998-1009, April 2016.GED A. Khisti, and G. W. Wornell, “Secure transmission with multiple antennas I: the MISOME wiretap channel,” IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3088-3104, July 2010.GSVD A. Khisti, and G. W. Wornell, “Secure transmission with multiple antennas-part II: the MIMOME wiretap channel,” IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5515-5532, Nov. 2010. Wanghm AN2C. Wang and Hui-Ming Wang, “Physical layer security in millimeter wave cellular networks,” IEEE Trans. Wireless Commun., vol. 15. no. 8, pp. 5569-5585, Aug. 2016. Wanghm BF Y. Ju, H.-M. Wang, T.-X. Zheng, Q. Yang, Y. Zhang, Z. Li, P. Mu, and Q. Yin, “Multi-antenna secure transmissions for millimeter wave wiretap channels,” in Proc. IEEE Global Telecommunications Conference Workshop on Trusted Communications with Physical Layer Security (Globecom16 - Workshop - TCPLS), Washington, D.C., Dec. 2016.Wanghm AN1 Y. Ju, H.-M. Wang, and T.-X. Zheng, “Secure transmissions in millimeter wave systems,” IEEE Trans. Commun., accepted to appear, 2016.scatter sharing channel T. S. Rappapport, G. R. MacCartney, M. K. Samimi Jr., and S. Sun, “Wideband millimeter-wave propagation measurements and channel models for future wireless communication system design,” IEEE Trans. Commun., vol. 63, no. 9, pp. 3029-3056, Sept. 2015.
http://arxiv.org/abs/1704.08099v1
{ "authors": [ "Xiaowen Tian", "Ming Li", "Zihuan Wang", "Qian Liu" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170426132828", "title": "Hybrid Procoder and Combiner Design for Secure Transmission in mmWave MIMO Systems" }
emptyStochastic Quasi-Fejér Block-Coordinate FixedPoint Iterations With Random Sweeping II: Mean-Square and Linear ConvergenceContact author:P. L. Combettes, [email protected], phone:+1 (919) 515-2671. The work of P. L. Combettes was partially supported by the National Science Foundation under grant CCF-1715671. Patrick L. Combettes^1 andJean-Christophe Pesquet^2 ^1North Carolina State UniversityDepartment of MathematicsRaleigh, NC 27695-8205, [email protected]^2CentraleSupélec, Université Paris-SaclayCenter for Visual Computing92295 Châtenay-Malabry, [email protected]  ========================================================================================================================================================================================================================================================================================================================8mm Reference <cit.> investigated the almost sure weak convergence of block-coordinate fixed point algorithms and discussed their applications to nonlinear analysis and optimization. This algorithmic framework features random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations and it allows for stochastic errors in the evaluation of the operators. The present paper establishes results on the mean-square and linear convergence of the iterates. Applications to monotone operator splitting and proximal optimization algorithms are presented.Keywords. Block-coordinate algorithm, fixed-point algorithm, mean-square convergence, monotone operator splitting, linear convergence, stochastic algorithm§ INTRODUCTION In <cit.>, we investigated the asymptotic behaviorof abstract stochastic quasi-Fejér fixed point iterations in a Hilbert spaceand applied these results to establish almost sure convergence properties for randomlyactivated block-coordinate, stochastically perturbed extensions ofalgorithms employed in fixed point theory, monotone operatorsplitting, and optimization. The basic property of the operatorsused in the underlying model was that of quasinonexpansiveness. Recall that an operator 𝖳→ with fixed point set 𝖳 isquasinonexpansive if (∀𝗓∈𝖳)(∀𝗑∈) 𝖳𝗑- 𝗓≤𝗑- 𝗓,and strictly quasinonexpansive if the above inequality is strict whenever 𝗑∉𝖳<cit.>. The fixed point problem under investigation in <cit.> was the following. Let (_i)_1≤ i≤ m be separable real Hilbert spaces andlet =_1⊕⋯⊕_m be their direct Hilbert sum. For every n∈, let𝖳_n→𝗑↦(𝖳_i,n 𝗑)_1≤ i≤ m be a quasinonexpansiveoperator where, for every i∈{1,…,m},𝖳_i,n→_i is measurable. Suppose that 𝖥=⋂_n∈𝖳_n≠. The problem is to find a point in 𝖥. In <cit.>, Problem <ref> was solved via the following block-coordinate algorithm. The main advantages of a block-coordinate strategy is to reduce the computational load and the memory requirements per iteration. In addition, our approach adopts random sweeping rules to select arbitrarily the blocks of variables that are activated at each iteration, andit allows for stochastic errors in the implementation of theoperators.At iteration n of Algorithm <ref>,λ_n∈ is a relaxation parameter, a_i,n an _i-valued random variable modeling some stochastic error in the application of the operator𝖳_i,n, and ε_i,n an {0,1}-valued random variable that signals the activation of the ith block𝖳_i,n of the operator𝖳_n. Almost sure weak and strong convergence properties of this scheme were established in <cit.>. 
In the present paper, we complement these results by proving mean-square and linear convergence properties for the orbits of (<ref>)under the additional assumption that each operator 𝖳_n inProblem <ref> satisfies the property(τ_n∈)(∀𝗓∈𝖳_n)(∀𝗑∈) 𝖳_n𝗑- 𝗓≤√(τ_n)𝗑-𝗓,which implies that 𝖳_n isstrictly quasinonexpansive and that 𝖳_n is a singleton. Our results appear to be the first of this kind regarding the block-coordinate algorithm (<ref>), even in the case of a single-block, when it reduces to the stochastically perturbed iteration[for n=0,1,…; ⌊[ x_n+1=x_n+λ_n(𝖳_nx_n+a_n-x_n), ]. ]special cases of which are studied in <cit.>.The problem we address is more precisely described as follows.Let (_i)_1≤ i≤ m be separable real Hilbert spaces, set =_1⊕⋯⊕_m, and let{τ_i,n}_1≤ i≤ m⊂. For every n∈, let𝖳_n→𝗑↦(𝖳_i,n 𝗑)_1≤ i≤ m be measurable and quasinonexpansive with common fixed point𝗑= (𝗑_i)_1≤ i≤ m, and such that(∀ n∈)(∀𝗑∈) 𝖳_n𝗑- 𝗑^2≤∑_i=1^m τ_i,n𝗑_i-𝗑_i^2.The problem is to find 𝗑. The proposed mean-square convergence results are the most comprehensive available to date for stochastic block-iterative fixed point methods at the level of generality and flexibility of Algorithm (<ref>). Special cases concerning finite-dimensional minimization problems involving a smooth function with restrictions in the implementation of(<ref>) are discussed in <cit.>.The remainder of the paper consists of 3 sections. In Section <ref>, we provide our notation and preliminary results. Section <ref> is dedicated to the mean-square convergence analysis of Algorithm <ref> and it discussesits linear convergence properties. Applications are presented inSection <ref>.§ NOTATION, BACKGROUND, AND PRELIMINARY RESULTSNotation.is a separable real Hilbert space withscalar product ··, associated norm·, Borel σ-algebra ℬ, and identity operator . The underlying probability space is(Ω,,). A -valued random variable is a measurablemap x(Ω,)→(,ℬ) <cit.>. The σ-algebra generated by a family Φ ofrandom variables is denoted by σ(Φ). Let ℱ=(_n)_n∈ be a sequence ofsub-sigma algebras ofsuch that (∀ n∈)_n⊂_n+1. We denote by ℓ_+(ℱ) the set of sequencesof -valued random variables (ξ_n)_n∈ such that, for every n∈, ξ_n is _n-measurable. We set(∀ p∈)ℓ_+^p(ℱ)= (ξ_n)_n∈∈ℓ_+(ℱ)∑_n∈ξ_n^p<. Let ℱ=(_n)_n∈ be a sequence of sub-sigmaalgebras ofsuch that(∀ n∈) _n⊂_n+1. Let (α_n)_n∈∈ℓ_+(ℱ),let (ϑ_n)_n∈∈ℓ_+(ℱ),let (η_n)_n∈∈ℓ_+(ℱ), andsuppose that there exists a sequence (χ_n)_n∈ insuch that χ_n<1 and(∀ n∈)α_n+1_n+ϑ_n≤χ_nα_n +η_nThen the following hold: * Set (∀ n∈) ϑ_n=∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)ϑ_k_0 and η_n=∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)η_k_0 (with the convention ∏_n+1^n·=1). Then(∀ n∈)α_n+1_0+ ϑ_n≤(∏_k=0^nχ_k) α_0+η_n *Suppose that α_0<and∑_n∈η_n<. Then ∑_n∈α_n< and ∑_n∈ϑ_n<.<ref>: Let n∈∖{0}. We deduce from(<ref>) thatα_n+1_n_n-1+ϑ_n_n-1 ≤χ_nα_n_n-1+η_n_n-1=χ_nα_n_n-1+η_n_n-1However, since _n-1⊂_n, we haveα_n+1_n_n-1= α_n+1_n-1. Therefore (<ref>) yieldsα_n+1_n-1≤χ_nα_n_n-1 +η_n_n-1-ϑ_n_n-1By proceeding by induction and observing that α_0 is _0-measurable, we obtain (<ref>).<ref>: We derive from (<ref>) that(∀ n∈)α_n+1+ϑ_n≤(∏_k=0^nχ_k)α_0+η_n= (∏_k=0^nχ_k) α_0+∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)η_k.On the other hand, there existq∈ and ρ∈ such that, for every integern>q, χ_n<ρ and, therefore, α_n+1+ϑ_n ≤(∏_k=0^qχ_k)ρ^n-qα_0+∑_k=0^q(∏_ℓ=k+1^q χ_ℓ)ρ^n-qη_k +∑_k=q+1^n ρ^n-kη_k≤(∏_k=0^qχ_k)ρ^n-qα_0+ max{(∏_ℓ=k+1^qχ_ℓ/ρ^q-k)_0≤ k≤ q,1}∑_k=0^nρ^n-kη_k.Since ∑_n∈ρ^n< and ∑_n∈η_n<, it follows from standard properties of the discrete convolution that (∑_k=0^nρ^n-kη_k)_n∈ is summable. 
We then deduce from (<ref>) that ∑_n∈α_n< and ∑_n∈ϑ_n<. Thus, the inequalities(∀ n∈)ϑ_n≤∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)ϑ_k=ϑ_nyield ∑_n∈ϑ_n<. Let ϕ→ be a strictly increasingfunction such that lim_t→ϕ(t)=, let(x_n)_n∈ be a sequence of -valued random variables, and let (_n)_n∈ be a sequence of sub-sigma-algebras ofsuch that(∀ n∈)σ(x_0,…,x_n)⊂_n⊂_n+1.Suppose that there exist 𝗓∈, (ϑ_n)_n∈∈ℓ_+(ℱ), (η_n)_n∈∈ℓ_+(ℱ), and a sequence (χ_n)_n∈ insuch that χ_n<1 and(∀ n∈)ϕ(x_n+1-𝗓)_n+ϑ_n ≤χ_nϕ(x_n-𝗓)+ η_nSet (∀ n∈) ϑ_n=∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)ϑ_k_0 and η_n=∑_k=0^n(∏_ℓ=k+1^n χ_ℓ)η_k_0. Then the following hold: * (∀ n∈) ϕ(x_n+1-𝗓)_0+ϑ_n ≤(∏_k=0^nχ_k)ϕ(x_0-𝗓)+ η_n*Let p∈ and set ϕ=|·|^p. Suppose thatx_0∈ L^p(Ω,,;) and that∑_n∈η_n<. Then the following hold: * x_n-𝗓^p→ 0 and ∑_n∈ϑ_n<.* Suppose that (η_n)_n∈∈ℓ_+^1(ℱ). Then (x_n)_n∈ converges stronglyto 𝗓. We apply Lemma <ref><ref> with (∀ n∈) α_n=ϕ(x_n-𝗓).<ref>: See Lemma <ref><ref>.<ref>:Since L^p(Ω,,;) is a vector space<cit.> that contains x_0 and 𝗓, it also containsx_0-𝗓. Hence α_0=x_0-𝗓^p<, and it follows from Lemma <ref><ref> that ∑_n∈x_n-𝗓^p< and ∑_n∈ϑ_n<. Consequently,x_n-𝗓^p→ 0. <ref>: In view of (<ref>), since (η_n)_n∈∈ℓ_+^1(ℱ), it follows from <cit.> that(x_n-𝗓)_n∈ converges However, we derive from (<ref>) that there exists astrictly increasing sequence (k_n)_n∈ insuch that x_k_n-𝗓→ 0 <cit.>.Altogether x_n-𝗓→ 0Let(λ_n)_n∈ be a sequence insuch thatinf_n∈λ_n>0, and let (t_n)_n∈,(x_n)_n∈, and (e_n)_n∈ be sequences of-valued random variables. Further, let(_n)_n∈ be a sequence of sub-sigma-algebras of such that(∀ n∈)σ(x_0,…,x_n)⊂_n⊂_n+1.Suppose that the following are satisfied: *(∀ n∈) x_n+1=x_n+λ_n(t_n+e_n-x_n).*There exists a sequence (ξ_n)_n∈ insuch that∑_n∈√(ξ_n)<and (∀ n∈) e_n^2_n≤ξ_n.*There exist 𝗓∈𝖧, (θ_n)_n∈∈ℓ_+(ℱ),(ν_n)_n∈∈ℓ_+(ℱ), and a sequence (μ_n)_n∈ insuch thatμ_n<1 and (∀ n∈)t_n-𝗓^2_n+θ_n≤μ_n x_n-𝗓^2+ν_n Set(∀ n∈)χ_n=1-λ_n+λ_nμ_n+√(ξ_n)λ_n(1-λ_n+λ_n √(μ_n)) ϑ_n=∑_k=0^n [∏_ℓ=k+1^nχ_ℓ] λ_k(θ_k_0+(1-λ_k) t_k-x_k^2_0) η_n= ∑_k=0^n[∏_ℓ=k+1^nχ_ℓ]λ_k(ν_k_0+(1-λ_k+λ_k( 2√(ν_k)_0+√(μ_k)))√(ξ_k) +λ_kξ_k).Then the following hold: * (∀ n∈) x_n+1-𝗓^2_0+ ϑ_n≤(∏_k=0^nχ_k) x_0-𝗓^2+η_n*Suppose that x_0∈ L^2(Ω,,;) and that∑_n∈√(ν_n)<.Then the following hold: * x_n-𝗓^2→ 0.* ∑_n∈θ_n<. * ∑_n∈(1-λ_n)t_n-x_n^2<.* Suppose that (ν_n)_n∈∈ℓ_+^1/2(ℱ). Then (x_n)_n∈ converges stronglyto 𝗓. <ref>:Set λ=inf_n∈λ_n. Then(∀ n∈)χ_n≤ 1-(1-μ_n)λ +√(ξ_n)(1+√(μ_n)).Since μ_n<1 and limξ_n=0, we have χ_n<1. 
In addition, we derivefrom <ref>, <cit.>, and(<ref>) that(∀ n∈) x_n+1-𝗓^2=(1-λ_n) (x_n-𝗓)+λ_n (t_n-𝗓)^2+2λ_n(1-λ_n) (x_n-𝗓)+λ_n(t_n-𝗓)e_n+λ_n^2e_n^2=(1-λ_n)x_n-𝗓^2 +λ_nt_n-𝗓^2-λ_n(1-λ_n) t_n-x_n^2 +2λ_n(1-λ_n) (x_n-𝗓)+λ_n(t_n-𝗓)e_n+λ_n^2e_n^2Hence, <ref> implies that-4mm (∀ n∈)x_n+1-𝗓^2_n≤(1-λ_n)x_n-𝗓^2+ λ_nt_n-𝗓^2_n -λ_n(1-λ_n)t_n-x_n^2_n +2λ_n((1-λ_n)x_n-𝗓+λ_n √(t_n-𝗓^2_n))√(e_n^2_n) +λ_n^2e_n^2_n≤(1-λ_n)x_n-𝗓^2+λ_n( μ_nx_n-𝗓^2+ν_n -θ_n)-λ_n(1-λ_n)t_n-x_n^2_n +2λ_n((1-λ_n)x_n-𝗓+λ_n√(μ_n x_n-𝗓^2+ν_n))√(e_n^2_n) +λ_n^2e_n^2_nNow set (∀ n∈)ϑ_n=λ_nθ_n+ λ_n(1-λ_n)t_n-x_n^2_n κ_n=λ_nν_n+2λ_n^2√(ν_n)√(e_n^2_n)+λ_n^2e_n^2_n η_n=λ_nν_n+λ_n (1-λ_n+λ_n(2√(ν_n)+√(μ_n))) √(ξ_n)+λ_n^2ξ_n.It follows from <ref> that(∀ n∈) x_n+1-𝗓^2_n≤(1-λ_n+λ_nμ_n)x_n-𝗓^2 +2λ_n (1-λ_n+λ_n√(μ_n))x_n-𝗓√(e_n^2_n) -ϑ_n+κ_n≤(1-λ_n+λ_nμ_n)x_n-𝗓^2 +λ_n (1-λ_n+λ_n√(μ_n))(x_n-𝗓^2+1)√(e_n^2_n) -ϑ_n+κ_n≤χ_nx_n-𝗓^2 -ϑ_n+η_nThe result then follows by applying Lemma <ref><ref> with ϕ=|·|^2.<ref>:According to (<ref>), for every n∈,η_n=λ_nν_n+λ_n (1-λ_n+λ_n(2√(ν_n)+√(μ_n))) √(ξ_n)+λ_n^2ξ_n=λ_nν_n+(1-λ_n+√(μ_n))λ_n√(ξ_n) +2λ_n^2√(ξ_n) √(ν_n) +(λ_n√(ξ_n))^2 ≤ν_n+(1+√(μ_n))√(ξ_n) +2(sup_k∈√(ξ_k))√(ν_n) +(√(ξ_n))^2,where we have used the fact that λ_n∈ and Jensen's inequality. We deduce from (<ref>), (<ref>), and (<ref>) that ∑_n∈η_n<.Hence it follows from (<ref>) and Lemma <ref><ref> that x_n-𝗓^2→ 0 and that ∑_n∈ϑ_n<.In view of (<ref>), we obtain <ref> and <ref>.<ref>: In view of (<ref>), if(ν_n)_n∈∈ℓ_+^1/2(ℱ), then (η_n)_n∈∈ℓ_+^1(ℱ) and the strongconvergence claim follows from Lemma <ref><ref>. * Under the assumptions of Theorem <ref>,if ν_n≡ 0 and ξ_n≡ 0, then η_n≡ 0 and it follows from<ref> that (x_n-𝗓^2_0)_n∈converges linearly to 0.* The weak and strong almost sure convergences of asequence (x_n)_n∈ governed by <ref>and (<ref>) were established in <cit.> under different assumptions on (μ_n)_n∈, (ν_n)_n∈, and (e_n)_n∈.§ MEAN-SQUARE AND LINEAR CONVERGENCE OF ALGORITHM <REF>We complement the almost sure weak and strong convergence resultsof <cit.> on the convergence of the orbits of Algorithm <ref> by establishing mean-square and linear convergence properties.§.§ Main resultsThe next theorem constitutes our main result in terms ofmean-square convergence. For added flexibility, this convergencewill be evaluated in a norm |||·||| onparameterized by weights (ω_i)_1≤ i≤ m∈^m anddefined by(∀𝗑∈) |||𝗑|||^2= ∑_i=1^mω_i𝗑_i^2. Consider the setting of Problem <ref> andAlgorithm <ref>, and let (_n)_n∈ be a sequence of sub-sigma-algebras ofsuch that(∀ n∈)σ(x_0,…,x_n)⊂_n⊂_n+1.Assume that the following are satisfied: * inf_n∈λ_n>0.*There exists a sequence (α_n)_n∈ insuch that∑_n∈√(α_n)< and, for every n∈, a_n^2_n≤α_n.*For every n∈, ℰ_n=σ(ε_n) and _n are independent.*For every i∈{1,…,m}, 𝗉_i=[ε_i,0=1]>0.Then the following hold: *Let (ω_i)_1≤ i≤ m∈^m be such that(∀ i∈{1,…,m})τ_i,n<ω_i𝗉_i max_1≤ i≤ mω_i𝗉_i=1,set(∀ n∈)ξ_n=α_nmax_1≤ i≤ m ω_i μ_n= 1-min_1≤ i≤ m(𝗉_i-τ_i,n/ω_i),and define(∀ n∈)χ_n=1-λ_n(1-μ_n)+√(ξ_n)λ_n(1-λ_n+λ_n√(μ_n)) η_n=∑_k=0^n [∏_ℓ=k+1^nχ_ℓ]λ_k(1-λ_k+λ_k√(μ_k) +λ_k√(ξ_k))√(ξ_k).Then(∀ n∈)∑_i=1^mω_i x_i,n+1-𝗑_i^2_0≤(∏_k=0^nχ_k)(∑_i=1^mω_i x_i,0-𝗑_i,0^2)+ η_n *Suppose that x_0∈ L^2(Ω,,;) and (∀ i∈{1,…,m}) τ_i,n<1. Then x_n-𝗑^2→ 0 andx_n→𝗑 <ref>: We are going to apply Theorem <ref> in the Hilbert space (,|||·|||) defined by (<ref>). 
Set(∀ n∈)t_n=(x_i,n+ε_i,n(𝖳_i,n x_n-x_i,n))_1≤ i≤ mande_n=(ε_i,na_i,n)_1≤ i≤ m.Then it follows from (<ref>) that(∀ n∈)x_n+1=x_n+λ_n( t_n+e_n-x_n),while <ref> implies that(∀ n∈)|||e_n|||^2_n≤|||a_n|||^2_n≤α_nmax_1≤ i≤ mω_i=ξ_n.We note that it also follows from <ref> that ∑_n∈√(ξ_n)<. Now define(∀ n∈)(∀ i∈{1,…,m})𝗊_i,n×𝖣→ (𝗑,ϵ)↦𝗑_i-𝗑_i+ ϵ_i(𝖳_i,n 𝗑-𝗑_i)^2.Then, for every n∈ and every i∈{1,…,m}, the measurability of 𝖳_i,n implies that of the functions(𝗊_i,n(·,ϵ)) _ϵ∈𝖣. However, for every n∈, <ref> asserts that the events([ε_n= ϵ])_ϵ∈𝖣 constitute an almost sure partition of Ω and are independentfrom _n, while the random variables(𝗊_i,n(x_n,ϵ)) _1≤ i≤ m ϵ∈𝖣are _n-measurable. Therefore, we derive from<cit.> that-6mm (∀ n∈)(∀ i∈{1,…,m})x_i,n+ε_i,n(𝖳_i,n x_n-x_i,n)-𝗑_i^2_n 53mm=𝗊_i,n(x_n,ε_n)∑_ϵ∈𝖣 1_[ε_n=ϵ]_n 53mm=∑_ϵ∈𝖣𝗊_i,n(x_n,ϵ) 1_[ε_n=ϵ]_n 53mm=∑_ϵ∈𝖣1_[ε_n=ϵ]_n𝗊_i,n(x_n,ϵ) 53mm=∑_ϵ∈𝖣[ε_n=ϵ] 𝗊_i,n(x_n,ϵ)Combining this identity with (<ref>), (<ref>), <ref>, (<ref>), and (<ref>) yields(∀ n∈) |||t_n-𝗑|||^2_n=∑_i=1^mω_i (x_i,n+ε_i,n(𝖳_i,n x_n-x_i,n)-𝗑_i^2  | _n)=∑_i=1^mω_i ∑_ϵ∈𝖣[ε_n=ϵ] 𝗊_i,n(x_n,ϵ) =∑_i=1^mω_i (∑_ϵ∈𝖣,ϵ_i=1[ε_n=ϵ] 𝖳_i,n x_n-𝗑_i^2+ ∑_ϵ∈𝖣, ϵ_i=0[ε_n=ϵ] x_i,n-𝗑_i^2)=∑_i=1^mω_i𝗉_i 𝖳_i,n x_n-𝗑_i^2+ ∑_i=1^mω_i(1-𝗉_i) x_i,n-𝗑_i^2≤(max_1≤ i≤ mω_i𝗉_i) ∑_i=1^m𝖳_i,n x_n- 𝗑_i^2+∑_i=1^mω_i(1-𝗉_i) x_i,n-𝗑_i^2=|||x_n-𝗑|||^2+ 𝖳_nx_n- 𝗑^2 -∑_i=1^mω_i𝗉_i x_i,n-𝗑_i^2≤|||x_n-𝗑|||^2+ ∑_i=1^m (τ_i,n-ω_i𝗉_i) x_i,n-𝗑_i^2=∑_i=1^mω_i(1+τ_i,n/ω_i-𝗉_i)x_i,n- 𝗑_i^2≤(1-min_1≤ i≤ m(𝗉_i -τ_i,n/ω_i))|||x_n-𝗑|||^2Altogether, properties <ref>–<ref> of Theorem <ref> are satisfied with (∀ n∈)θ_n=ν_n=0.On the other hand, it follows from (<ref>) and (<ref>) that μ_n<1. Hence, we derive from Theorem <ref><ref> that(∀ n∈)|||x_n+1-𝗑|||^2_0≤(∏_k=0^nχ_k) |||x_0-𝗑|||^2+ η_n <ref>: Consider <ref> when(∀ i∈{1,…,m}) ω_i=1/𝗉_i. The convergence then follows from the inequalities (∀𝗑∈)min_1≤ i≤ m𝗉_i|||𝗑 |||≤𝗑≤max_1≤ i≤ m𝗉_i|||𝗑|||and Theorem <ref><ref>.§.§ Linear convergenceAs an offspring of the results in Section <ref>, we obtain the following perturbed linear convergence result.Consider the setting of Problem <ref> andAlgorithm <ref>, suppose that <ref>-<ref> in Theorem <ref> are satisfied, and define (χ_n)_n∈and (η_n)_n∈ as in(<ref>), wheremax_1≤ i≤ mτ_i,n<1 and(∀ n∈)ξ_n=α_nmin_1≤ i≤ m𝗉_i μ_n= 1-min_1≤ i≤ m𝗉_i (1-τ_i,n).Then(∀ n∈)x_n+1- 𝗑^2_0≤max_1≤ i≤ m𝗉_i/min_1≤ i≤ m𝗉_i(∏_k=0^nχ_k) x_0-𝗑^2+ η_nIn view of (<ref>), the claim follows fromTheorem <ref><ref> applied with(∀ i∈{1,…,m}) ω_i=1/𝗉_i. Let us now make some observations to assess the consequences of Corollary <ref> in terms of bounds on convergencerates, and the potential impact of the activation probabilities of the blocks (𝗉_i)_1≤ i≤ mon them. Let us consider the case when α_n≡ 0,i.e., when there are no errors. Set(∀ n∈)χ_n=1-λ_nmin_1≤ i≤ m𝗉_i(1-τ_i,n).Then we derive from (<ref>) and (<ref>) that (∀ n∈)x_n+1- 𝗑^2_0≤max_1≤ i≤ m𝗉_i/min_1≤ i≤ m𝗉_i(∏_k=0^nχ_k)x_0- 𝗑^2Since (<ref>) yields sup_n∈χ_n<1,a linear convergence rate is thus obtained. For simplicity, let us further assume that the blocks are processeduniformly in the sense that (∀ i∈{1,…,m}) 𝗉_i=𝗉. Setχ=1-inf_n∈(λ_n (1-max_1≤ i≤ mτ_i,n))∈.Then(∀ n∈)χ_n=1-λ_n𝗉( 1-max_1≤ i≤ mτ_i,n) ≤ 1-(1-χ)𝗉.When 𝗉=1, the upper bound in (<ref>) on the convergence rate is minimal and equal to χ. This is consistent with the intuition that frequently activating the coordinates should favor the convergence speed as a function of the iteration number. 
On the other hand, activating the blocks less frequently induces a reduction of the computational load per iteration. In large scale problems, this reduction may actually be imposedby limited computing or memory resources. In Algorithm <ref>, the cost of computing 𝖳_i,n(x_1,n,…,x_m,n) is on the average 𝗉 times smaller than in the standard non block-coordinate approach. Hence, if we assume that this costis independent of i and the iteration number n, N iterations of the block-coordinate algorithm have the same computational cost as 𝗉N iterations of a non block-coordinate approach. In view of (<ref>), let us introduce the quantityϱ(𝗉)=-ln(1-(1-χ)𝗉)/𝗉to evaluate the convergence rate normalized by the probability 𝗉 accounting for computational cost. Under the above assumptions, (<ref>) yields(∀ n∈)∏_k=0^nχ_k≤exp(-ϱ(𝗉)𝗉(n+1)).Elementary calculations show that, if χ≠ 0,-1-χ/lnχ≤ϱ(𝗉)/ϱ(1)≤ 1.For example, if χ>0.2, then ϱ(𝗉)/ϱ(1)∈[0.49,1].This shows that, for values of χ not too small, the decrease in the normalized convergence rate remains limited with respect to a deterministic approach in which all the blocks are activated. This fact is illustrated by Figure <ref>, where the graph of ϱ is plotted for several values of χ. Let us consider the special case in which, for everyi∈{1,…,m}, τ_i,n≡τ_i. Then (<ref>) becomes(∀ n∈)χ_n=1-λ_nmin_1≤ i≤ m𝗉_i(1-τ_i).Now, let us further assume that, at each iteration n, only one of the operators (𝖳_i,n)_1≤ i≤ m is activated randomly. In this case, ∑_i=1^m𝗉_i=1 and choosing(∀ i ∈{1,…,m})𝗉_i = (1-τ_i)^-1/∑_j=1^m (1-τ_j)^-1.leads to a minimum value of χ_n. § APPLICATIONSIn variational analysis, commonly encountered operators include resolvent of monotone operators, projection operators, proximityoperators of convex functions, gradient operators, and various compositions and combinations thereof <cit.>. Specific instances of such operators used in iterative processes which satisfy property (<ref>) can be found in<cit.>.In this section we highlight a couple of examples in the area of splitting methods for systems of monotone inclusions. The notation is that used in Problem <ref>. In addition, let 𝖠→ 2^ be a set-valued operator. We denote by𝖠=𝗑∈0∈𝖠𝗑 the set of zeros of 𝖠 and by𝖩_𝖠=(+𝖠)^-1 the resolvent of 𝖠. Recall that, if 𝖠 is maximally monotone, then 𝖩_𝖠 is defined everywhere on and nonexpansive <cit.>. In the particular case when 𝖠 is the Moreau subdifferential ∂𝖿 of a proper lower semicontinuous convex function𝖿→, 𝖩_𝖠 is the proximity operator _𝖿 of 𝖿<cit.>.For every i∈{1,…,m}, let 𝖠_i→ 2^ be a maximally monotone operator, and consider the coupled inclusion problemfind 𝗑 =(𝗑_i)_1≤ i≤ m∈ such that0∈𝖠_1𝗑_1+𝗑_1-𝗑_20∈𝖠_2𝗑_2+𝗑_2-𝗑_34mm⋮ 0∈𝖠_m-1𝗑_m-1+𝗑_m-1-𝗑_m0∈𝖠_m𝗑_m+𝗑_m-𝗑_1.For instance, in the case when each 𝖠_i is the normalcone operator to a nonempty closed convex set,(<ref>) models limit cycles in the method of periodic projections <cit.>.Another noteworthy instance is when m=2, 𝖠_1=∂𝖿_1, and 𝖠_2=∂𝖿_2, where 𝖿_1 and 𝖿_2 are proper lower semicontinuous functions fromto . Then (<ref>) reducesto the joint minimization problem(𝗑_1,𝗑_2)∈^2𝖿_1(𝗑_1)+𝖿_2(𝗑_2)+ 1/2𝗑_1-𝗑_2^2,studied in <cit.>. Now set𝖠𝗑↦(𝖠_1𝗑_1,…,𝖠_m𝗑_m) and𝖡𝗑↦(𝗑_1-𝗑_2,𝗑_2-𝗑_3, …,𝗑_m-𝗑_1).Then it follows from <cit.> that𝖠 is maximally monotone. On the other hand, 𝖡 is linear, bounded, and monotone since (∀𝗑∈)𝖡𝗑𝗑=𝖡𝗑^22≥ 0.It is therefore maximally monotone <cit.>. Altogether,𝖠+𝖡 is maximally monotone by <cit.>. In addition, suppose that each 𝖠_i is strongly monotone with constantδ_i∈. 
Then 𝖠 is strongly monotone with constant min_1≤ i≤ mδ_i, and so is 𝖠+𝖡. We therefore deduce from<cit.> that it possessesa unique zero 𝗑, which isthe unique solution to (<ref>). Let us also note that, for every i∈{1,…,m}, the resolvent𝖩_𝖠_i is Lipschitzcontinuous with constant η_i=1/(1+δ_i)∈<cit.>. Next, define 𝖳→𝗑↦(𝖳_i 𝗑)_1≤ i≤ m, where, for every i∈{1,…,m},𝖳_i→_i𝗑↦𝖩_𝖠_i𝗑_i+1, with the convention 𝗑_m+1=𝗑_1. Then we derive from (<ref>) that𝖳𝗑= 𝗑. Moreover,(∀ n∈)(∀𝗑∈) 𝖳𝗑- 𝗑^2=∑_i=1^m𝖩_𝖠_i𝗑_i+1- 𝗑_i^2=∑_i=1^m𝖩_𝖠_i𝗑_i+1- 𝖩_𝖠_i𝗑_i+1^2 ≤∑_i=1^mη^2_i𝗑_i+1-𝗑_i+1^2,which shows that (<ref>) is satisfied upon choosing𝖳_n≡𝖳 and, for every i∈{1,…,m}, τ_i,n≡η_i^2. In this scenario, Algorithm <ref> becomes[ for n=0,1,…; ⌊[ for i=1,…,m-1; ⌊[ x_i,n+1=x_i,n+ε_i,nλ_n( 𝖩_𝖠_ix_i+1,n+a_i,n-x_i,n) ].;x_m,n+1=x_m,n+ε_m,nλ_n( 𝖩_𝖠_mx_1,n+a_m,n-x_m,n), ]. ]and Theorem <ref> describes its asymptotic behavior. In the particular case of (<ref>), for f_1 and f_2 strongly convex, (<ref>) withλ_n≡ 1 and no error, reduces to[for n=0,1,…; ⌊[x_1,n+1=x_1,n+ε_1,n( 𝗉𝗋𝗈𝗑_𝖿_1x_2,n-x_1,n); x_2,n+1=x_2,n+ε_2,n( 𝗉𝗋𝗈𝗑_𝖿_2x_1,n-x_2,n). ]. ]In the deterministic setting in which ε_1,n≡ 1 and ε_2,n≡ 1, the resulting sequence (x_2,n)_n∈ is that produced by the alternating proximity operator method of <cit.>, further studied in<cit.>. We consider an m-agent model investigated in <cit.>.For every i∈{1,…,m}, let𝖠_i_i→ 2^_i be a maximally monotone operator modeling some abstract utility of agent i and let 𝖡_i→_i be a coupling operator. It is assumed that the operator 𝖡→𝗑↦(𝖡_i 𝗑)_1≤ i≤ m is β-cocoercive <cit.> for some β∈, thatis,(∀𝗑∈) (∀𝗒∈)𝗑-𝗒𝖡𝗑- 𝖡𝗒≥β𝖡𝗑- 𝖡𝗒^2.The equilibrium problem is to find 𝗑∈such that(∀ i∈{1,…,m}) 0∈𝖠_i𝗑_i+𝖡_i(𝗑_1,…, 𝗑_m).For every i∈{1,…,m}, let us further assume that 𝖠_i is δ_i-strongly monotone for someδ_i∈ or, equivalently, that𝖬_i=𝖠_i-δ_i is monotone.Since 𝖡 is maximally monotone <cit.>, arguing as inExample <ref>, we arrive at the conclusion that𝖠+𝖡 has exactly one zero 𝗑, and that𝗑 is the unique solutionto (<ref>). Letδ=min_1≤ i≤ mδ_i, and(∀ n∈)θ_n∈ [0,δ]andγ_n∈.Set(∀ n∈)𝖢_n→ 2^𝗑↦i=1m(𝖬_i+(δ_i-θ_n))𝗑_i 𝖣_n=𝖡+θ_n 𝖳_n= 𝖩_γ_n𝖢_n∘(-γ_n𝖣_n).Now let n∈. We first observe that(γ_n𝖢_n+ γ_n𝖣_n) =(𝖠+𝖡) ={𝗑} =𝖳_n,and derive from <cit.> that 𝖩_γ_n𝖢_n𝗑↦(𝖩_γ_n𝖬_i/1+γ_n(δ_i-θ_n)(𝗑_i/1+γ_n(δ_i-θ_n)) )_1≤ i≤ m.Hence (<ref>) entails that𝖩_γ_n𝖢_n is Lipschitz continuous with constant1/(1+γ_n(δ-θ_n)).On the other hand, since 𝖡 isβ-cocoercive, there exists a nonexpansive operator 𝖱→ such thatβ𝖡=(+𝖱)/2 <cit.>. We have -γ_n𝖣_n= (1-γ_nθ_n-γ_n/2β) -γ_n/2β𝖱.In turn, a Lipschitz constant of -γ_n𝖣_n is |1-γ_n(θ_n+1/(2β))|+γ_n/(2β), and henceone for 𝖳_n isζ_n=|1-γ_n(θ_n+1/(2β))|+ γ_n/(2β)/1+γ_n(δ-θ_n).Note that ζ_n= 1-γ_nθ_n1+γ_n( δ-θ_n)<1, ifγ_n≤2β1+2βθ_n; γ_n(θ_n+1/β)-11+γ_n(δ-θ_n)<1, if2β1+2βθ_n<γ_n< 2β1+β(2θ_n-δ).Consequently, imposingγ_n<2β1+β(2θ_n-δ)places us in the framework of Problem <ref> with (∀ i∈{1,…,m}) τ_i,n=ζ^2_n. Algorithm <ref> for solving (<ref>), that is,[for n=0,1,…; ⌊[for i=1,…,m; ⌊[ x_i,n+1=x_i,n+ε_i,nλ_n (𝖩_γ_n𝖬_i/1+γ_n(δ_i-θ_n)((1-γ_n θ_n)x_i,n-γ_n𝖡_ix_n/1+γ_n(δ_i-θ_n))+a_i,n-x_i,n), ]. ].;]is then an instance of the block-coordinate forward-backward algorithm of <cit.>. Its convergence properties in the present setting are given in Theorem <ref>.In view of (<ref>), (<ref>) constitutesa special case of (<ref>) and it can also be solved via (<ref>). 
In Example <ref>, we have exploited the special structure of 𝖡 to obtain tighter coefficients(τ_i,n)_1≤ i≤ m, n∈ in (<ref>).Let 𝗀→ be a convex function which is differentiable with a β^-1-Lipschitzian gradient for some β∈ and, for every i∈{1,…,m}, let 𝖿_i_i→ be a proper lower semicontinuous δ_i-strongly convex function for some δ_i∈.We consider the optimization problem𝗑_1∈_1,…,𝗑_m∈_m∑_i=1^m𝖿_i(𝗑_i)+𝗀 (𝗑_1,…,𝗑_m).Then it results from standard facts <cit.> that this problem is thespecial case of Example <ref> in which𝖡=∇𝗀 and, for every i∈{1,…,m}, 𝖠_i=∂𝖿_i. Now set (∀ i∈{1,…,m}) 𝗁_i=𝖿_i-δ_i·^2/2. Then(<ref>) assumes the form [for n=0,1,…; ⌊[ for i=1,…,m; ⌊[ x_i,n+1=x_i,n+ε_i,nλ_n (_γ_n𝗁_i/1+γ_n(δ_i-θ_n)((1-γ_nθ_n)x_i,n-γ_n ∇_i 𝗀(x_n)/1+γ_n(δ_i-θ_n))+a_i,n-x_i,n), ]. ].;]where ∇_i 𝗀→_i is the ith component of ∇𝗀. In the case of a non block-coordinate implementation, i.e., m=1, a mean-square convergence result for the forward-backward algorithm can be found in <cit.> under different assumptions thanours and, in particular, the requirement that the proximalparameters (γ_n)_n∈ must go to 0.In connection with the linear convergence of (<ref>)deriving from Corollary <ref>, let us note that a similar result was obtained in <cit.> by imposing the restrictions (∀ i∈{1,…,m})_i=^N_i,𝗉_i=1/m, and(∀ n∈)λ_n=1and a_i,n=0.In this specific setting the proximal parameter in <cit.>was chosen differently for each block: it is not allowed to varywith the iteration n as in (<ref>), but it can be chosen differently for each i.In the case when (∀ i ∈{1,…,m}) 𝖿_i=0,more freedom was given to the choice of (𝗉_i)_1≤ i≤ m in <cit.>, but by still activating only one blockat each iteration. Further narrowing the problem to the minimization of asmooth strongly convex function on ^N, a coordinate descent method is proposed in <cit.> which requires, for every i∈{1,…,m}, _i= and allows for multiple coordinates to be randomly updated ateach iteration, as in (<ref>). 99Acke80 F. Acker and M. A. Prestel, Convergence d'un schéma de minimisation alternée, Ann. Fac. Sci. Toulouse V. Sér. Math., vol. 2, pp. 1–9, 1980.Atch17 Y. F. Atchadé, G. Fort, and E. Moulines,On perturbed proximal gradient algorithms,J. Mach. Learn. Res., vol. 18, pp. 1–33, 2017.Sico10 H. Attouch, L. M. Briceño-Arias, and P. L. Combettes, A parallel splitting method for coupled monotone inclusions, SIAM J. Control Optim., vol. 48, pp. 3246–3270, 2010. Bail12 J.-B. Baillon, P. L. Combettes, and R. Cominetti, There is no variational characterization of the cycles in themethod of periodic projections, J. Funct. Anal., vol. 262,pp. 400–408, 2012.Baus16 H. H. Bauschke, J. Y. Bello Cruz, T. T. A. Nghia, H. M. Phan, andX. Wang, Optimal rates of linear convergence of relaxed alternating projections and generalized Douglas-Rachford methods for two subspaces, Numer. Algorithms, vol. 73, pp. 33–76, 2016.Livre1H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in HilbertSpaces, 2nd ed. Springer, New York, 2017.Nona05 H. H. Bauschke, P. L. Combettes, and S. Reich, The asymptotic behavior of the composition of two resolvents, Nonlinear Anal., vol. 60, pp. 283–301, 2005.Baus12 H. H. Bauschke, S. M. Moffat, and X. Wang, Firmly nonexpansive mappings and maximally monotone operators: Correspondence and duality, Set-Valued Var. Anal., vol. 20, pp. 131–153, 2012.Botr17 R. I. Boţ and E. R. Csetnek,Convergence rates for forward-backward dynamical systems associatedwith strongly monotone inclusions,J. Math. Anal. Appl., vol. 457, pp. 
1135–1152, 2018.Botr15 R. I. Boţ, E. R. Csetnek, A. Heinrich, and C. Hendrich, On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems,Math. Programming, vol. 150, pp. 251–279, 2015.Siop15 P. L. Combettes and J.-C. Pesquet,Stochastic quasi-Fejér block-coordinate fixed point iterationswith random sweeping, SIAM J. Optim., vol. 25, pp. 1221–1248, 2015.Pafa16 P. L. Combettes and J.-C. Pesquet,Stochastic approximations and perturbations in forward-backward splitting for monotone operators, Pure Appl. Funct. Anal., vol. 1, pp. 13–37, 2016.Opti14 P. L. Combettes and B. C. Vũ, Variable metric forward-backward splitting with applicationsto monotone inclusions in duality, Optimization, vol. 63, pp. 1289–1318, 2014.Fort95 R. M. Fortet, Vecteurs, Fonctions et Distributions Aléatoires dans les Espaces de Hilbert. Hermès, Paris, 1995.Ledo91 M. Ledoux and M. Talagrand, Probability in Banach Spaces: Isoperimetry and Processes. Springer, New York, 1991.LoevIIM. Loève, Probability Theory II, 4th ed.Springer, New York, 1978.Mor62bJ. J. Moreau,Fonctions convexes duales et points proximaux dans unespace hilbertien, C. R. Acad. Sci. Paris Sér. A Math., vol. 255, pp. 2897–2899, 1962.Nest12 Yu. Nesterov, Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM J. Optim., vol. 22, pp. 341–362, 2012.Pesq12 J.-C. Pesquet and N. Pustelnik, A parallel inertial proximal optimization method, Pac. J. Optim., vol. 8, pp. 273–305, 2012.Rich14 P. Richtárik and M. Takáč,Iteration complexity of randomized block-coordinate descentmethods for minimizing a composite function, Math. Program., vol. A144, pp. 1–38, 2014.Rich16 P. Richtárik and M. Takáč,On optimal probabilities in stochastic coordinate descent methods, Optim. Lett., vol. 10,pp 1233–1243, 2016.Rock76R. T. Rockafellar,Monotone operators and the proximal point algorithm, SIAM J. Control Optim., vol. 14, pp. 877–898, 1976.Rock09R. T. Rockafellar and R. J. B. Wets,Variational Analysis, 3rd printing. Springer-Verlag, New York, 2009.Rosa16 L. Rosasco, S. Villa, and B. C. Vũ,Stochastic forward-backward splitting method for solving monotone inclusions in Hilbert spaces,J. Optim. Theory Appl., vol. 169, pp. 388–406, 2016.Sch93bL. Schwartz, Analyse III – Calcul Intégral. Hermann, Paris, 1993.Sibo70M. Sibony,Méthodes itératives pour les équations et inéquations auxdérivées partielles non linéaires de type monotone, Calcolo, vol. 7, pp. 65–183, 1970.
http://arxiv.org/abs/1704.08083v2
{ "authors": [ "Patrick L. Combettes", "Jean-Christophe Pesquet" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170426125221", "title": "Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations With Random Sweeping II: Mean-Square and Linear Convergence" }
Department of Physics, University of Colorado, Boulder, Colorado 80309, USA Center for Theory of Quantum Matter, University of Colorado, Boulder, Colorado 80309, USAFluid dynamics is traditionally thought to apply only to systems near local equilibrium. In this case, the effective theory of fluid dynamics can be constructed as a gradient series. Recent applications of resurgence suggest that this gradient series diverges, but can be Borel-resummed, giving rise to a hydrodynamic attractor solution which is well defined even for large gradients. Arbitrary initial data quickly approaches this attractor via non-hydrodynamic mode decay. This suggests the existence of a new theory of far-from-equilibrium fluid dynamics. In this work, the framework of fluid dynamics far from local equilibrium for conformal system is introduced, and the hydrodynamic attractor solutions for rBRSSS, kinetic theory in the relaxation time approximation, and strongly-coupled N=4 SYM are identified for a system undergoing Bjorken flow.Fluid Dynamics Far From Local Equilibrium Paul Romatschke December 30, 2023 =========================================What is fluid dynamics and what is its regime of applicability? Over the centuries, different answers have been given to this question.The textbook definition of the applicability of fluid dynamics is that the local mean free path should be much smaller than the system size. This criterion originates from the notion that fluid dynamics is the macroscopic limit of some underlying kinetic theory. In kinetic theory, the mean free path has the intuitive interpretation of the typical length a particle can travel before experiencing a collision. If that mean free path length is larger than the system size, particles will not experience collisions before leaving the system, thus invalidating a fluid dynamic description.In relativistic fluid dynamics' modern formulation, the phenomenological ‘mean-free-path’ criterion is replaced by the requirement that gradients around some reference configuration (typically local equilibrium) are small when compared to system temperature. This gives rise to the notion of fluid dynamics as the effective theory of long-wavelength excitations, which can be expressed as a hydrodynamic gradient series. In this framework, the Navier-Stokes equations arise as the unique theory that is defined by the most general energy-momentum tensor that can be built out of hydrodynamic fields and first-order gradients thereof.Thus, even in the modern framework, the requirement of small gradients seems to limit the applicability of fluid dynamics to the near-equilibrium regime.However, there is mounting evidence that (first or second-order) fluid dynamics offers a correct quantitative description of systems which are not close to local equilibrium. For instance, a variety of numerical experiments indicate that fluid dynamics can match exact results even if the gradient corrections (normalized by temperature) are of order unity <cit.>. Experimentally, ultrarelativistic collisions of protons exhibits the same flow features as much larger systems produced in heavy-ion collisions <cit.>. Despite gradients in proton collisions being large, low-order hydrodynamics offers a quantitatively accurate description of experimental flow results <cit.>. 
This suggests that the mean-free-path criterion for the applicability of fluid dynamics is possibly too strict and should be replaced by the ability to neglect the effect from non-hydrodynamic modes <cit.>. One may now wonder if the (unreasonable?) success of low-order fluid dynamics is perhaps caused by a particularly rapid convergence of the hydrodynamic gradient expansion. To test this idea, the gradient series coefficients were calculated for specific microscopic theories to very high order, cf. Refs. <cit.>. Curiously, it was found that the gradient series is not only not rapidly convergent, but is actually a divergent series. However, it seems that this divergent series is Borel-summable in a generalized sense. In a groundbreaking article, Heller and Spaliński demonstrated that the Borel-resummed gradient series leads to the presence of a unique hydrodynamic ‘attractor’ solution <cit.>. Arbitrary initial data in the underlying microscopic theory quickly evolves towards this unique attractor solution. In the limit of small gradients, the attractor reduces to the familiar low-order hydrodynamic gradient series solution. Perhaps more interestingly, while the hydrodynamic gradient series diverges, the (non-analytic) hydrodynamic attractor solution is very well approximated by the low-order hydrodynamic gradient series even for moderate gradient sizes, at least for the system studied in Ref. <cit.>. This is typical for solutions which possess an asymptotic series expansion, cf. the case of perturbative QCD at high temperature <cit.>. As argued in Ref. <cit.>, this observation naturally explains the success of low-order hydrodynamics in accurately describing out-of-equilibrium systems where gradients are of order unity. In the present work, I will attempt to generalize the notion of fluid dynamics to systems with conformal symmetry far from local equilibrium, where normalized gradients are not only of order unity, but large. This generalization requires the presence of a non-analytic hydrodynamic attractor solution far from equilibrium. Because arbitrary initial data will typically not fall onto this attractor solution, far-from-equilibrium fluid dynamics will not describe most far-from-equilibrium solutions the microscopic dynamics generates. In this sense, far-from-equilibrium fluid dynamics is no replacement for solving the exact microscopic dynamics for a specific initial condition. However, the reason why far-from-equilibrium evolution does not match the hydrodynamic attractor is that for arbitrary initial data, other, non-hydrodynamic modes are typically excited <cit.>. These non-hydrodynamic modes are specific to the microscopic theory under consideration, but have in common that they decay on time-scales short compared to the typical fluid dynamic time scale (e.g. the mean free path) as long as hydrodynamic modes exist <cit.>. Hence all solutions for arbitrary initial data will eventually merge with the hydrodynamic attractor after the non-hydrodynamic modes have decayed. Near Equilibrium Fluid Dynamics Let us consider a conformal quantum system in four-dimensional Minkowski space-time and assume that the expectation value of the energy-momentum tensor ⟨ T^μν⟩ in equilibrium can be calculated. In this case, ⟨ T^μν⟩ possesses a time-like eigenvector u^μ, normalized to u^μ u_μ=-1, with an associated eigenvalue ϵ (I will be using the mostly plus convention for the metric tensor g_μν).
It is then straightforward to show that ⟨ T^μν⟩ can be decomposed as the ideal fluid energy-momentum tensor T^μν_(0)=(ϵ+P)u^μ u^ν+P g^μν, where P is the equilibrium pressure for this four-dimensional conformal theory. Since for a conformal system in flat space T^μ_μ=0, the equilibrium equation of state is P(ϵ)=ϵ/3. Near-equilibrium corrections to the ideal fluid energy-momentum tensor can systematically be derived by considering all possible independent symmetric rank-two tensors with one, two, three, etc. gradients consistent with conformal symmetry <cit.>. The first-order correction is given by T^μν_(1)=-ησ^μν, where η is the shear viscosity coefficient and σ^μν=2∇_⊥^(μ u^ν)-(2/3)Δ^μν∇_λ u^λ, where ∇_⊥^μ=Δ^μν∇_ν and Δ^μν=g^μν+u^μ u^ν. As was the case in equilibrium, the energy density ϵ and flow vector u^μ can still be defined as the time-like eigenvalue and eigenvector of the microscopic energy-momentum tensor ⟨ T^μν⟩ as long as a local rest frame exists <cit.>. The conservation of T^μν_(0)+T^μν_(1) constitutes a near-equilibrium fluid dynamic theory, since T^μν_(1) should be a small correction to T^μν_(0). For higher order corrections, counting the number of independent structures built out of contractions of ∇_μ, one finds that there are at least (n-2)! terms contributing to T^μν_(n). Assuming the coefficients multiplying these terms do not decrease too fast, this implies that T^μν_(n) grows as n! for fixed gradient strength ∇_μ. Hence one can expect the hydrodynamic gradient series to diverge for any non-vanishing gradient strength. Borel Resummation To elucidate the Borel resummation of the divergent hydrodynamic series it is useful to consider concrete examples. Specifically, let us consider conformal systems undergoing Bjorken flow <cit.>. Thus, let us assume the system to be homogeneous and isotropic in the coordinates τ=√(t^2-z^2) and ξ=arctanh(z/t), such that the energy density will only depend on proper time τ. As a warm-up, let us review the mock microscopic theory of rBRSSS [The acronym BRSSS refers to the conformal hydrodynamic gradient series complete to second order <cit.>, while rBRSSS denotes the resummed version of BRSSS <cit.>. This resummation is similar to the one in Muller-Israel-Stewart theory <cit.>, but unrelated to the Borel resummation discussed in this work. It is a mock microscopic theory in the sense that results for e.g. the energy density are well behaved even at large gradient strength, yet rBRSSS does not correspond to any real microscopic dynamics.] (a modern variant of <cit.>), where the evolution equation for the energy density is given by τ∂_τlnϵ=-4/3+Φ/ϵ, τ_π∂_τΦ = 4η/(3τ)-Φ-(4/3)(τ_π/τ)Φ-λ_1/(2η^2)Φ^2, where Φ is an auxiliary field <cit.>. Introducing T(τ)∝ϵ^1/4(τ), which customarily has the interpretation of an out-of-equilibrium temperature, it is convenient to consider the dimensionless combinations C_η=3η T/(4ϵ), C_π=τ_π T and C_λ=3λ_1 T^2/(4ϵ C_η C_π) for the three parameters of rBRSSS. These parameters are time-independent because of conformal symmetry. Defining the normalized gradient strength from Eq. (<ref>) as 3σ^μνσ_μν/(8 T∇_λ u^λ)=1/(τ T), the rBRSSS equations of motion can be expanded for small gradients (τ T)^-1≪ 1, finding τ∂_τlnϵ = -4/3+16 C_η/(9τ T)+32 C_η C_π(1-C_λ)/(27τ^2 T^2)+…, corresponding to the zeroth, first and second-order hydrodynamic gradient series approximation, respectively. In Ref. <cit.>, this gradient series has been extended to τ∂_τlnϵ=∑_n=0^200α_n (τ T)^-n, indeed exhibiting factorial growth for the coefficients α_n.
However, the Borel transform B[τ∂_τlnϵ](τ T)≡∑_n=0^200α_n/n!(τ T)^-n exists within a finite radius of convergence around (τ T)^-1=0. B[τ∂_τlnϵ] may be analytically continued to the whole τ T complex plane by considering the symmetric Padé approximant to B, finding a dense series of poles with the pole closest to the origin located at τ T=z_0^-1, where z_0=3/(2 C_π) <cit.>. It is nevertheless possible to define a generalized Borel transform T[τ∂_τlnϵ](τ T)≡∫_C dz e^-z B[τ∂_τlnϵ](τ T z), where the contour C starts at the origin and ends at z=∞. The ambiguity in the choice of contour in the presence of the singularities of B implies an ambiguity in T, with the pole closest to the origin giving a contribution ∝ e^-z_0 τ T to T. This contribution is non-analytic in 1/τ T and thus responsible for the divergence of the hydrodynamic gradient series, and can be attributed to the presence of a non-hydrodynamic mode. Note that it precisely matches the structure e^-∫dτ/τ_π expected from the known non-hydrodynamic mode in rBRSSS <cit.> when using τ_π^-1=T/C_π∝τ^-1/3/C_π. The ambiguity in T can be resolved by promoting the gradient series to a transseries, e.g. τ∂_τlnϵ=∑_n,m=0^∞ c^m Ω^m(τ T)α_nm(τ T)^-n, with Ω(τ T)=(τ T)^γ e^-z_0τ T, and finding the constants c,γ such that the ambiguity in the Borel transform of the transseries part with m=m_0 is exactly canceled by Ω^m_0+1(τ T) for the part with m=m_0+1. This program has successfully been performed for rBRSSS in Ref. <cit.>. The final result for the Borel transform of τ∂_τlnϵ can be written in the form τ∂_τlnϵ = (τ∂_τlnϵ)_att+(τ∂_τlnϵ)_non-hydro, consisting of a non-analytic “attractor” solution defined for arbitrary τ T, to which the non-hydrodynamic part decays on a timescale τ T≃ z_0^-1. Note that obtaining non-analytic solutions from divergent perturbative series has recently generated considerable interest under the name of “resurgence” <cit.>. Finding Hydrodynamic Attractors Identifying the hydrodynamic attractor solution from the Borel resummation program of the hydrodynamic gradient series is possible, but somewhat tedious. Fortunately, it is possible to obtain the same attractor solution more directly from the equations of motion via the analogue of a slow-roll approximation, cf. Refs. <cit.> (see Supplemental Material for details). In Fig. <ref>, results from solving the rBRSSS equations of motion for a range of initial conditions (“numerical”) are shown together with zeroth, first and second order hydrodynamic gradient series results from Eq. (<ref>). It can be observed that the numerical solutions converge to the hydrodynamic results for moderate gradient strength. One also observes from Fig. <ref> that the numerical results tend to the unique attractor solution even before matching the gradient series results. This attractor solution is nothing else but the result of the Borel transformation of the divergent transseries as reported in Ref. <cit.>. Hydrodynamic Attractor in Kinetic Theory It is tempting to look for hydrodynamic attractors in other microscopic theories, such as kinetic theory in the relaxation time approximation. This theory is defined by a single particle distribution function f(t, x, p) obeying p^μ∂_μ f-Γ^λ_μν p^μ p^ν∂/∂ p^λ f=-(f-f^eq)/τ_π, where here Γ^λ_μν are the Christoffel symbols associated with the Bjorken flow geometry and the equilibrium distribution function may be taken to be f^eq=e^p^μ u_μ/T.
Here u^μ is again the time-like eigenvector of ⟨ T^μν⟩=∫ d^3p/(2π)^3 p^μ p^ν/p f(x,p) and T is the non-equilibrium temperature defined from the time-like eigenvalue of ⟨ T^μν⟩, which for a single massless Boltzmann particle is T=(π^2ϵ/6)^1/4. Note that for a conformal system one can again write τ_π = C_π T^-1 with C_π a constant. Solving Eq. (<ref>) numerically, representative results for τ∂_τlnϵ are shown in Fig. <ref> (note that τ∂_τlnϵ≤ -1 because the effective longitudinal pressure P_L=-ϵ(1+τ∂_τlnϵ) in kinetic theory can never be negative for f>0). One observes the same basic structure as in rBRSSS, indicating the presence of a hydrodynamic attractor at early times that arbitrary initial conditions approach via non-hydrodynamic mode decay. (Note that for kinetic theory, the non-hydrodynamic mode is a branch cut giving rise to a decay of the form e^-∫dτ/τ_π <cit.>). The attractor solution may be found by finding the initial condition corresponding to a slow-roll approximation at early times and using the numerical scheme to follow the attractor (see Supplemental Material including Refs. <cit.> for details). I find that the kinetic attractor can be approximated by ∂lnϵ/∂lnτ|^att_kinetic≃ -[C_π^2+0.744 C_π(τ T)+0.21(τ T)^2]/[C_π^2+0.6 C_π(τ T)+0.1575(τ T)^2], and it coincides with the hydrodynamic solution (<ref>) for late times when using the known results C_π=5 C_η, C_λ=5/7 for kinetic theory <cit.>. Hydrodynamic Attractor in N=4 SYM Bjorken flow may also easily be set up in strongly coupled N=4 SYM in the large number of colors limit through the AdS/CFT correspondence. Einstein equations in asymptotically five-dimensional AdS space-time, R_ab-(1/2)g_ab R-6 g_ab=0, may be solved numerically by the method pioneered by Chesler and Yaffe <cit.>, and the N=4 SYM energy-momentum tensor expectation value ⟨ T^μν⟩ at the conformal boundary can be extracted. Using the numerical scheme described in Ref. <cit.> (see Supplemental Material for details), results for τ∂_τlnϵ are shown in Fig. <ref> compared to the hydrodynamic solutions (<ref>) with C_η=1/(4π), C_π=(2-ln 2)/(2π), C_λ=1/(2-ln 2) for N=4 SYM <cit.>. Again, the numerical solutions suggest the presence of a hydrodynamic attractor at early times, which is slightly more difficult to see than in the cases of rBRSSS and kinetic theory because the non-hydrodynamic modes for N=4 SYM are known to have oscillatory behavior (non-vanishing real parts of the black hole quasinormal modes <cit.>). Nevertheless, one can discern a preferred attractor candidate without any apparent oscillatory behavior starting at lim_τ→ 0∂lnϵ/∂lnτ→ -1, to which all other initial conditions decay [The behavior ϵ∝1/τ for τ→ 0 seems to contradict the results from Ref. <cit.>, where it was found that a regular bulk geometry excluded singular behavior of ϵ for τ→ 0 if ϵ possesses a power-series expansion around τ=0. It is possible that the numerical solutions presented here are not sensitive to bulk singularities at τ=0 because the numerics are started at τ>0. Another possibility is that the attractor solution does admit a simple power series expansion for ϵ around τ=0. Future work is needed to bring clarity to this issue.].
I do not have any analytic understanding of the nature of this AdS/CFT attractor solution, but it is curious to note that it is numerically close to (but clearly different from) the kinetic theory attractor (<ref>) with C_π=5/(4π). Effective Viscosity It is possible to interpret the attractor solutions in terms of an effective viscosity coefficient η_B by writing down a generalized hydrodynamic energy-momentum tensor T^μν_hydro=(ϵ+P_B)u^μ u^ν+P_B g^μν-η_Bσ^μν, where for a conformal system P_B=ϵ/3 and η_B/s=η_B/s(∇_μ) depends on the local gradient strength. In the above expressions the subscript 'B' was chosen to indicate Borel-resummed out-of-equilibrium quantities. Energy-momentum conservation u_ν∇_μ T^μν=0 leads to τ∂_τlnϵ = -4/3+16 C_η/(9τ T)·η_B/η, which can be matched to the hydrodynamic attractor solution, e.g. Eq. (<ref>), to define η_B as a function of gradient strength. For the hydrodynamic attractor solutions discussed above, one finds the results shown in Fig. <ref>. For small gradients, one recovers η_B/s=η/s, as expected. However, η_B eventually tends to zero for far-from-equilibrium systems. This finding implies that the effective viscosity η_B encountered by an out-of-equilibrium system can be significantly smaller than the equilibrium viscosity η calculated from e.g. Kubo relations. Note that this definition of η_B is qualitatively similar to Ref. <cit.>, but differs by containing non-linear, but no non-hydrodynamic mode contributions. Discussion and Conclusions In this work, a generalization of fluid dynamics to systems far from local equilibrium was discussed. This generalization rests on the existence of special attractors which become the well-known hydrodynamic solutions once the system comes close to equilibrium. These attractors were explicitly constructed for conformal Bjorken flow for three microscopic theories: rBRSSS (following earlier work in Ref. <cit.>), and for the first time for kinetic theory and strongly coupled N=4 SYM. For all three systems, it was shown that for arbitrary initial data, attractor solutions are approached via non-hydrodynamic mode decay, demonstrating that the attractor concept is not limited to the rBRSSS case studied in Ref. <cit.> but applies to a broader class of phenomenologically relevant theories. For conformal systems, attractors can be characterized by an equilibrium equation of state and a non-equilibrium viscosity η_B. Far-from-equilibrium fluid dynamics, defined through Eq. (<ref>), constitutes a self-contained set of equations which may prove useful in the description of a broad class of out-of-equilibrium systems. Also, the existence of the attractor solutions for kinetic theory and AdS/CFT suggests the possibility of far-from-equilibrium attractors for the particle distribution function and the space-time geometry, respectively. Many questions remain. Do all microscopic theories possess far-from-equilibrium attractor solutions? Are attractor functions η_B universal for all dynamics of a given microscopic theory? Do non-conformal systems also possess attractors which are characterized by an equilibrium equation of state? Answering these questions will be the subject of future work. § ACKNOWLEDGMENTS This work was supported in part by the Department of Energy, DOE award No DE-SC0008132. I would like to thank M. Heller, M. Spaliński and M. Strickland for fruitful discussions, the organizers of the YITP workshop for Holography, String Theory and Quantum Black Holes for their hospitality and especially W.
Zajc for carefully reading and correcting the manuscript and making many useful suggestions. § SUPPLEMENTAL MATERIAL In this supplemental material, details on obtaining the hydrodynamic attractor solutions for rBRSSS, kinetic theory and strongly coupled N=4 SYM are given. For rBRSSS, the equations of motion in the case of conformal Bjorken flow, τ∂_τlnϵ=-4/3+Φ/ϵ, τ_π∂_τΦ = 4η/(3τ)-Φ-(4/3)(τ_π/τ)Φ-λ_1/(2η^2)Φ^2, may be decoupled as <cit.> C_πτ T f(τ T) f^'(τ T)+4 C_π C_x(τ) f^2(τ T)+[τ T-(16 C_π/3)C_x(τ)]f(τ T)-4 C_η/9+(16 C_π/9)C_x(τ)-2τ T/3=0, where f(τ T)≡ 1+(τ/4)∂_τlnϵ and C_x(τ)≡ 1+3 C_λτ T/(8 C_η) have been introduced for convenience. Neglecting f^'(τ T) in Eq. (<ref>) leads to the first approximate attractor solution f^(1)≃ 2/3-τ T/[8 C_π C_x(τ)]+√(64 C_η C_π C_x(τ)+9(τ T)^2)/[24 C_π C_x(τ)]. This solution may be consistently improved by iteration. For instance, posing f^(2)=f^(1)+δ f and neglecting δ f^' in Eq. (<ref>) leads to a second approximation f^(2) for the attractor, and so on. In practice, I find the process to converge rapidly such that f^(3) is typically a sufficiently accurate approximation to the exact attractor solution for most applications. For kinetic theory, one would like to find attractor solutions for the energy density. Following seminal work by Baym, Florkowski, Ryblewski and Strickland <cit.>, it is possible to write down an integral equation for ϵ(τ) from the kinetic equations, which takes the form ϵ(τ)=Λ_0^4 D(τ,τ_0)R(ξ(τ)) + ∫_τ_0^τ dτ^'/τ_π D(τ,τ^')ϵ(τ^')R(τ^2/τ^'^2-1), where D(τ_2,τ_1)=e^-∫_τ_1^τ_2 dτ^'τ_π^-1(τ^'), ξ(τ)=(1+ξ_0)(τ/τ_0)^2-1, R(z)=(1/2)[1/(1+z)+arctan(√(z))/√(z)] and Λ_0 is a typical energy scale. Initial conditions are characterized by a value of ξ_0 at τ=τ_0. Numerical solutions to Eq. (<ref>) may be obtained by inserting a trial solution ϵ^(0)(τ) to evaluate the rhs of Eq. (<ref>), thus finding an improved solution ϵ^(1)(τ) from the lhs of Eq. (<ref>), and converging to the exact solution by iterating this process. It is possible to identify points close to the attractor solution by calculating the equivalent of f^'(τ T) from (<ref>) as f^'|_τ=τ_0∝ [ϵ∂_τϵ+τϵ∂_τ^2ϵ-τ(∂_τϵ)^2]/ϵ^2|_τ=τ_0, which does not involve any integrals because the rhs is being evaluated at τ=τ_0, where e.g. ϵ(τ_0)=R(ξ_0). Solving f^'|_τ=τ_0=0 to obtain a value of ξ_0 at τ=τ_0 implies that ϵ(τ_0,ξ_0(τ_0)) is close to the attractor. One finds for instance ξ_0(τ_0=0.1)≃ 24.5 for C_π=0.4. The attractor may then be found from a numerical solution of (<ref>) when using ξ_0(τ_0) as initial condition at early time τ_0≪ 1. Let me now give details on how to obtain the attractor solution in strongly coupled N=4 SYM. Using an ansatz for the five-dimensional line-element ds^2=2 dr dτ-A dτ^2+Σ^2 e^B dx_⊥^2+Σ^2 e^-2B dξ^2, with A=A(τ,r), B=B(τ,r), Σ=Σ(τ,r) and r the coordinate in the fifth dimension, it is possible to impose Bjorken flow at the 4-dimensional Minkowski boundary located at r→∞ through A→ r^2, B→ -(2/3)ln[(1+rτ)/r], Σ^3→ r^2(1+rτ). Modifications of these relations at finite r at fixed time correspond to different initial conditions for the time-evolution of ⟨ T^μν⟩ on the boundary. In particular, the energy density ϵ(τ)∝ -a_4 is given through the near-boundary series expansion coefficient a_4(τ) from A(τ,r→∞)≃ r^2(1+∑_n=1^∞ a_n r^-n). Initial conditions are specified similar to those in Ref. <cit.>, by choosing Σ^3=r^2(1+rτ)+r^2 s_2+r s_3+s_4/r^4+c̅, with c̅ a constant and s_2(τ)=[4ϵ(τ)+3τϵ^'(τ)]/20, s_3(τ)=[13ϵ^'(τ)+5τϵ^''(τ)]/40, s_4(τ)=[9ϵ^'(τ)+135τϵ^''(τ)+35τ^2ϵ^'''(τ)]/(560τ) at τ=τ_0.
Using the numerical scheme described in Ref. <cit.> and choosing specific initial conditions ϵ(τ)∝τ^α with constants α, c̅, solving the Einstein equations with (<ref>) gives numerical results for τ∂_τlnϵ. Similar to kinetic theory, one can scan initial data parametrized by values of c̅, α at τ=τ_0 for numerical results exhibiting weak transients (small f^' for all times). One such initial condition is α≃ -1, c̅≃ 750 at τ_0=0.25, suggesting that it lies close to the hydrodynamic attractor. Once a point close to the attractor has been identified, the attractor solution at later times is again obtained numerically using the numerical scheme from Ref. <cit.>. For convenience, the relevant numerical material used to obtain the attractors in rBRSSS, kinetic theory and AdS/CFT has been made publicly available <cit.>.
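As a minimal illustration of the slow-roll construction described above, the following Python sketch integrates the rBRSSS equations for Bjorken flow and compares τ∂_τlnϵ for several initial values of Φ with the first slow-roll approximation f^(1). This is an illustration only, not the publicly available numerical material cited above: the transport parameters, the normalization T=ϵ^1/4 and the initial conditions are assumptions made here for demonstration.

import numpy as np
from scipy.integrate import solve_ivp

C_eta, C_pi, C_lam = 0.08, 0.4, 0.0      # illustrative rBRSSS parameters (assumed)

def rhs(tau, y):
    # rBRSSS equations of motion for Bjorken flow, cf. the main text:
    # tau d(ln eps)/d tau = -4/3 + Phi/eps,
    # tau_pi dPhi/d tau = 4 eta/(3 tau) - Phi - (4/3)(tau_pi/tau) Phi - lambda_1/(2 eta^2) Phi^2
    eps, Phi = y
    T = eps**0.25                        # units chosen such that T = eps^(1/4)
    eta = 4.0 * C_eta * eps / (3.0 * T)  # from C_eta = 3 eta T/(4 eps)
    tau_pi = C_pi / T                    # from C_pi = tau_pi T
    lam_term = 3.0 * C_lam * C_pi / (8.0 * C_eta * eps)   # equals lambda_1/(2 eta^2)
    deps = (-4.0 * eps / 3.0 + Phi) / tau
    dPhi = (4.0 * eta / (3.0 * tau) - Phi
            - 4.0 * tau_pi * Phi / (3.0 * tau) - lam_term * Phi**2) / tau_pi
    return [deps, dPhi]

def attractor_f1(tauT):
    # first slow-roll approximation f^(1); recall tau d(ln eps)/d tau = 4 (f - 1)
    Cx = 1.0 + 3.0 * C_lam * tauT / (8.0 * C_eta)
    return (2.0/3.0 - tauT / (8.0 * C_pi * Cx)
            + np.sqrt(64.0 * C_eta * C_pi * Cx + 9.0 * tauT**2) / (24.0 * C_pi * Cx))

tau0, eps0 = 0.1, 1.0
for Phi0 in (-0.3, 0.0, 0.5):            # scan over initial conditions
    sol = solve_ivp(rhs, (tau0, 20.0), [eps0, Phi0 * eps0],
                    dense_output=True, rtol=1e-8, atol=1e-12)
    for tau in (0.2, 1.0, 20.0):
        eps, Phi = sol.sol(tau)
        f_num = 1.0 + 0.25 * (-4.0/3.0 + Phi / eps)
        print(Phi0, tau, f_num - attractor_f1(tau * eps**0.25))

For every initial Φ the printed deviation from f^(1) shrinks as τ grows, mirroring the decay of the non-hydrodynamic mode onto the attractor discussed in the main text.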
http://arxiv.org/abs/1704.08699v3
{ "authors": [ "Paul Romatschke" ], "categories": [ "hep-th", "nucl-th", "physics.flu-dyn" ], "primary_category": "hep-th", "published": "20170427180008", "title": "Fluid Dynamics far from Local Equilibrium" }
[email protected] INFN, Sezione di Firenze, and Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy [email protected] INFN, Sezione di Firenze, and Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy [email protected] INFN, Sezione di Firenze, and Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy The model proposed by Georgi and Machacek enables the Higgs sector to involve isospin triplet scalar fields while retaining a custodial SU(2)_V symmetry in the potential, thus ensuring the electroweak ρ parameter to be one at tree level. This custodial symmetry, however, is explicitly broken by loop effects of the U(1)_Y hypercharge gauge interaction. In order to make the model consistent at high energies, we construct the most general form of the Higgs potential without the custodial symmetry, and then we derive the one-loop β-functions for all the model parameters. Assuming the δ_i quantities describing the custodial symmetry breaking to be zero at low energy, we find that |δ_i| are typically smaller than the magnitude of the U(1)_Y gauge coupling and of the other running parameters in the potential also at high energy, without spoiling perturbativity and vacuum stability. We also clarify that the mass degeneracy among the SU(2)_V 5-plet and 3-plet Higgs bosons is smoothly broken by ∼ 0.1% corrections. These results show that the amount of the custodial symmetry breaking is well kept under control up to energies close to the theory cutoff. Effects of custodial symmetry breaking in the Georgi-Machacek model at high energies Kei Yagyu § INTRODUCTION The discovered scalar particle with a mass of 125 GeV at the LHC Run-I experiment shows properties which are consistent with those of the Standard Model (SM) Higgs boson <cit.>. This experimental fact suggests that the Higgs sector should be constructed by at least one isospin doublet scalar field. Due to the still poor experimental accuracy, there are various possibilities for extensions of the Higgs sector from the minimal form assumed in the SM, which are predicted in many new physics scenarios. Therefore, the open question is then “what is the true shape of the Higgs sector?” One of the most important hints to narrow down the structure of the Higgs sector comes from the electroweak ρ parameter, which is defined by the ratio of the strength of the charged electroweak current to the neutral one at zero momentum transfer. It is well known that its experimental value is quite close to unity, and in fact the global fit analysis gives ρ^exp = 1.00037± 0.00023 <cit.>. On the other hand, the tree level ρ parameter can be expressed by the ratio of the weak gauge boson masses in an arbitrary Higgs sector, which is a sum of contributions from the scalar multiplets φ_i with hypercharge Y_i, isospin T_i and Vacuum Expectation Value (VEV) v_i <cit.>: ρ^tree = m_W^2/(m_Z^2cos^2θ_W) = ∑_j |v_j|^2[T_j(T_j+1) - Y_j^2]/(2∑_i|v_i|^2 Y_i^2), where θ_W is the weak mixing angle. Requiring that, in Eq.
(<ref>), the contribution to the numerator equals that to the denominator for a fixed multiplet φ_i, we obtain: T_i = (1/2)(√(1+12Y_i^2) - 1). The combinations of T_i and Y_i satisfying the above equation are (T_i,Y_i) = (0,0), (1/2,1/2) and (3,2)[The next possibility is (T_i,Y_i) = (25/2,15/2), but the introduction of such a scalar multiplet breaks the perturbative unitarity due to too large gauge couplings for the component scalar fields <cit.>.]. Therefore, the introduction of scalar multiplets with the above assignments does not change the value of ρ^tree from 1, regardless of the value of their VEVs. On the contrary, if we introduce scalar multiplets with T_i ≥ 1 not satisfying Eq. (<ref>), ρ^tree can be different from 1. In such a case, there are two ways to avoid the constraint from ρ^exp, namely, (i) tuning the exotic VEVs[Here, the exotic VEV means that of a scalar multiplet not satisfying Eq. (<ref>).] to be quite small, or (ii) taking an alignment among the exotic VEVs so as to have a custodial SU(2)_V symmetric potential. The former way is evident, since the contribution to the deviation in ρ^tree from unity is proportional to the squared VEVs as seen in Eq. (<ref>). The latter way gives phenomenologically interesting consequences due to non-negligible exotic VEVs. One of the most characteristic consequences is seen in the SM-like Higgs boson (h) couplings to the weak gauge bosons hVV (V=W,Z), which can be larger than the SM prediction <cit.>. Such phenomena cannot be realized in non-minimal Higgs sectors constructed only by singlet and/or doublet scalar fields. The model by Georgi and Machacek <cit.> (hereafter, simply called GM model), whose Higgs sector is composed of one iso-doublet with Y=1/2 and two iso-triplets with Y=1 and Y=0, is the simplest[This mechanism (ii) can be generalized for models with scalar multiplets with T_i > 1, as discussed in Ref. <cit.>.] concrete realization which satisfies ρ^tree=1 by the requirement (ii) explained above. Basic phenomenological properties of the Higgs bosons in the GM model, e.g., decays and productions, have been discussed in Refs. <cit.>. After the discovery of the 125 GeV Higgs boson, the collider phenomenology of the GM model has been discussed in Refs. <cit.> at the LHC and in Ref. <cit.> at future e^+e^- colliders. In the GM model, the two triplet fields can be packaged as an SU(2)_L× SU(2)_R bi-triplet, and the doublet Higgs field forms a bi-doublet by itself. As a result, the Higgs potential is invariant under the global SU(2)_L× SU(2)_R symmetry. If we take the VEV of the bi-triplet field to be proportional to the 3× 3 unit matrix, which corresponds to taking the two triplet VEVs to be the same, the SU(2)_L× SU(2)_R symmetry breaks down to the custodial SU(2)_V symmetry. However, it is known that this custodial SU(2)_V symmetry is broken at quantum level due to the U(1)_Y hypercharge gauge boson loop effect <cit.>. In this paper, we quantitatively investigate how this custodial SU(2)_V symmetry is broken at high energies by solving the one-loop renormalization group equations (RGEs) for the scalar quartic couplings. We will show that in order to have consistent β-functions, we need to start from the most general form of the Higgs potential invariant under the SU(2)_L× U(1)_Y gauge symmetry. We then numerically evaluate all the running coupling constants with the initial condition that all the SU(2)_V-breaking parameters vanish at low energy.
We find that the amount of the custodial symmetry breaking is well kept under control, thus making the custodial symmetric scenario also accessible at high energies. This paper is organized as follows. In Sec. <ref>, we present the most general form of the Higgs potential in the GM model. We then discuss the relation between the general form and the custodial symmetric one, and define the limit to recover the latter at tree level. In Sec. <ref>, we clarify the inconsistency in the derivation of the β-functions starting from the custodial symmetric form of the potential. In Sec. <ref>, we first derive the allowed region in the parameter space by bounds from triviality and vacuum stability as a function of the cutoff scale. We then calculate the magnitude of the parameters describing the custodial symmetry breaking at high energies. We also show the prediction of the mass spectrum for the Higgs bosons at the TeV scale. Conclusions are given in Sec. <ref>. In App. <ref>, we list some useful relations between the SU(2)_L× SU(2)_R bi-doublet and bi-triplet form of the scalar fields and the usual SU(2)_L doublet and triplet ones. In App. <ref>, the mass formulae for all the scalar bosons are given in the general case (but assuming the two triplet VEVs to be the same) and in the custodial symmetric case. In App. <ref>, the analytic expressions for the one-loop β-functions for all the parameters of the GM model are presented. § THE MOST GENERAL POTENTIAL FOR THE GM MODEL The scalar sector of the GM model is composed of the complex isospin doublet ϕ with Y=1/2, the complex triplet χ with Y=1 and the real triplet ξ with Y=0 fields. These fields can be expressed by ϕ = ( [ ϕ^+; ϕ^0 ]), χ = ( [ χ^+/√(2) -χ^++; χ^0 -χ^+/√(2) ]), ξ = ( [ ξ^0/√(2) -ξ^+; -ξ^- -ξ^0/√(2) ]), where the neutral components are parameterized as ϕ^0 = 1/√(2)(ϕ_r+v_ϕ+iϕ_i), χ^0 = 1/√(2)(χ_r+iχ_i)+v_χ, ξ^0 = ξ_r+v_ξ, with v_ϕ, v_χ and v_ξ being the VEVs for ϕ^0, χ^0 and ξ^0, respectively. The most general form of the Higgs potential, invariant under the SU(2)_L× U(1)_Y gauge symmetry, is given by V(ϕ,χ,ξ) = m_ϕ^2(ϕ^†ϕ)+m_χ^2tr(χ^†χ)+m_ξ^2tr(ξ^2) +μ_1ϕ^†ξϕ+μ_2[ϕ^T(iτ_2)χ^†ϕ+h.c.]+μ_3tr(χ^†χξ) +λ(ϕ^†ϕ)^2+ρ_1[tr(χ^†χ)]^2+ρ_2tr(χ^†χχ^†χ)+ρ_3tr(ξ^4)+ρ_4tr(χ^†χ)tr(ξ^2)+ρ_5tr(χ^†ξ)tr(ξχ) +σ_1tr(χ^†χ)ϕ^†ϕ+σ_2ϕ^†χχ^†ϕ+σ_3tr(ξ^2)ϕ^†ϕ+σ_4(ϕ^†χξϕ^c+h.c.), where ϕ^c=iτ_2ϕ^*. Although μ_2 and σ_4 can be complex, we assume them to be real for simplicity. In this CP-conserving case, the potential is described by 16 independent real parameters. Conventionally, the model with the potential given in Eq. (<ref>) has not been referred to as the GM model. Rather, the GM model has been known as the case where the potential has a global SU(2)_L× SU(2)_R symmetry. In this paper, we will regard the model with Eq. (<ref>) as the generalized GM model. Instead of using the scalar fields given in Eq. (<ref>), let us write the potential with the global SU(2)_L× SU(2)_R symmetry in terms of the SU(2)_L× SU(2)_R bi-doublet Φ and the bi-triplet Δ scalar fields: Φ=( [ ϕ^0* ϕ^+; -ϕ^- ϕ^0 ]), Δ=( [ χ^0* ξ^+ χ^++; -χ^- ξ^0 χ^+; χ^-- -ξ^- χ^0 ]). It takes the following form: V(Φ,Δ) = m_Φ^2tr(Φ^†Φ) + m_Δ^2tr(Δ^†Δ) +λ_1tr(Φ^†Φ)^2 +λ_2[tr(Δ^†Δ)]^2 +λ_3tr[(Δ^†Δ)^2] +λ_4tr(Φ^†Φ)tr(Δ^†Δ) +λ_5tr(Φ^†τ^a/2Φτ^b/2)tr(Δ^† t^aΔ t^b) +μ̅_1tr(Φ^†τ^a/2Φτ^b/2)(P^†Δ P)^ab +μ̅_2tr(Δ^† t^aΔ t^b)(P^†Δ P)^ab, where τ^a and t^a (a=1–3) are the 2× 2 and 3× 3 matrix representations of the SU(2) generators, respectively. The matrix P is defined as P=( [ -1/√(2) i/√(2) 0; 0 0 1; 1/√(2) i/√(2) 0 ]). The potential given in Eq.
(<ref>) is described by 9 independent terms[The custodial symmetric potential does not contain any CP-violating parameters.]. Taking the vacuum alignment configuration, i.e. v_Δ≡ v_χ = v_ξ, the SU(2)_L× SU(2)_R symmetry is spontaneously broken down to the custodial SU(2)_V symmetry, and the electroweak ρ parameter is predicted to be unity at tree level. By using the relations presented in App. A, we find the following correspondence between the parameters defined in Eq. (<ref>) and those defined in Eq. (<ref>): m_ϕ^2=2m_Φ^2, m_χ^2=2m_Δ^2, m_ξ^2= m_Δ^2, μ_1=-μ̅_1/√(2), μ_2=-μ̅_1/2, μ_3=6√(2)μ̅_2, λ=4λ_1, ρ_1=4λ_2+6λ_3, ρ_2=-4λ_3, ρ_3=2(λ_2+λ_3), ρ_4=4λ_2, ρ_5=4λ_3, σ_1 =4λ_4-λ_5, σ_2 =2λ_5, σ_3 =2λ_4, σ_4 =√(2)λ_5. From the above equations, we can express 7 out of the 16 parameters of the potential in Eq. (<ref>) (let us choose m_ξ^2, μ_2, ρ_3,4,5 and σ_3,4) in terms of the others: m_ξ^2=1/2m_χ^2, μ_2=1/√(2)μ_1, ρ_3 = 1/2ρ_1+1/4ρ_2, ρ_4=ρ_1+3/2ρ_2, ρ_5=-ρ_2, σ_3 = 1/2σ_1+1/4σ_2, σ_4 = 1/√(2)σ_2. It is convenient to describe the effect of the custodial symmetry breaking in terms of the following quantities δ_i: δ_1 ≡ m_ξ^2 -m_χ^2/2, δ_2 ≡μ_2 -μ_1/√(2), δ_3 ≡ρ_3 -ρ_1/2-ρ_2/4, δ_4≡ρ_4 -ρ_1-3/2ρ_2, δ_5≡ρ_5 +ρ_2, δ_6 ≡σ_3 -σ_1/2-σ_2/4, δ_7 ≡σ_4 -σ_2/√(2). We then define the custodial symmetric limit by δ_i → 0, where the 16 independent parameters of the general potential are consistently reduced to 9. The mass formulae for all the physical Higgs bosons are presented in App. <ref> for the general case given in Eq. (<ref>) with the two triplet VEVs v_χ and v_ξ taken to be the same. This relation v_χ = v_ξ is weakly broken at the TeV scale, as we will show in Sec. <ref>, as long as we take δ_i → 0 at low energy. In App. <ref>, we also derive the mass formulae in the custodial symmetric case, in which all the physical Higgs boson states are classified into the SU(2)_V 5-plet (H_5^±±,H_5^±,H_5^0), 3-plet (H_3^±,H_3^0) and two singlets (H and h), and the masses of the Higgs bosons belonging to the same SU(2)_V multiplet are degenerate. Thus, there are only 4 independent masses for the Higgs bosons, i.e. the mass of the 5-plet (m_H_5), that of the 3-plet (m_H_3), and those of the two singlets, m_H and m_h. We will identify h with the discovered Higgs boson at the LHC with a mass of 125 GeV, i.e., m_h = 125 GeV. Finally, let us discuss the vacuum stability condition, namely the requirement that the potential does not fall down into a negative (infinite) value along any direction of the scalar field space. In Ref. <cit.>, the vacuum stability condition has been derived in the custodial symmetric case. In the general GM model, there are 5 more independent quartic couplings. The necessary condition to guarantee the vacuum stability is here derived by assuming two non-vanishing complex fields at once. Taking into account all the directions, we obtain the following inequalities: λ≥ 0, ρ_3 ≥ 0, ρ_1 + ρ_2 ≥ 0, ρ_1 + ρ_2/2 ≥ 0, ρ_4 + ρ_5/2 +√(2ρ_3(ρ_1+ρ_2))≥ 0, ρ_4+√(2ρ_3(ρ_1+ρ_2))≥ 0, ρ_4 + 2√(ρ_3(2ρ_1+ρ_2))≥ 0, ρ_4 + ρ_5 + 2√(ρ_3(2ρ_1+ρ_2))≥ 0, σ_1 + 2√(λ(ρ_1+ρ_2))≥ 0, σ_1 + σ_2 + 2√(λ(ρ_1+ρ_2))≥ 0, σ_1 + σ_2/2 + √(2λ(2ρ_1+ρ_2))≥ 0, σ_3 +√(2λρ_3)≥ 0. Before closing this section, we briefly review the other parts of the Lagrangian related to the Higgs fields. The kinetic Lagrangian is given by ℒ_kin = 1/2tr(D_μΦ)^†(D^μΦ) + 1/2tr(D_μΔ)^†(D^μΔ), where the covariant derivatives are expressed as D_μΦ =∂_μΦ -ig_2τ^a/2W_μ^aΦ + ig_1B_μΦτ^3/2, D_μΔ =∂_μΔ -ig_2t^aW_μ^aΔ + ig_1B_μΔ t^3. Eq.
(<ref>) can also be written in terms of the ϕ, χ and ξ fields, as: ℒ_kin=|D_μϕ|^2+tr[(D_μχ)^†(D^μχ)]+1/2tr[(D_μξ)^†(D^μξ)], with D_μϕ = (∂_μ -i/2g_2τ^a W_μ^a -i/2g_1 B_μ)ϕ, D_μχ = ∂_μχ-i/2g_2[τ^a W_μ^a,χ] -ig_1 B_μχ, D_μξ = ∂_μξ -i/2g_2[τ^a W_μ^a,ξ]. The gauge boson masses are then given by m_W^2 = (g_2^2/4)(v_ϕ^2+4v_χ^2+4v_ξ^2), m_Z^2 = g_2^2/(4cos^2θ_W)(v_ϕ^2+8v_χ^2). From Eq. (<ref>), we can see that, in the custodial symmetric case, i.e., v_χ = v_ξ = v_Δ, ρ^tree=1 is satisfied. In this limit, it is convenient to introduce the angle β relating the two VEVs v_ϕ and v_Δ by tanβ≡ v_ϕ/(2√(2)v_Δ). Also, the SM VEV v is identified by v^2 = v_ϕ^2 + 8v_Δ^2 = (√(2)G_F)^-1≃ (246 GeV)^2, with G_F being the Fermi constant. The Higgs boson couplings to gauge bosons are obtained from Eq. (<ref>). As already mentioned in the previous section, the SM-like Higgs boson couplings to gauge bosons hVV (V=W,Z) can be larger than the SM prediction: κ_V≡ g_hVV^GM/g_hVV^SM = sinβcosα -2√(2/3)cosβsinα, where g_hVV^GM (g_hVV^SM) is the hVV coupling in the GM model (SM), and α is the mixing angle between the CP-even Higgs bosons defined in Eq. (<ref>). Clearly, κ_V can be larger than 1, because of the factor 2√(2/3) in the second term of κ_V, which comes from the Clebsch-Gordan coefficient of the SU(2)_L triplet representation field. Finally, the Yukawa Lagrangian is given as follows[In the GM model, there is another possible Yukawa term, written as L_L^c iτ^2χ L_L, which provides Majorana masses for the left-handed neutrinos. This is known as the type-II seesaw mechanism <cit.>. In our paper, we do not take into account this Yukawa coupling, because it is negligibly small as compared to the Yukawa couplings for the doublet Higgs field given in Eq. (<ref>).]: L_Y = -y_tQ̅_L^3 iτ^2ϕ^* t_R -y_bQ̅_L^3ϕ b_R -y_τL̅_L^3ϕτ_R +h.c., where we only show the third generation fermion part with Q_L^3 = (t,b)_L^T and L_L^3 = (ν_τ,τ)_L^T. The fermion masses are obtained as m_f = y_f v sinβ/√(2) (f=t,b,τ) by taking ⟨ϕ^0⟩ = v sinβ/√(2). § INCONSISTENCY IN THE β-FUNCTION CALCULATION FOR THE CUSTODIAL SYMMETRIC CASE As we already explained in the Introduction, we encounter an inconsistency in the calculation of the RGEs if we start from the Higgs potential defined in Eq. (<ref>). The source of such inconsistency is the U(1)_Y gauge interaction in the kinetic Lagrangian for the Higgs fields, which explicitly breaks the custodial symmetry at tree level. In fact, the kinetic Lagrangian given in Eq. (<ref>) is not invariant under the transformations Φ→Φ U_R^† (Δ→Δ U_R^†), where U_R is the SU(2)_R transformation matrix, due to the generator τ^3 (t^3). This breaking term affects the scalar potential sector at loop level, i.e., there appear additional operators which break the custodial symmetry and cannot be expressed in terms of Φ and Δ defined in Eq. (<ref>). We note that this breaking effect due to the U(1)_Y gauge interaction is also present in the SM. In that case, however, the custodial symmetry emerges accidentally after writing down all the possible renormalizable terms in the potential, so that no additional operators can be generated radiatively. Therefore, there is no such inconsistency in the SM. In order to clarify this problem, let us show, as an example, the calculation of the one-loop β-functions for the dimensionless couplings ρ_1, ρ_2 and ρ_3 given in Eq.
(<ref>). These can be derived by considering the one-loop vertex function for the χ_r^4 term (denoted as Γ̂_χ_r^4) and that for the ξ_r^4 term (denoted as Γ̂_ξ_r^4) as follows (χ_r and ξ_r are introduced in Eq. (<ref>)): Γ̂_χ_r^4 = Γ_χ_r^4^tree + Γ_χ_r^4^1PI, Γ̂_ξ_r^4 = Γ_ξ_r^4^tree + Γ_ξ_r^4^1PI, where we have separately indicated the tree level and the one-loop 1-Particle Irreducible (1PI) diagram contributions. Let us concentrate on the O(g_1^4) terms, so that we do not take into account the contribution from the wave function renormalization of the scalar fields, which provides O(g_1^2) terms in the β-function. The terms arising from the tree level diagrams turn out to be: Γ_χ_r^4^tree = -6(ρ_1 + ρ_2), Γ_ξ_r^4^tree = -12ρ_3 = -6ρ_1 -3ρ_2 -12δ_3, where we used Eq. (<ref>). From the one-loop 1PI diagrams, we obtain the following contribution to the O(g_1^4) term: Γ_χ_r^4^1PI = 18g_1^4/(16π^2)lnμ^2 + ⋯, Γ_ξ_r^4^1PI = 0 + ⋯, where we have displayed only terms proportional to lnμ^2, with μ being an arbitrary scale from the dimensional regularization. Because the renormalized vertex function must not depend on μ, the following equation should be satisfied: d/dlnμ Γ̂_χ_r^4 = d/dlnμ Γ̂_ξ_r^4 = 0, from which we obtain β(ρ_1)|_g_1^4 = -6g_1^4/(16π^2) - 4β(δ_3), β(ρ_2)|_g_1^4 = 12g_1^4/(16π^2) + 4β(δ_3), where the β-function for a parameter X is defined by β(X) ≡ d/dlnμ X. Next, let us consider the χ^++χ^--χ_rχ_r and χ^++χ^-χ^-χ_r vertices. By following the same steps, we get: Γ̂_χ^++χ^--χ_rχ_r = -2ρ_1 + 6g_1^4/(16π^2)lnμ^2 + ⋯, Γ̂_χ^++χ^-χ^-χ_r = -√(2)ρ_2 + ⋯, which give β(ρ_1)|_g_1^4 = 6g_1^4/(16π^2), β(ρ_2)|_g_1^4 = 0. By comparing Eqs. (<ref>) and (<ref>), it is clear that we need a non-vanishing contribution from δ_3, otherwise the β-functions for the same coupling obtained by considering different vertices do not have the same form. In particular, compatibility requires: β(δ_3) = -3g_1^4/(16π^2). Conversely, δ_3 vanishes in the custodial symmetric potential (together with all the other δ-terms), thus giving rise to the mentioned inconsistency in the computation of the β-functions. This issue is not particular to ρ_1 and ρ_2, but rather it is common to all the other couplings in the custodial limit. Therefore, in order to obtain a consistent description in terms of the RGEs, we need to introduce the custodial symmetry breaking parameters, or in other words, we need to start from the most general potential given in Eq. (<ref>). In App. <ref>, we present the expressions of the one-loop β-functions for all the 16 parameters of the general potential, those for the three gauge couplings, and those for the top and bottom Yukawa couplings. In Fig.
<ref>, we show the scale dependence of the dimensionless couplings, which are evaluated by numerically solving the one-loop RGEs. We here take all the δ_i parameters to be zero at the initial scale μ_0 = m_Z, namely, we assume the custodial symmetric scenario at μ_0. The three panels display the running behaviour for three different configurations of the initial values of the ρ_1, ρ_2, σ_1 and σ_2 parameters. We can see that the values of δ_i become non-zero at μ > μ_0 and their magnitudes monotonically increase, but the maximal value of |δ_i| at μ > μ_0 is typically smaller than the maximal magnitude of the other running scalar couplings at the same scale μ. We will further discuss the values of the running δ_i parameters and their relative size to the other running scalar parameters at high energies in the next section. Depending on the initial values, Landau poles can appear at different energy scales, e.g., μ∼ 10^16 and ∼ 10^17 GeV in the center and right panel of Fig. <ref>, respectively. Requiring the absence of Landau poles within a certain energy scale constrains the parameter space. This feature will be discussed in the next section. § NUMERICAL RESULTS In this section, we discuss some numerical consequences of the evolution in energy of the couplings of the GM model by using the one-loop RGEs. We use the general setup but assume the custodial SU(2)_V symmetry in the Higgs potential at low energy in order to keep the electroweak ρ parameter to be unity. This is realized by taking δ_i → 0 as defined in Eq. (<ref>). We first survey the parameter region allowed by the bounds from vacuum stability and triviality as functions of the cutoff scale Λ_cutoff. The former one is defined in such a way that all the inequalities given in Eq. (<ref>) are satisfied up to Λ_cutoff, in which all the dimensionless parameters should be understood as functions of the scale μ. The latter is defined by requiring that there is no Landau pole up to Λ_cutoff. Here, we impose the following criteria as the triviality bound for all the dimensionless parameters: |λ(μ)| ≤ 4π, |ρ_i(μ)| ≤ 4π, |σ_j(μ)| ≤ 4π for μ_0 ≤μ≤Λ_cutoff, where i=1,…,5 and j=1,…,4. The initial scale μ_0 is fixed to be m_Z. In addition to the vacuum stability and triviality bounds, we also require that all the squared masses for the physical Higgs bosons are positive at μ_0. We want to show the behaviour of the custodial symmetry breaking parameters δ_i at high energies according to the evolution of the parameters as given by the RGEs. In particular, we want to check if the custodial symmetry is only weakly broken at high energies. Since we take the custodial symmetric scenario (δ_i → 0) at μ_0, all the other parameters at μ_0 are determined according to Eq. (<ref>). In the numerical analysis, we choose the following 7 parameters in the potential, with δ_i = 0, as inputs: ρ_1^0, ρ_2^0, σ_1^0, σ_2^0, μ_1^0, μ_3^0, tanβ^0, where X^0 ≡ X(μ_0). Notice that the tadpole conditions vary by changing μ, so that the value of tanβ also depends on μ. For this reason, we introduce tanβ^0=tanβ(μ_0). The value of λ^0 is determined so as to satisfy m_h=125 GeV. We first consider the case with ρ_1^0=ρ_2^0 = σ_1^0=σ_2^0=0 as a starting point. In Fig. <ref>, we show the allowed parameter space on the μ_1^0–tanβ^0 plane with μ_3^0=0. The black, blue and red shaded regions are allowed from the requirement of Λ_cutoff≥ 10^4, 10^8 and 10^15 GeV, respectively. In this figure, we also show the contour of the scaling factor κ_V^0, whose tree level formula is given in Eq.
(<ref>). We see that a large Λ_cutoff is allowed in a limited interval of tanβ^0, depending on the value of μ_1^0. For example, the allowed region with Λ_cutoff≥ 10^15 GeV is obtained in the case with 3≲tanβ^0 ≲ 10 (20≲tanβ^0 ≲ 80) for μ_1^0 = 100 (1000) GeV. This can be understood by the fact that this region requires a smaller value of λ^0 to satisfy m_h=125 GeV as compared to the outside region, which pushes the appearance of the Landau pole to a higher energy scale. We also see that in this configuration, κ_V^0 > 1 is predicted in most of the parameter region on this μ_1^0–tanβ^0 plane. Finally, we checked that the allowed region from the triviality and the vacuum stability bounds and the behavior of κ_V^0 do not depend so much on the value of μ_3^0, as long as we take μ_3^0 to be not so large as to give a negative value of m_H_5^2. In fact, by requiring m_H_5^2 > 0, from Eq. (<ref>) we obtain μ_3^0 < 2μ_1^0tan^2β^0 - (v/2)(ρ_2^0cosβ^0 + 3σ_2^0 tanβ^0 sinβ^0). Let us show the previously derived bounds in terms of the masses of the extra Higgs bosons, namely, the custodial 3-plet mass m_H_3 and the 5-plet mass m_H_5 at μ_0. In Fig. <ref>, we show the allowed parameter space on the m_H_3–m_H_5 plane with ρ_1^0 =ρ_2^0 =σ_1^0 =σ_2^0 =0 and fixed values of tanβ^0, i.e., tanβ^0 = 5 (left) and tanβ^0 = 10 (right). Again, we show the contour of the κ_V^0 value by the green dashed curves. Similarly to Fig. <ref>, the black, blue and red shaded regions are allowed by requiring Λ_cutoff to be larger than 10^4, 10^8 and 10^15 GeV, respectively. In this plot, the values of μ_1^0 and μ_3^0 are determined for each point on this plane through Eqs. (<ref>) and (<ref>). As a typical behavior, larger m_H_3 and m_H_5 are allowed with higher Λ_cutoff for the case with larger values of tanβ^0. This property can also be seen in Fig. <ref>, where a larger value of μ_1^0, which provides larger values of m_H_3 and m_H_5, is allowed for a larger value of tanβ^0. It is also seen that the region with m_H_3≥ m_H_5 and κ_V^0>1 is favored by the triviality and the vacuum stability bounds. Now, let us consider the case with boundary conditions different from ρ_1^0=ρ_2^0=σ_1^0 = σ_2^0 =0. In Fig. <ref>, each dot is allowed by the triviality and vacuum stability bounds with Λ_cutoff≥ 10^15 GeV in the case of μ_1^0 = 100 GeV, μ_3^0=0 and tanβ^0 = 5. Here, we scan the four inputs (ρ_1^0,ρ_2^0,σ_1^0,σ_2^0) within the range from -1 to +1. From the upper (lower) panels, we can see the allowed region on the ρ_1^0–ρ_2^0 (σ_1^0–σ_2^0) plane. We checked that the shape of the allowed region does not change so much if we change the values of (μ_1^0,μ_3^0,tanβ^0), as long as they are allowed with Λ_cutoff≥ 10^15 GeV as shown in Fig. <ref>. In this figure, the dots in the left panels show the range of Max(|δ_i|) with i=3,…,7, and those in the right panels represent the range of the ratio R defined by R ≡ Max(|δ_i|)/Max(|λ|,|ρ_j|,|σ_k|) with j=1,2 and k=1,2. The three different colors show the different ranges of Max(|δ_i|) or R, where the range is indicated inside the figure. We find that at μ=10^14 GeV the value of Max(|δ_i|) can go up to ∼ 0.6, which is ∼ g_2^0, while the value of R is smaller than 1.
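The origin of these running δ_i can be made concrete with a minimal numerical sketch. This is an illustration under simplifying assumptions, not the full analysis of this section: it keeps only β(δ_3) = -3g_1^4/(16π^2) from Sec. <ref> together with β(g_1) from App. <ref>, neglects the scalar-coupling contributions to the running, and uses an approximate input value for g_1(m_Z).

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                  # t = ln(mu/m_Z)
    g1, d3 = y
    k = 1.0 / (16.0 * np.pi**2)
    return [k * (47.0/6.0) * g1**3,   # one-loop beta(g_1)
            -k * 3.0 * g1**4]         # pure-gauge piece of beta(delta_3)

g1_mz = 0.36                    # approximate hypercharge coupling at m_Z (assumed input)
sol = solve_ivp(rhs, (0.0, np.log(1e14 / 91.19)), [g1_mz, 0.0], rtol=1e-10)
g1, d3 = sol.y[:, -1]
print("g1(10^14 GeV) =", g1, "  delta_3 =", d3)
# cross-check: at this order delta_3(mu) = -(9/47) [g_1^2(mu) - g_1^2(m_Z)]
print("analytic      =", -(9.0/47.0) * (g1**2 - g1_mz**2))

In this gauge-only approximation δ_3 stays at the percent level up to μ = 10^14 GeV; the larger values of Max(|δ_i|) found in the scan therefore originate mostly from the scalar-coupling terms that feed the δ_i once they become non-zero.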
In addition, by looking at the upper-left figure, the value of Max(|δ_i|)∼ 0.6 is only reached for large |ρ_j^0| values such as ρ_1,2≃ 0.4 with ρ_2,1≃ -ρ_1,2, while in most of the region with |ρ_j|≲ 0.3 we have a milder value of Max(|δ_i|)≲ 0.3. On the contrary, by looking at the upper-right figure, we find that a larger value of R (but still less than 1) is obtained for smaller |ρ_j^0| values. If we look at the lower-left figure, it is difficult to see a correlation between the value of Max(|δ_i|) and the σ_k^0 parameters. This suggests that the value of Max(|δ_i|) is almost determined by the ρ_j^0 parameters, which are blind in this plane. The upper and lower right figures show a similar behavior of R, i.e., smaller values of |σ_k^0| give a larger value of R. Summarizing, we have checked that, if we vary the initial conditions on ρ_1^0, ρ_2^0, σ_1^0 and σ_2^0 in a natural range, the custodial symmetry breaking parameters δ_i keep values smaller than the other parameters in the potential. Finally, we show the predictions for the masses of the Higgs bosons at μ = 1 TeV to see how the running parameters δ_i affect the spectrum. In order to calculate the Higgs boson masses at μ > μ_0, we need to evaluate not only the running of the dimensionless couplings, but also that of the dimensionful parameters μ_1,2,3 and m_ϕ,χ,ξ^2 (their one-loop β-functions are presented in App. <ref>). At a given scale μ, we need to re-impose the tadpole conditions, which give three different values of v_ϕ, v_χ and v_ξ. We find that the difference between v_χ and v_ξ at the TeV scale is quite small, i.e. at the O(1) GeV level, so that the mass formulae given in App. <ref> give a good enough approximation to derive the spectrum at μ=1 TeV. In Tab. <ref>, we show the running masses of the SU(2)_V 5-plet Higgs bosons (m̅_H_5^±±, m̅_H_5^±, m̅_H_5^0), the 3-plet Higgs bosons (m̅_H_3^±, m̅_H_3^0) and the singlet Higgs boson m̅_H at μ=1 TeV for the three different sets of the initial values at μ_0=m_Z written in the first column of the table. For the input values at μ_0, we here fix m_H_5, m_H_3 and tanβ^0 instead of inputting μ_1^0, μ_3^0 and tanβ^0, and also take ρ_1^0=ρ_2^0=σ_1^0=σ_2^0=0. All three sets are allowed by both the triviality and vacuum stability bounds with Λ_cutoff > 10^15 GeV. We note that other choices with non-zero values of the inputs ρ_1,2^0 and σ_1,2^0 do not change the mass spectrum at 1 TeV so much from the results given in this table, as long as we assume Λ_cutoff > 10^15 GeV. We can see that the breaking of the mass degeneracy among the 5-plet Higgs bosons and that among the 3-plet Higgs bosons is only at the O(1) GeV level. In addition, the running mixing angle γ̅ between H_3^± and H_5^± is given to be ∼ 0.1 or smaller. From the above results, we conclude that in the TeV region the mass spectrum of the Higgs bosons or, equivalently, the Higgs potential with the custodial SU(2)_V symmetry still provides a good approximation to describe the scenario once the loop effect of the custodial symmetry breaking is taken into account. Before closing this section, let us briefly comment on the signatures of the 5-plet and 3-plet Higgs bosons and the current bounds on their masses at collider experiments. Concerning the 5-plet Higgs bosons, since they do not couple to fermions at tree level, their main decay modes are given by diboson channels, i.e., H_5^±±→ W^± W^±, H_5^±→ W^± Z and H_5^0→ W^+W^-/ZZ (see, e.g., <cit.>). In Ref.
<cit.>, the 95% CL upper limit on the branching ratio (H_5^±±→ W^± W^±) times the cross section of the vector boson fusion process (qq̅' → qq̅' W^± W^±→ qq̅' H_5^±±) has been set using the 8 TeV data at the LHC with an integrated luminosity of 19.4 fb^-1. From this analysis, the 95% CL lower bound on the mass of H_5^±± can be extracted to be about 300 GeV when the triplet VEV v_Δ is taken to be 25 GeV, corresponding to tanβ≃ 3.3. These bounds become weaker for smaller (larger) values of v_Δ (tanβ)[In Ref. <cit.>, the mass bound on doubly-charged Higgs bosons H^±± decaying into W^± W^± was also derived in the Higgs triplet model, whose Higgs sector is composed of one doublet (Y=1/2) plus one triplet (Y=1) field. From the pair production and the associated production with a singly-charged Higgs boson, the lower bound on m_H^±± was obtained to be about 84 GeV at 95% CL using the LHC Run-1 data set. A similar bound can be applied to the mass of H_5^±± in the GM model without depending on v_Δ.]. In Ref. <cit.>, a search for singly-charged Higgs bosons decaying into the WZ mode via the W and Z boson fusion process has been performed by using the 13 TeV data set at the LHC with an integrated luminosity of 15.2 fb^-1. The bound is much weaker than that obtained from the search for the W^± W^± channel. In fact, for v_Δ≲ 35 GeV (tanβ≳ 2.3), no bound is obtained on the mass of H_5^± at 95% CL. Concerning the 3-plet Higgs bosons, their phenomenological properties are quite similar to those of the singly-charged Higgs bosons and a CP-odd Higgs boson in the Type-I 2-Higgs doublet model (2HDM) in the alignment limit <cit.>. In our notation, tanβ plays the same phenomenological role as that in the Type-I 2HDM, i.e., the Yukawa couplings for H_3^± and H_3^0 are proportional to cotβ. Therefore, the main decay modes of H_3^± and H_3^0 are typically tb and tt̅, respectively, as long as these are kinematically allowed. For lighter 3-plet Higgs bosons below the tb and tt̅ thresholds, H_3^±→τν and H_3^0 → bb̅/ττ can be dominant, respectively. A dedicated study of the phenomenology of the 3-plet Higgs bosons has been done in Ref. <cit.>. § CONCLUSIONS We have discussed the high energy behavior of the GM model, particularly shedding light on the effect of the custodial symmetry breaking by using the one-loop RGEs. In order to obtain a consistent form of the one-loop β-functions, we start from the most general Higgs potential without the custodial SU(2)_V symmetry, which is described by 16 independent parameters in the case of CP-conservation. The custodial symmetric version of the potential is obtained by taking all the 7 δ_i parameters, describing the breaking of the custodial symmetry, to be zero. We then numerically derived the evolution with energy of the δ_i under the assumption that they all vanish at μ_0 = m_Z as initial condition. First, we surveyed the parameter region allowed by the triviality and the vacuum stability constraints as a function of the cutoff scale Λ_cutoff. Requiring the model to be consistent up to a high energy scale, e.g.
Λ_cutoff≥ 10^15 GeV, we obtain a strong correlation between the dimensionful trilinear coupling μ_1 and tanβ, and between the mass of the custodial 5-plet Higgs boson and that of the 3-plet Higgs boson at μ=μ_0. We then extracted the typical size of the δ_i parameters at high energies. We found that, in the configurations with Λ_cutoff≥ 10^15 GeV, the maximal value of |δ_i| can be up to ∼ 0.6 at μ = 10^14 GeV, and it is smaller than the maximal value of the input parameters in the potential (λ, ρ_1,2 and σ_1,2). In addition, in order to quantify the effects of the custodial symmetry breaking, we derived the running masses of the Higgs bosons and the running mixing angle γ̅ between H_3^± and H_5^± at μ = 1 TeV. We found that the deviation from the custodial symmetric limit is quite small, namely, the mass splitting among the Higgs bosons belonging to the same SU(2)_V multiplet is of the order of 1 GeV, and sinγ̅∼ 0.1. This means that once custodial symmetry is realized at low energy (m_Z scale), it also approximately holds at the TeV scale, which is now being surveyed at the LHC experiments. § RELATIONS AMONG SCALAR FIELDS Relations between the fields Φ and Δ defined in Eq. (<ref>) and ϕ, χ and ξ defined in Eq. (<ref>) are given as tr(Φ^†Φ)= 2ϕ^†ϕ, tr(Δ^†Δ)= 2tr(χ^†χ) + tr(ξ^2), tr(Φ^†τ^a/2Φτ^b/2)(P^†Δ P)^ab = -1/√(2)ϕ^†ξϕ-1/2[ϕ^T (iτ_2)χ^†ϕ + h.c.], tr(Δ^† t^aΔ t^b)(P^†Δ P)^ab = 6√(2)tr(χ^†χξ), [tr(Δ^†Δ)]^2= 4[tr(χ^†χ)]^2 + 2tr(ξ^4) + 4tr(χ^†χ)tr(ξ^2), tr(Δ^†ΔΔ^†Δ)= 6[tr(χ^†χ)]^2 -4tr(χ^†χχ^†χ) +2tr(ξ^4)+ 4tr(χ^†ξ)tr(ξχ), tr(Φ^†τ^a/2Φτ^b/2)tr(Δ^† t^aΔ t^b)= -ϕ^†ϕtr(χ^†χ) + 2ϕ^†χχ^†ϕ +√(2)(ϕ^†χξϕ^c + h.c.). We note tr(ξ^4)=[tr(ξ^2)]^2/2. § MASS FORMULAE Let us present the mass formulae for the Higgs bosons of the GM model with the general potential defined in Eq. (<ref>) and v_χ=v_ξ = v_Δ. The mass of the doubly-charged scalar states χ^±± (≡ H_5^±±) is given by m_H_5^±±^2 = v/4[4√(2)s_β t_βμ_2 -2c_βμ_3 - v(c_β^2ρ_2 +2s_β^2σ_2 +√(2)s_β^2σ_4)]. For the singly-charged scalar states, the weak eigenstates (ξ^±, ϕ^±, χ^±) are related to the mass eigenstates (G^±, H_3^±, H_5^±), with G^± being the Nambu-Goldstone (NG) bosons to be absorbed into the longitudinal components of the W^± bosons, by the following orthogonal transformation: [ ξ^±; ϕ^±; χ^± ] = [ 1/√(2) 0 -1/√(2); 0 1 0; 1/√(2) 0 1/√(2) ] [ c_β s_β 0; s_β -c_β 0; 0 0 1 ] [ 1 0 0; 0 c_γ -s_γ; 0 s_γ c_γ ] [ G^±; H_3^±; H_5^± ]. The mixing angle γ and the mass eigenvalues m_H_3^±^2 and m_H_5^±^2 for the H_3^± and H_5^± states, respectively, are expressed by m_H_3^±^2 = (M_±^2)_11 c_γ^2 + (M_±^2)_22 s_γ^2 + 2(M_±^2)_12 c_γ s_γ, m_H_5^±^2 = (M_±^2)_11 s_γ^2 + (M_±^2)_22 c_γ^2 - 2(M_±^2)_12 c_γ s_γ, tan 2γ = 2(M_±^2)_12/[(M_±^2)_11-(M_±^2)_22], where (M_±^2)_11 = v/8[4/c_β(μ_1 + √(2)μ_2) -v(σ_2 + √(2)σ_4)], (M_±^2)_22 = v/8[4s_β t_β(μ_1 + √(2)μ_2) -4c_βμ_3-v(s_β^2σ_2 +5√(2)s_β^2σ_4 -2c_β^2ρ_5)], (M_±^2)_12 = v/8[-4t_β(μ_1 - √(2)μ_2) -vs_β(σ_2 - √(2)σ_4)]. For the CP-odd scalar states, the weak eigenstates (χ_i, ϕ_i) are related to the mass eigenstates (G^0, H_3^0), with G^0 being the NG boson to be absorbed into the longitudinal component of the Z boson, by the following orthogonal transformation: [ χ_i; ϕ_i ] = [ c_β -s_β; s_β c_β ] [ G^0; H_3^0 ]. The squared mass m_H_3^0^2 for H_3^0 is expressed by m_H_3^0^2 = √(2)μ_2 v/c_β - √(2)/4v^2σ_4.
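As a quick symbolic cross-check of the charged-sector formulae above, anticipating the custodial limit discussed next, one can verify that the mixing element (M_±^2)_12 vanishes once the custodial relations μ_2 = μ_1/√2 and σ_4 = σ_2/√2 of Eq. (<ref>) are imposed. A minimal sympy sketch (the variable names are ours):

import sympy as sp

v, tb, sb, mu1, sig2 = sp.symbols('v t_beta s_beta mu_1 sigma_2', positive=True)
mu2 = mu1 / sp.sqrt(2)      # custodial relation mu_2 = mu_1/sqrt(2)
sig4 = sig2 / sp.sqrt(2)    # custodial relation sigma_4 = sigma_2/sqrt(2)
# (M_pm^2)_{12} = (v/8)[-4 t_beta (mu_1 - sqrt(2) mu_2) - v s_beta (sigma_2 - sqrt(2) sigma_4)]
M12 = v/8 * (-4*tb*(mu1 - sp.sqrt(2)*mu2) - v*sb*(sig2 - sp.sqrt(2)*sig4))
print(sp.simplify(M12))     # -> 0

The same substitutions, supplemented by ρ_3 = ρ_1/2 + ρ_2/4, ρ_4 = ρ_1 + 3ρ_2/2 and σ_3 = σ_1/2 + σ_2/4, make (M_even^2)_13 and (M_even^2)_23 vanish as well, reproducing the block-diagonal structure quoted below.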
Finally, for the CP-even Higgs states, we define the following basis:[ ξ_r; ϕ_r; χ_r ]= ( [1/√(3) 0 -√(2/3); 0 1 0;√(2/3) 01/√(3) ]) [ H̃; h̃; H̃_5^0 ],where the three states H̃, h̃ and H̃_5^0 are not mass eigenstates in general.The squared mass matrix elements, in the basis (H̃, h̃ and H̃_5^0), are expressed as(M_even^2)_11 = v/6[2s_β t_β (μ_1 +2√(2)μ_2) +3/2c_βμ_3 + vc_β^2(2ρ_1+2ρ_2+ρ_3+2ρ_4)],(M_even^2)_22= 2s_β^2 v^2 λ, (M_even^2)_12= vs_β/√(6)[-μ_1 -2√(2)μ_2 + vc_β(σ_1+σ_2+σ_3+√(2)σ_4) ], (M_even^2)_13= v/6√(2)[-4s_β t_β(μ_1 - √(2)μ_2) +vc_β^2(2ρ_1+2ρ_2-2ρ_3-ρ_4)], (M_even^2)_23= vs_β/2√(6)[2√(2)μ_1-4μ_2 + vc_β (√(2)σ_1 + √(2)σ_2 -2√(2)σ_3-σ_4) ], (M_even^2)_33= v/6[ 2√(2)s_β t_β (√(2)μ_1 + μ_2) -3c_βμ_3 +vc_β^2(ρ_1+ρ_2+2ρ_3-2ρ_4)-9/√(2)vs_β^2σ_4].The relation of the basis (H̃,h̃,H̃_5^0) to the mass eigenstates is obtained by an orthogonal transformation: [ H̃; h̃; H̃_5^0 ]= R_even[ H; h; H_5^0 ], where R_even can be expressed in terms of three independent mixing angles. In the custodial symmetric limit defined in Eq. (<ref>), we obtain (M_±^2)_12 = (M_even^2)_13 = (M_even^2)_23 = 0,(M_±^2)_22(=m_ H_5^±^2) = (M_even^2)_33(=m_ H_5^0^2) = m_ H_5^±±^2, (M_±^2)_11(=m_ H_3^±^2) = m_ H_3^0^2.Therefore, we can clearly reproduce the custodial symmetric results, namely,( H_5^±±, H_5^±, H_5^0) and ( H_3^±, H_3^0)are the custodial 5-plet (H_5^±±,H_5^±,H_5^0) and the 3-plet (H_3^±,H_3^0), respectively.Because of the no mixing displayed in Eq. (<ref>), the Higgs bosons belonging to the different custodial multiplets are not mixed with each other.In addition, the degeneracy of masses for Higgs bosons belonging to the same custodial multiplet follows: m_H_5^^2= v/4[4s_β t_βμ_1 - 2c_βμ_3-v(c_β^2ρ_2 + 3s_β^2σ_2)], m_H_3^^2= v/c_βμ_1 -v^2/4σ_2.For the CP-even Higgs bosons, the 3× 3 matrix R_even becomes the block diagonal form as R_even = diag(R(α),1) whichis described by only one mixing angle α.We thus express the custodial singlet Higgs bosons H and h by the linear combination of the H̃ and h̃ states as:[ H̃; h̃;] = R(α) [ H; h ]. The two squared mass eigenvalues and the mixing angle α are expressed as m_H^2 = (M_even^2)_11 c_α^2 + (M_even^2)_22 s_α^2+ 2(M_even^2)_12 c_αs_α, m_h^2 = (M_even^2)_11 s_α^2 + (M_even^2)_22 c_α^2- 2(M_even^2)_12 c_αs_α,tan 2α = 2(M_even^2)_12/(M_even^2)_11-(M_even^2)_22,where (M_even^2)_11 = v/8[ 8s_β t_βμ_1 + 2c_βμ_3 +vc_β^2(6 ρ_1 + 7ρ_2)], (M_even^2)_22= 2v^2s_β^2 λ, (M_even^2)_12= √(6)/8vs_β[-4μ_1 + vc_β(2σ_1+3σ_2) ]. § Β-FUNCTIONSIn this Appendix, we give the analytic expressions of the one-loop β-functions for all the model parameters.The definition of the β-function is given in Eq. (<ref>). The β-functions for the 3 gauge couplings g_i (i=1,2,3) and the Yukawa couplings for the top (y_t) and bottom (y_b) quarks are given byβ(g_3) =g_3^3/16π^2(-7), β(g_2)=g_2^3/16π^2(-11/6), β(g_1)=g_1^3/16π^247/6,β(y_t) =1/16π^2[9/2y_t^3+3/2y_b^3 -y_t(8g_3^2+9/4g_2^2+17/12g_1^2)],β(y_b) =1/16π^2[9/2y_b^3+3/2y_t^3 -y_b(8g_3^2+9/4g_2^2+5/12g_1^2)]. For the 10 dimensionless parameters in the potential given in Eq. 
(<ref>), we have 16π^2β(λ) =3/8(3 g_2^4+2g_2^2g_1^2+g_1^4 ) +24λ^2-6 (y_t^4+y_b^4) +3 σ_1^2+3 σ_1 σ_2+5 σ_2^2/4+6 σ_3^2+2 σ_4^2 -3λ(g_1^2+3 g_2^2-4y_t^2-4y_b^2), 16π^2β(ρ_1)= 15 g_2^4-12 g_1^2 g_2^2+6 g_1^4 +28 ρ_1^2+24 ρ_1 ρ_2+6 ρ_2^2+6 ρ_4^2+4 ρ_4 ρ_5+3ρ_5^2+2 σ_1^2+2 σ_1 σ_2-12ρ_1( g_1^2 + 2g_2^2), 16π^2β(ρ_2)=24 g_1^2 g_2^2-6 g_2^4+24ρ_1 ρ_2+18 ρ_2^2-2 ρ_5^2+σ_2^2-12 ρ_2 (2g_2^2 + g_1^2), 16π^2β( ρ_3)=2 (3 g_2^4+22 ρ_3^2+3 ρ_4^2+2ρ_4 ρ_5+ρ_5^2+2 σ_3^2-12 g_2^2 ρ_3), 16π^2β( ρ_4)=2 [3 g_2^4+ρ_4 (8 ρ_1+6 ρ_2+10 ρ_3+4ρ_4 )+2 ρ_5 (ρ_1+ρ_2+ρ_3)+ρ_5^2+2 σ_1 σ_3+σ_2 σ_3+σ_4^2-3ρ_4( g_1^2 +4 g_2^2)], 16π^2β(ρ_5)=2 [3 g_2^4+ρ_5 (2 ρ_1+4 ρ_3+8 ρ_4+5ρ_5)-σ_4^2 -3ρ_5(4 g_2^2 +g_1^2)], 16π^2β(σ_1)=3 g_1^4-6g_1^2g_2^2+6 g_2^4 +2 σ_1 (6 λ +8 ρ_1+6 ρ_2+2σ_1)+2 σ_2 (2 λ +3 ρ_1+ρ_2) +2(6 ρ_4 σ_3+2ρ_5 σ_3+σ_4^2)+σ_2^2 -3/2σ_1(5g_1^2 +11g_2^2-4y_t^2-4y_b^2), 16π^2β(σ_2)= 12g_1^2g_2^2 +4σ_2 [λ +ρ_1+2 (ρ_2+σ_1)+σ_2] +4 σ_4^2 -3/2σ_2(5g_1^2+11g_2^2 -4 y_t^2-4y_b^2),16π^2β(σ_3)= 3 g_2^4 + 2σ_3(6 λ +10 ρ_3+4 σ_3) + (3 ρ_4+ρ_5) (2σ_1+σ_2)+4 σ_4^2 -3/2σ_3( g_1^2 +11 g_2^2-4y_t^2-4y_b^2), 16π^2β(σ_4)=σ_4/2[4 (2 λ +2ρ_4-ρ_5 + 2σ_1+2σ_2+4σ_3)-3(3g_1^2+11 g_2^2-4y_t^2-4y_b^2)]. Finally, the β-functions for the dimensionful trilinear (μ_1,2,3) and bilinear (m^2_ϕ,χ,ξ) couplings are given by16π^2β(μ_1) = μ_1/2(8λ +16 σ_3 -3g_1^2 -21g_2^2 +12y_t^2 + 12y_b^2)+16μ_2σ_4 -2μ_3σ_2, 16π^2β(μ_2) =4μ_1σ_4+μ_2/2(8λ + 8σ_1 + 12σ_2 -9g_1^2 -21g_2^2 +12y_t^2+12y_b^2)-2μ_3σ_4, 16π^2β(μ_3) =-2μ_1σ_2 -8μ_2σ_4 + 2μ_3(2ρ_1+4ρ_2+4ρ_4-2ρ_5-3g_1^2-9g_2^2), 16π^2β(m_ϕ^2) =3/2m_ϕ^2(8λ -g_1^2 - 3g_2^2 +4y_t^2 +4y_b^2) +3m_χ^2(2σ_1 + σ_2)+12m_ξ^2σ_3+ 3μ_1^2 +12μ_2^2, 16π^2β(m_χ^2) = 2m_ϕ^2(2σ_1 + σ_2)+2m_χ^2(8ρ_1 +6 ρ_2 - 3g_1^2 - 6g_2^2)+4m_ξ^2(3ρ_4 +ρ_5)+4μ_2^2 + 2μ_3^2, 16π^2β(m_ξ^2) = 4m_ϕ^2σ_3 +2m_χ^2(3ρ_4+ρ_5) +4m_ξ^2(5ρ_3 - 3g_2^2 )+μ_1^2+μ_3^2. 1higgsG. Aad et al. [ATLAS and CMS Collaborations],JHEP 1608 (2016) 045 [arXiv:1606.02266 [hep-ex]].pdgC. Patrignani et al. [Particle Data Group],Chin. Phys. C 40, no. 10, 100001 (2016).HHG J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson,Front. Phys.80, 1 (2000). Logan-uniK. Hally, H. E. Logan and T. Pilkington,Phys. Rev. D 85, 095017 (2012) [arXiv:1202.5073 [hep-ph]]; K. Earl, K. Hartling, H. E. Logan and T. Pilkington,Phys. Rev. D 88, no. 1, 015002 (2013) [arXiv:1303.1244 [hep-ph]]. hVV1A. Falkowski, S. Rychkov and A. Urbano,JHEP 1204, 073 (2012).[arXiv:1202.1532 [hep-ph]].hVV2S. Chang, C. A. Newby, N. Raj and C. Wanotayaroj,Phys. Rev. D 86, 095015 (2012).[arXiv:1207.0493 [hep-ph]].hVV3S. Kanemura, M. Kikuchi and K. Yagyu,Phys. Rev. D 88, 015020 (2013) [arXiv:1301.7303 [hep-ph]].GM1H. Georgi and M. Machacek,Nucl. Phys. B 262, 463 (1985).GM2M. S. Chanowitz and M. Golden,Phys. Lett. B 165, 105 (1985).Logan-genH. E. Logan and V. Rentala,Phys. Rev. D 92, no. 7, 075011 (2015) [arXiv:1502.01275 [hep-ph]].GVW2J. F. Gunion, R. Vega and J. Wudka,Phys. Rev. D 42, 1673 (1990). God S. Godfrey and K. Moats,Phys. Rev. D 81, 075026 (2010) [arXiv:1003.3033 [hep-ph]].GM-LHC C. W. Chiang, A. L. Kuo and K. Yagyu,JHEP 1310, 072 (2013) [arXiv:1307.7526 [hep-ph]]; C. Englert, E. Re and M. Spannowsky,Phys. Rev. D 87, no. 9, 095014 (2013) [arXiv:1302.6505 [hep-ph]];C. W. Chiang, S. Kanemura and K. Yagyu,Phys. Rev. D 90, no. 11, 115025 (2014) [arXiv:1407.5053 [hep-ph]];C. W. Chiang and K. Tsumura,JHEP 1504, 113 (2015) [arXiv:1501.04257 [hep-ph]]; C. W. Chiang, A. L. Kuo and T. Yamada,JHEP 1601, 120 (2016) [arXiv:1511.00865 [hep-ph]];J. Chang, C. R. Chen and C. W. Chiang,arXiv:1701.06291 [hep-ph].CY C. -W. 
Chiang and K. Yagyu,JHEP 1301, 026 (2013) [arXiv:1211.2658 [hep-ph]].GM-ILC C. W. Chiang, S. Kanemura and K. Yagyu,Phys. Rev. D 93, no. 5, 055002 (2016) [arXiv:1510.06297 [hep-ph]].GVWJ. F. Gunion, R. Vega and J. Wudka,Phys. Rev. D 43, 2322 (1991).LoganK. Hartling, K. Kumar and H. E. Logan,Phys. Rev. D 90, no. 1, 015007 (2014) [arXiv:1404.2640 [hep-ph]].type2 T. P. Cheng and L. F. Li,Phys. Rev.D 22, 2860 (1980); J. Schechter and J. W. F. Valle,Phys. Rev.D 22, 2227 (1980);G. Lazarides, Q. Shafi and C. Wetterich,Nucl. Phys.B 181, 287 (1981); R. N. Mohapatra and G. Senjanovic, Phys. Rev.D 23, 165 (1981); M. Magg and C. Wetterich,Phys. Lett.B 94, 61 (1980). ww V. Khachatryan et al. [CMS Collaboration],Phys. Rev. Lett.114, no. 5, 051801 (2015)[arXiv:1410.6315 [hep-ex]].Const-WW S. Kanemura, K. Yagyu and H. Yokoya,Phys. Lett. B 726, 316 (2013) [arXiv:1305.2383 [hep-ph]];S. Kanemura, M. Kikuchi, K. Yagyu and H. Yokoya,Phys. Rev. D 90, no. 11, 115018 (2014) [arXiv:1407.6547 [hep-ph]];S. Kanemura, M. Kikuchi, H. Yokoya and K. Yagyu,PTEP 2015, 051B02 (2015) [arXiv:1412.7603 [hep-ph]].wzCMS Collaboration [CMS Collaboration],CMS-PAS-HIG-16-027.2hdmG. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva, Phys. Rept. 516, 1 (2012), arXiv:1106.0034 [hep-ph].
http://arxiv.org/abs/1704.08512v1
{ "authors": [ "Simone Blasi", "Stefania De Curtis", "Kei Yagyu" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170427112420", "title": "Effects of custodial symmetry breaking in the Georgi-Machacek model at high energies" }
Redshifts for galaxies in radio continuum surveys from Bayesian model fitting of HI 21-cm lines

We introduce a new Bayesian spectral line fitting technique capable of obtaining spectroscopic redshifts for millions of galaxies in radio surveys with the Square Kilometre Array (SKA). This technique is especially well-suited to the low signal-to-noise regime that the redshifted 21-cm emission line is expected to be observed in, especially with SKA Phase 1, allowing for robust source detection. After selecting a set of continuum objects relevant to large, cosmological-scale surveys with the first phase of the SKA dish array (SKA1-MID), we simulate data corresponding to their H I line emission as observed by the same telescope. We then use the nested sampling code MultiNest to find the best-fitting parametrised H I line profile, providing us with a full joint posterior probability distribution for the galaxy properties, including redshift. This provides high quality redshifts, with redshift errors Δ z / z < 10^-5, from radio data alone for some 1.8 × 10^6 galaxies in a representative 5,000 deg^2 survey with the SKA1-MID instrument with up-to-date sensitivity profiles. Interestingly, we find that the SNR definition commonly used in forecast papers does not correlate well with the actual detectability of an H I line using our method. We further detail how our method could be improved with per-object priors and how it may also be used to give robust constraints on other observables such as the H I mass function. We also make our H I line fitting code publicly available for application to other data sets.

large-scale structure of Universe — radio continuum: galaxies — techniques: spectroscopic

§ INTRODUCTION

The Square Kilometre Array (SKA)[<http://www.skatelescope.org>] will perform the kind of deep, wide surveys which are capable of delivering world-leading cosmological constraints from radio wavelengths using probes including galaxy clustering <cit.>, weak gravitational lensing <cit.>, intensity mapping <cit.> and ultra-large scale tests of general relativity and Gaussianity <cit.>. However, for those probes which require redshifts for individual sources, good redshift estimates may be difficult to obtain. The emission mechanism for the ordinary star-forming galaxies expected to form the bulk of sources in SKA cosmology catalogues is synchrotron, which has a uniform spectral slope of -0.7 across a large frequency range. This means the galaxies' spectra as measured by the SKA will be almost completely featureless, with redshift and flux entirely degenerate over the relevant frequency range. The expectation in previous analyses has been that redshifts could be obtained in two ways: spectroscopic redshifts from high-significance detections of the 21-cm line emission from the sources, or photometric redshifts from cross-matching the radio continuum sources with overlapping surveys at optical and near-infrared (nIR) wavelengths. However, the number of catalogue matches may be small (as found in <cit.>, though see also the larger matching fractions found in <cit.>), and photometric redshifts may be subject to significant uncertainties, biases and catastrophic outliers, so it will be important to extract as much information as possible from the 21-cm line. The lack of redshift information for e.g.
tomographic binning of sources in weak lensing is a potentially limiting factor on the cosmological power of SKA surveys. Here we investigate the ability of a Bayesian model-fitting approach to estimate the redshift of radio continuum sources using the 21-cm emission line, and apply the technique to simulated data from the first phase of the SKA mid-frequency dish array (SKA1-MID). We use the continuum detection of a galaxy as prior information, reducing the redshift estimation problem to fitting a six parameter model to a one dimensional data set. A similar approach has been implemented by <cit.>, who fit absorption line profiles using Gaussian mixture models. For emission line fitting, the SoFiA package <cit.> performs source finding using threshold methods on full three dimensional data cubes.

Using the technique described here, we find that high quality emission line redshifts (with high spectroscopic precisions and with outlier fractions at least as good as typical photometric redshift methods) may be obtained for around 10 per cent of the star-forming galaxies in a fiducial SKA1 continuum cosmology survey. We compare the catalogue resulting from this detection algorithm to that resulting from the Signal-to-Noise Ratio (SNR) definition and cut from <cit.>. We find a comparable number of sources, with a comparable redshift distribution, remain when the full detection algorithm is simulated, although we note that the exact set of sources differs significantly between the two catalogues.

This paper is organised as follows: in <ref> we introduce the Bayesian method used to fit a six parameter model to the line data; then in <ref> we describe the creation of the simulated data catalogue from the <cit.> simulation and the relevant observation parameters for SKA1-MID surveys in Band 1 and Band 2. <ref> then describes the results of this procedure, with <ref> detailing potential improvements and extensions and <ref> describing our conclusions. We also provide our code for simulation of SKA1-MID line catalogues, and our analysis code, at <http://github.com/MichelleLochner/radio-z> for application of our method to other simulations and data and to enable comparisons of the performance of other methods on our simulated data set.

§ A BRIEF INTRODUCTION TO BAYESIAN STATISTICS

The problem of spectral line fitting has two components: one is to robustly determine whether or not a spectral line is present, the other is to then find the best fitting parameters of the line model and their uncertainties. Both of these can be done elegantly within a Bayesian statistics framework. We therefore give here a very brief introduction to Bayesian statistics, referring the reader to (for example) <cit.> for a more in-depth review.

Bayes' theorem is given by:
P(θ | d) = P(θ) P(d | θ) / P(d),
where d represents the data and θ represents the vector of parameters for the chosen model. P(θ) is called the prior and is the probability distribution of the parameters, before any data is taken. This is usually derived from physical constraints for the problem at hand, for example ensuring a density parameter remains positive, but can also include constraints from previous experiments. P(d | θ) is the likelihood, the probability of the data, given a set of values for θ. This informs how likely a given set of parameters is, in light of the data at hand. P(θ | d) is the posterior and is generally the quantity scientists are interested in: what is my degree of belief in a chosen theory (with given parameters), now that I have taken some new data?
Lastly, P(d) is called the evidence or marginal likelihood and is a normalisation constant that is crucial to model selection (see below), but unimportant for parameter inference. It can thus be seen that Bayes' theorem is a prescription for how to update one's degree of belief in a particular theory, using a new set of data.

Bayesian inference then proceeds to determine the full posterior over all parameters in the model. The best fitting values for the parameters are those that maximise the posterior. However, to determine the uncertainty on each parameter, all other parameters must be marginalised over to produce a one-dimensional posterior. This marginalisation results in an integral over parameter space:
P(ϕ | d) = ∫ P(ϕ, ψ | d) dψ,
where ϕ is the parameter of interest and ψ is the vector of remaining parameters to be marginalised over. In the vast majority of problems, this integral cannot be solved analytically. Fortunately, several numerical techniques exist, including Markov Chain Monte Carlo <cit.> and Nested Sampling <cit.>, that make Bayesian parameter inference tractable.

Bayesian statistics also provides a framework in which to perform robust model selection. The important quantity for this is the evidence, which is computed, for a given model M, by integrating over the entire parameter space:
P(d | M) = ∫ P(d | θ, M) P(θ | M) dθ.
The evidence naturally incorporates an Occam's razor effect, penalising models with large prior volumes (for example with many parameters) unless they provide a significantly improved fit to the data. The evidence is used in model selection when comparing two models, M_1 and M_2, by computing the ratio of posterior odds (which is simply a further application of Bayes' theorem):
P(M_1 | d) / P(M_2 | d) = [P(d | M_1) / P(d | M_2)] [P(M_1) / P(M_2)].
Given that there is usually no strong reason to a priori prefer one model over another, the model priors P(M_1) and P(M_2) are often set to be equal, making the most important quantity the ratio of evidences for each model. This is known as the Bayes factor:
B = P(d | M_1) / P(d | M_2).
The Bayes factor can be directly used to select one model over another by simply comparing whether the evidence is greater for one than for the other. The Jeffreys scale <cit.> can be used to decide how strong the evidence is for one model over another, where ln(B) > 5 constitutes strong preference for M_1 and ln(B) > 0 weak.

§ BAYESIAN MODEL FITTING FOR H I LINE PROFILES

We now illustrate how Bayesian statistics can be used to solve the H I line detection and characterisation problem. We consider the case in which a galaxy has been detected in a radio continuum image, providing information on its sky location. This provides us with prior information on spectral line data: rather than searching a full image cube for the emission line from this galaxy, we may search a one dimensional data vector corresponding to this sky location (with filtering applied to leave the source spatially unresolved) and consisting of a series of flux measurements as a function of frequency, d(ν_i). We further assume that the 21-cm emission from the galaxy can be well modelled by the six parameter double horn profile Ψ(ν | z, Ψ^obs_max, Ψ^obs_0, w^obs_peak, w^obs_50, w^obs_20) described in <cit.>, an example of which is shown in <ref>. The two Ψ^obs parameters give the line heights at the maximum (i.e.
the peak of the horns) and the centre of the line (at the bottom of the dip between the horns) respectively, and the three w^obs parameters give the width of the line (which is a function of the galaxy's inclination angle and rotational velocity) at 100, 50 and 20 per cent of the peak height. We also assume here that the continuum component of the emission has been successfully removed, meaning we are only fitting the line emission. We do, however, note that our method could be readily extended to simultaneously fit a more complicated model including continuum emission, or more complicated line profiles such as the `Busy function' described in <cit.>.

With the full six parameters defined as θ, and the latter five (i.e. excluding z) as ψ, we then use the nested sampling code MultiNest <cit.> to map the full joint posterior distribution P(θ | d, M_line) given the data vector d and the model hypothesis M_line that there is an emission line present in the data. The redshift probability distribution for each source is then given by the marginalisation of this posterior distribution over the five shape parameters:
P(z | d, M_line) = ∫ P(θ | d, M_line) dψ.
Here we are not merely interested in the case where the emission line is significantly detected above some threshold, providing a `spectroscopic' redshift with an extremely narrow P(z), but in the properties of the full set of P(z) for all continuum detected sources. Even relatively broad P(z) (such as those often provided by photometric methods at optical and infrared wavelengths) can still provide extremely useful information in terms of redshift binning for cosmological surveys, being summed to provide an estimate of the redshift number distribution of sources n(z). For the six fitted parameters we adopt broad, uninformative priors, detailed in <ref> and set by the range depicted in the full simulated input catalogue from <cit.>. The python code takes around 10 minutes to run on a normal laptop for an average galaxy and is trivially parallelisable at the catalogue level.

§.§ Identifying false detections with the evidence

Observing with the SKA (and indeed all radio telescopes) takes place within frequency bands of finite width. For sources which are detected in continuum but whose 21-cm lines are redshifted to outside of the observing band, the data vector d(ν_i) will contain only noise, and other faint sources will be buried deep within the noise. In order to attempt to remove spurious detections of emission lines in noise-only data, we make use of Bayesian model selection, as outlined in <ref>. The two models we compare are the six-parameter profile outlined above, M_line, and a pure noise model, M_noise, where the signal is consistent with zero. We compute the Bayes factor, B, using <ref>, to eliminate spurious line detections and have control over the purity of the sample we produce.

In <ref> we present results with a variety of cuts on B, in order to exclude spurious noise detections, but we emphasise that such cuts are not strictly necessary, and that B could be used as a weight factor in derived analyses which make use of the P(z) obtained for all continuum sources, such as in estimating number density distributions n(z).

§ SIMULATED OBSERVATIONAL DATA

Before the advent of the SKA, a number of precursor and pathfinder telescopes will operate, with some performing emission line surveys, such as LADUMA on MeerKAT <cit.> and WALLABY on ASKAP <cit.>.
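As a concrete illustration of the fitting machinery of the previous section before we turn to the simulated data, the following minimal sketch shows the kind of profile model and Gaussian likelihood that enter the nested sampling. The simple two-Gaussian `double horn' shape, its parameter names and the 10 per cent horn width are illustrative stand-ins for the full <cit.> parametrisation used in our released code:

```python
import numpy as np

def double_horn(nu, nu0, w, psi_max, psi_0):
    """Toy symmetric double-horn profile: two Gaussian horns separated by w,
    sitting on a central plateau of height psi_0 (illustrative only, not the
    full parametrization used in the text)."""
    sigma = 0.1 * w  # horn width; an arbitrary choice for this sketch
    horns = np.exp(-0.5 * ((nu - nu0 - w / 2) / sigma) ** 2) \
          + np.exp(-0.5 * ((nu - nu0 + w / 2) / sigma) ** 2)
    dip = np.where(np.abs(nu - nu0) < w / 2, psi_0, 0.0)
    return np.maximum(psi_max * horns, dip)

def log_likelihood(theta, nu, d, sigma_rms):
    """Gaussian log-likelihood for uncorrelated per-channel noise of rms
    sigma_rms, given the data vector d(nu_i)."""
    model = double_horn(nu, *theta)
    return -0.5 * np.sum(((d - model) / sigma_rms) ** 2
                         + np.log(2.0 * np.pi * sigma_rms ** 2))
```

In a MultiNest-style analysis, this log-likelihood is supplied together with a prior transform mapping the unit hypercube to the physical parameters; the sampler then returns both the joint posterior samples and the evidence used to form B.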
Here we consider the performance of SKA1 surveys, in the understanding that the simulated data and populations will be similar for precursor surveys (MeerKAT will be integrated into SKA-MID and shares similar noise performance on each individual antenna).

§.§ Simulated catalogues

In order to generate a realistic population of sources with 21-cm line emission on which to try our technique, we make use of the set of galaxy line profiles from the S3-SAX population as described in <cit.>, with intrinsic galaxy properties coming from a semi-analytic prescription for H I emission painted on top of the Millennium N-body simulation <cit.>. In S3-SAX (which we will refer to as SAX for clarity), 21-cm line emission is parametrised according to a simple six parameter, symmetric double-horn profile (e.g. the magenta line in <ref>). The full catalogue contains models for emission from some 3 × 10^7 star-forming galaxy sources with H I masses down to 1.85 × 10^4 M_⊙, but here we are only interested in those sources which have been identified in a continuum observation and hence require redshift information. For a continuum galaxy sample we take the star-forming galaxies from the S3-SEX catalogue of <cit.> (which we will refer to as SEX) and rescale and cut them according to the requirements for weak lensing cosmology in SKA1-MID Band 2, as specified in <cit.>. This gives us a sample with a number density of resolved objects of n_gal = 2.7 arcmin^-2 and a median redshift z_m = 1.1. We model this cut in the catalogue from SAX as a redshift dependent cut on the H I mass M_HI (in solar masses):
M_HI > z × 10^9.5.
This provides us with a sample with the same median redshift as the continuum sample, z_m = 1.1; the exact number density is not however replicated by this cut (SAX and SEX were constructed independently and do not contain matched objects). We have experimented with different versions of <ref> and found the most representative results (in terms of redshift and mass distributions) were obtained by matching the median redshifts. We therefore quote our results below as fractions of sources in SAX, and normalise them relative to the numbers in SEX. For the sample selected, we expect the source size (as given by SAX) to be ∼ 2 ± 0.8 arcsec with ∼ 0.03 sources per beam at the lowest observing frequency (where the resolution will be ∼ 2 arcsec), meaning that some sources will be unresolved in the observation but problems from confusion (i.e. multiple sources within the same beam) will be rare.

Previously, <cit.> define the signal-to-noise ratio (SNR) of each galaxy as:
SNR_vel = (v_HI / w_peak^obs) √(w_peak^obs / δV) / S_rms(ν),
where v_HI is the velocity-integrated line flux in Jy km s^-1, w_peak^obs is the width between the peaks of the double-horned profile in km s^-1, and δV (in km s^-1) and S_rms (in Jy) are the velocity resolution and noise level of the experiment. They then cut the sample accordingly and use these sources with SNR_vel > 5 and SNR_vel > 10 as their detected samples of galaxies to use in forecasting cosmological constraints. This is very much not a detection algorithm and hence does not model effects such as false detections or catastrophic failures, and it is interesting to see if the objects regarded as `detected' by this approach are replicated by our method.

§.§ Experiments considered

We simulate the observation of the objects in the continuum detected population by a survey using the first phase of the SKA mid-frequency dish array (SKA1-MID). <cit.> specify galaxy redshift surveys of 5,000 deg^2 using 10,000 hours of observing time in Band 1 and Band 2 of SKA1-MID.
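As a reference point for this SNR_vel selection, the equation above is straightforward to evaluate; a minimal sketch, with illustrative input values only (the conversion of the 10 kHz channel width to δV ≈ 2.1 km s^-1 at 1.42 GHz is an assumption of this example):

```python
import numpy as np

def snr_vel(v_hi, w_peak, s_rms, delta_v):
    """SNR_vel as defined above: v_hi in Jy km/s, w_peak and delta_v in km/s,
    s_rms in Jy; returns the dimensionless SNR used for catalogue cuts."""
    return (v_hi / w_peak) * np.sqrt(w_peak / delta_v) / s_rms

# Illustrative numbers: a 200 km/s wide line with 0.05 Jy km/s of integrated
# flux, at the Band 2 reference sensitivity, with 10 kHz channels
# (~2.1 km/s at 1.42 GHz).
print(snr_vel(v_hi=0.05, w_peak=200.0, s_rms=187e-6, delta_v=2.1))
```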
For each band we model the relative noise profile across the band from <cit.> (see <ref> for a full description), calculating S^ref_rms, the sensitivity at ν = 1GHz for comparison purposes (in the simulations we model the full frequency-dependent noise profile). The Band 2 survey is sensitive to sources with true redshift 0 < z < 0.58 (corresponding to a frequency range 950 - 1420MHz) down to a reference noise level of S^ref_rms = 187μJy. The Band 1 survey has a frequency coverage of 350 - 1050MHz, corresponding to a redshift range of 0.35 < z < 3.06 but with a higher noise level (driven mostly by the increase in sky temperature at lower frequencies) of S^ref_rms = 315μJy.We assume δν = 10kHz frequency channels covering this bandwidth, giving a total of 50,000 channels. As each data point corresponds to a correlation of different antenna pairs at different time points, the per-frequency channel noise can be modelled as uncorrelated and Gaussian with the relevant S_rms(ν) <cit.>. An example of the data sets considered can be seen in <ref> which shows an SKA output both across the full Band 2 (with velocities given with respect to the rest frame velocity for the 21-cm line) and zoomed in around the centre of the line profile.§ RESULTS AND DISCUSSION For each of the Band 1 and Band 2 surveys described in <ref>, we find all of the lines in the input catalogue which have SNR_ vel > 1 and estimate the posterior distributions forusing . We assume that a line with SNR_ vel≤ 1, is essentially undetectable and so do not fit these to save on computational resources. <ref> strongly suggests that B and SNR_ vel are correlated enough that we do not miss out on a large number of detectable lines. This fitting results in a full joint posterior distribution for all of the source parameters; here we present results for the redshift, derived from the estimate of the line centre velocity v_0 when marginalised over the other five parameters.In summary, <ref> shows the numbers of detected sources (black line, right axes) and outliers (coloured lines, left axes) as a function of the sample cut on B, <ref> shows two dimensional histograms of the recovered redshifts, which are shown in one dimension and compared to previous results in <ref>. <ref> gives numbers of redshifts available for the two experiments considered, along with outlier performance.§.§ Recovery ratesThough we stress above the benefits of using the full posterior P(z) and the Bayes Factor B as inputs to downstream cosmological analyses as the best way to avoid biases, in order to evaluate our method, here we present results with the estimated redshift z_ est as the Maximum a Posteriori (MAP) redshift from the P(z) and applying a cut to our sample based on the B, in order to remove spurious fits to noise features.We define N_ band as the number of input sources from the catalogue which have their 21-cm emission line redshifted into the relevant observing band. Using this cut and z_ est we calculate the f_ tot fraction of these sources which have their redshift recovered by each method. We also calculate the number of sources these fractions correspond to when applied to the fiducial 5000^2 continuum cosmology survey:N_ 5k = f_ totN^ cont_ band,where N^ cont_ band is the number of sources in the continuum survey which have 21-cm lines redshifted into the relevant observing band. 
This N_ 5k is then the number of redshifts a given method may be expected to provide.§.§ Catastrophic outliers From the full posterior P(z) distribution for each source, we calculate a number of metrics for redshift quality, quantifying the distance between z_ est and the true redshift z_ true. Two are `catastrophic outlier fractions':η = N_ out / N_ band,where N_ band is the total number of sources which have lines redshifted into the relevant observing band, and N_ out is the number of these sources with redshift estimates far enough from the true redshift to be classified as outliers.We use two classifications of outliers, with the first given by Table 2 of <cit.> as the number of redshift estimates with:| z_ est - z_ true/1 + z_ true| > 0.15,the outlier fraction for which we refer to as η_ H. The second is given by <cit.> as redshift estimates with:| ln1 + z_ est/1 + z_ true| > 0.2,the outlier fraction for which we refer to as η_ B. Typical catastrophic outlier fractions defined in this way for photometric redshift estimators using optical and near-IR data are ∼ 2 per cent and outlier fractions for our method are shown as a function of the B cut in <ref>. We also show in <ref> the fraction η_3σ, for which N_ out is given by the number of lines for which z_ true is outside of the Credible Interval containing 99 per cent of the posterior probability mass, equivalent to being outside of the 3 σ region of a Gaussian likelihood (the appearance of this line in <ref> is due to low absolute numbers of sources making up the curves). <ref> shows the completeness (detected fraction, right axes) and purity (1 - η, left axes) for the Band 1 and 2 surveys described above as a function of the Bayes factor B used to cut the sample. As can be seen, detection fractions f_ tot, are relatively low at ∼0.25 per cent for Band 1 and ∼10 per cent for Band 2, reflecting the low sensitivity of SKA1-MID to the relatively faint signal in these redshift ranges. We chose ln(B) > 0 as a fiducial cut, and present results below for this sample.We also perform 1000 realisations of Band 1 and 1000 realisations of Band 2 data containing no signal and only noise, and again attempt to fit a six parameter double horn profile to the data. This allows us to quantify the effect of spurious detections when continuum detected galaxies have their 21-cm line redshifted outside of the relevant observing band. We find that, out of the 2000 noise-only realisations simulated across both bands, only 22 (all in Band 2) have a value for the Bayesian evidence favouring the signal model over the noise model, with our fiducial cut of .§.§ Redshift estimatesThe left and right panels of <ref> shows two dimensional histograms of z_ est against z_ true for the ln(B) > 0 sample in SKA1-MID Bands 1 and 2 respectively, with the summary statistics also presented in <ref> and four examples of marginalised P(z) distributions shown in <ref>. Good redshift recovery can be seen for the majority of sources included after thecut, with relatively few outliers. Redshifts are well recovered across Band 2, but are extremely few above z∼1 in Band 1. In <ref> we show one dimensional histograms of the estimated redshifts of the sources recovered by our method with thecut, along with the true redshift distribution of these sources, showing excellent agreement. For reference, we also cut our simulated sample withand SNR_ vel > 10 as performed in <cit.> and <cit.> respectively for their cosmological forecasts and display the redshift range of the resulting sample. 
The SNR_vel-selected and ln(B)-selected samples share highly similar redshift distributions, even if the exact sets of objects selected by the two cuts are not the same, as discussed in <ref>.

It should be noted that the <cit.> and <cit.> studies were performed before the re-baselining of SKA1-MID, which resulted in significant changes to the sensitivity curves of both Band 1 and Band 2, meaning they are not directly comparable to the samples presented here <cit.>. These previous studies also allowed line emission to be found for all sources in the sample, without requiring the continuum detection we implement via <ref>, which would require blind line finding in the full spatial and spectral resolution data cubes emanating from the telescope. If continuum selection is used, we estimate data volumes may be decreased from 100 TB for a single pointing (3600 × 3600 spatial pixels covering 1 deg^2 over 10000 frequency channels) for the full cube to 0.075 TB per pointing for one dimensional, 10000 frequency channel data vectors for all of the 2.7 galaxies per square arcminute.

§.§ Relation to SNR definitions

<ref> shows the relation between two measures of significance of line redshift detection: the recovered Bayes factor B and the SNR_vel definition. Whilst a correlation can be seen between the two measures of detection significance, there are important differences. The two dashed lines mark the fiducial cuts for the two approaches: the vertical line for SNR_vel > 5 and the horizontal line for ln(B) > 0. These two cuts select two different populations in detail: the upper right quadrant is selected by both methods, the upper left by only the ln(B) > 0 cut, the lower right by only the SNR_vel > 5 cut, and the lower left quadrant by neither. We note that with a real line detection technique, an SNR cut even of 10 neither guarantees detection of galaxies above the threshold nor precludes detection of galaxies well below it. In <ref> we show two examples of SNR_vel = 5 lines with representative noise for the Band 2 survey. Whilst both of these lines have the same SNR_vel, the narrower, taller line is significantly detected, with ln(B) = 4.7, whilst the shorter, broader line is not, having only ln(B) = -2.6.

We stress that the results presented here are highly robust, coming from a full simulation of the relevant data and actual application of the detection methods, rather than simply calculating <ref> for the input catalogue and applying a cut, a procedure which will not model the real recovery rate and issues such as outliers and false detections. We therefore believe that the sample selected by ln(B) > 0 is much closer to the set of galaxies with redshifts which will be truly detectable with SKA1-MID.

§ EXTENSIONS AND IMPROVEMENTS

The method we have presented here fits a six parameter model with broad, uninformative priors to the one dimensional data sets considered. However, a less conservative and more constraining approach, changing the priors in <ref> to be more informative, could be well-motivated. On a catalogue level, the results of previous surveys could be used as priors on e.g. the redshift distribution and luminosity functions of the sources, taking the form of non-uniform priors on the v_0 and Ψ^obs parameters.

More informative priors could also be used individually for each source from auxiliary data. In particular, the continuum size, shape, flux, orientation and velocity dispersion are all correlated with, and can be used to form useful priors on, the parameters describing the line profile for the source.
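For instance, a per-object Gaussian prior on the line width, with mean and width derived from scalings of the measured continuum properties, could replace the flat prior through the sampler's prior transform. A minimal sketch follows; the mapping from continuum properties to (w_mean, w_sigma) is a hypothetical placeholder, not taken from the text:

```python
import numpy as np
from scipy.special import erfinv

def width_prior_transform(u, w_mean, w_sigma):
    """Map a unit-cube sample u (as used by MultiNest-style samplers) to the
    line-width parameter, replacing a flat prior with a per-object Gaussian
    N(w_mean, w_sigma) derived from auxiliary continuum data (the derivation
    of w_mean and w_sigma from continuum size/orientation is assumed here)."""
    return w_mean + w_sigma * np.sqrt(2.0) * erfinv(2.0 * u - 1.0)
```

This inverse-CDF mapping leaves the likelihood untouched, so informative priors can be added per object without modifying the rest of the fitting pipeline.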
For resolved sources, morphology information could also be used; for example in constraining more parameters in the line profiles such as asymmetric heights between the two horn peaks (as may be expected for sources with irregular shapes). Model extensions could also allow for mitigation of Radio Frequency Interference (RFI) contamination of the spectra, for example by allowing Bayesian model comparison between RFI feature models and the emission line profile.Further extensions could also be made to the six parameter model, allowing for inclusion of an un-subtracted continuum flux or more complicated emission line profiles <cit.> using only a small number of extra parameters in the fit.Additionally, in the event of confused sources, Bayesian model selection could be used to determine the optimal number of sources and disentangle the confused signals.The choice of radio continuum sources as the source of position information is not unique. For instance, one could instead choose position priors from surveys at optical wavelengths, such as Euclid or LSST, and perform emission line fits to the data at these locations, potentially providing spectroscopic-like redshifts from the radio data. This has the potential to be extremely effective with the advent of the full SKA (SKA2), which is expected to have up to 10 times the sensitivity of SKA1 – the optimistic scenario for SKA2 in <cit.> containing the same numbers of spectroscopic sources as a Euclid imaging survey up to z∼2.As well as source redshifts, our results can also provide full posterior probability distributions for other parameters which are derived from the shape of the emission line, such as the mass of galaxies – a highly important quantity in studies of the star formation and galaxy evolution across cosmic time. § CONCLUSIONS We have investigated the ability of Bayesian model fitting of 21-cm line emission to provide redshifts for galaxies in radio continuum surveys. We use the continuum detection of a galaxy to provide its sky location and fit a six parameter model to the resulting one dimensional data set.When this redshift estimation method is applied to simulations of SKA1-MID observations and a cut, on the Bayesian evidence ratio between signal and noise models, ofapplied, we find that we may recover redshifts with good confidencefor up to 10.14 per cent of low-redshift objects using observing Band 2, and up to 0.18 per cent of high-redshift objects using observing Band 1. When compared to the previousselection of <cit.> we recover similar objects in terms of their number and redshift distribution, but our selection represents a significant improvement in sophistication: performing full data level simulations and attempting to recover the line profiles, rather than simply calculating an estimated SNR from simulated intrinsic source properties. Our method also has the significant advantage of providing a full P(z) and detection significance B for each source, which may be coherently folded in to cosmological parameter estimation analyses and other analyses such as estimation of galaxy formation histories via the mass function.This allows a firm quantification to be made on the numbers of redshifts which will be available for continuum selected sources from SKA1-MID using only the radio data, giving insight into the extent of the reliance on cross-matched catalogues in other wavebands to obtain source redshifts. This should place current SKA cosmology forecasts on firmer footing and potentially allow for improvement. 
The performance of our redshift estimator could also inform the design of SKA data processing strategies, potentially allowing larger bandwidths and higher frequency resolutions, since the use of a continuum prior on source location obviates the need for blind finding of sources in extremely large three dimensional (spatial and spectral) data cubes.Here we have only considered the improvement for the first phase of the SKA (SKA1) but the full SKA (SKA2) should present even more opportunities. With the correct bandwidth, large and fast surveys with SKA2 could potentially become a redshift machine providing posterior P(z) for Euclid and LSST sources (the continuum prior information need not come from a radio survey), circumventing many of the systematics of photometric redshifts which may otherwise limit their cosmological constraints.§ AUTHOR CONTRIBUTIONSIH was responsible for the initial conception of the project. IH and ML designed the study, developed the methodology, performed the analysis, and wrote the manuscript. ML wrote the analysis code and ran the simulations. MLB contributed to the definition of the galaxy samples and to the initial development of the simulations. § ACKNOWLEDGMENTSWe would like to thank Adam Avison, Bruce Bassett, Anna Bonaldi, Phil Bull, Stefano Camera, Keith Grainge, Mario Santos and Joe Zuntz for helpful discussions and comments on the draft. We also thank Robert Braun and Phil Bull for providing us with the SKA sensitivity curves and Hiranya Peiris for allowing access to the Hypatia cluster at UCL. IH is supported by an ERC Starting Grant (grant no. 280127). ML acknowledges support from the SKA, NRF and AIMS. This work is partially supported by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 306478-CosmicDawn.31[Allison et al.(2012a)Allison, Curran, Emonts et al.]2012MNRAS.423.2601A Allison, J. R., Curran, S. J., Emonts, B. H. C., et al., 2012a, , 423, 2601, arXiv:1204.1391[Allison et al.(2012b)Allison, Sadler & Whiting]2012PASA...29..221A Allison, J. R., Sadler, E. M., Whiting, M. T., 2012b, , 29, 221, arXiv:1109.3539[Bernstein & Huterer(2010)]2010MNRAS.401.1399B Bernstein, G., Huterer, D., 2010, , 401, 1399, arXiv:0902.2782[Bonaldi et al.(2016)Bonaldi, Harrison, Camera & Brown]2016arXiv160103948B Bonaldi, A., Harrison, I., Camera, S., Brown, M. L., 2016, ArXiv e-prints, arXiv:1601.03948[Bull(2016)]2016ApJ...817...26B Bull, P., 2016, , 817, 26, arXiv:1509.07562[Bull et al.(2015)Bull, Ferreira, Patel & Santos]2015ApJ...803...21B Bull, P., Ferreira, P. G., Patel, P., Santos, M. G., 2015, , 803, 21, arXiv:1405.1452[Camera et al.(2015)Camera, Santos & Maartens]2015MNRAS.448.1035C Camera, S., Santos, M. G., Maartens, R., 2015, , 448, 1035, arXiv:1409.8286[Demetroullas & Brown(2016)]2016MNRAS.456.3100D Demetroullas, C., Brown, M. L., 2016, , 456, 3100, arXiv:1507.05977[Dewdney(2016)]skabaseline Dewdney, P., 2016, SKA1 System Baseline Design V2, Tech. Rep. SKA-TEL-SKO-0000002, SKAO[Duffy et al.(2012)Duffy, Meyer, Staveley-Smith et al.]2012MNRAS.426.3385D Duffy, A. R., Meyer, M. J., Staveley-Smith, L., et al., 2012, , 426, 3385, arXiv:1208.5592[Feroz et al.(2009)Feroz, Hobson & Bridges]2009MNRAS.398.1601F Feroz, F., Hobson, M. P., Bridges, M., 2009, , 398, 1601, arXiv:0809.3437[Feroz et al.(2013)Feroz, Hobson, Cameron & Pettitt]2013arXiv1306.2144F Feroz, F., Hobson, M. P., Cameron, E., Pettitt, A. 
N., 2013, ArXiv e-prints, arXiv:1306.2144[Harrison et al.(2016)Harrison, Camera, Zuntz & Brown]2016arXiv160103947H Harrison, I., Camera, S., Zuntz, J., Brown, M. L., 2016, ArXiv e-prints, arXiv:1601.03947[Hastings(1970)]Hastings1970 Hastings, W. K., 1970, Biometrika, 57, 1, 97[Holwerda et al.(2012)Holwerda, Blyth & Baker]2012IAUS..284..496H Holwerda, B. W., Blyth, S.-L., Baker, A. J., 2012, in The Spectral Energy Distribution of Galaxies - SED 2011, edited by Tuffs, R. J., Popescu, C. C., vol. 284 of IAU Symposium, 496–499, arXiv:1109.5605[Hoyle et al.(2015)Hoyle, Rau, Zitlau, Seitz & Weller]2015MNRAS.449.1275H Hoyle, B., Rau, M. M., Zitlau, R., Seitz, S., Weller, J., 2015, , 449, 1275, arXiv:1410.4696[Jeffreys(1998)]jeffreys Jeffreys, H., 1998, The Theory of Probability, OUP Oxford[Metropolis et al.(1953)Metropolis, Rosenbluth, Rosenbluth, Teller & Teller]Metropolis1953 Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E., 1953, The Journal of Chemical Physics, 21, 6, 1087[Obreschkow et al.(2009)Obreschkow, Klöckner, Heywood, Levrier & Rawlings]2009ApJ...703.1890O Obreschkow, D., Klöckner, H.-R., Heywood, I., Levrier, F., Rawlings, S., 2009, , 703, 1890, arXiv:0908.0983[Patel et al.(2010)Patel, Bacon, Beswick, Muxlow & Hoyle]2010MNRAS.401.2572P Patel, P., Bacon, D. J., Beswick, R. J., Muxlow, T. W. B., Hoyle, B., 2010, , 401, 2572, arXiv:0907.5156[Santos et al.(2015)Santos, Alonso, Bull, Silva & Yahya]2015aska.confE..21S Santos, M., Alonso, D., Bull, P., Silva, M. B., Yahya, S., 2015, Advancing Astrophysics with the Square Kilometre Array (AASKA14), 21, arXiv:1501.03990[Serra et al.(2015)Serra, Westmeier, Giese et al.]2015MNRAS.448.1922S Serra, P., Westmeier, T., Giese, N., et al., 2015, , 448, 1922, arXiv:1501.03906[Skilling(2006)]skilling2006 Skilling, J., 2006, Bayesian Anal., 1, 4, 833[Smolcic et al.(2017)Smolcic, Delvecchio, Zamorani et al.]2017arXiv170309719S Smolcic, V., Delvecchio, I., Zamorani, G., et al., 2017, ArXiv e-prints, arXiv:1703.09719[Springel et al.(2005)Springel, White, Jenkins et al.]2005Natur.435..629S Springel, V., White, S. D. M., Jenkins, A., et al., 2005, , 435, 629, arXiv:astro-ph/0504097[Thompson et al.(1986)Thompson, Moran & Swenson]1986isra.book.....T Thompson, A. R., Moran, J. M., Swenson, G. W., 1986, Interferometry and synthesis in radio astronomy[Trotta(2008)]trotta Trotta, R., 2008, Contemporary Physics, 49, 71, arXiv:0803.4089[Tunbridge et al.(2016)Tunbridge, Harrison & Brown]2016MNRAS.463.3339T Tunbridge, B., Harrison, I., Brown, M. L., 2016, , 463, 3339, arXiv:1607.02875[Westmeier et al.(2014)Westmeier, Jurek, Obreschkow, Koribalski & Staveley-Smith]2014MNRAS.438.1176W Westmeier, T., Jurek, R., Obreschkow, D., Koribalski, B. S., Staveley-Smith, L., 2014, , 438, 1176, arXiv:1311.5308[Wilman et al.(2008)Wilman, Miller, Jarvis et al.]wilman08 Wilman, R. J., Miller, L., Jarvis, M. J., et al., 2008, , 388, 1335, arXiv:0805.3413[Yahya et al.(2015)Yahya, Bull, Santos et al.]2015MNRAS.450.2251Y Yahya, S., Bull, P., Santos, M. G., et al., 2015, , 450, 2251, arXiv:1412.4700§ SKA1-MID NOISE PROFILES The mid-frequency dish array of phase 1 of the Square Kilometre Array (SKA1-MID) is currently still under design, meaning the exact noise properties of the telescope are still somewhat uncertain. For this work we make use of the most recent noise curves publicly available <cit.>, on the understanding they are likely to be representative of the true performance. 
We model the receiver temperatures (in Kelvin) for SKA1-MID Band 1 and Band 2 with frequency (ν in GHz) dependencies:T^ B1_ rcv = 11 + 3(ν - 0.35/1.05 - 0.35)and:T^ B2_ rcv = 8.2 + 0.7(ν - 0.95/1.75 - 0.95).To calculate the system temperature we also use the sky temperature:T_ sky =20 ( 0.408/ν)^2.75 + 2.73 + 288[ 0.005 + 0.1314 exp(8 × 10^ν - 22.23) ],spillover temperature:T_ spl = 4and the ground temperature:T_ gnd = 300,with T_ sys = T_ rcv + T_ sky + T_ spl. We also calculate the antenna efficiency as a function of frequency, from the dish diameter D_ dsh:η = η_0 - 70( c/D_ dshν× 10^9)^2 - 0.36( | ν - 1.6 |/24 - 1.6)^0.6giving the effective area for the combined number of N_ ant dishes:A_ eff = N_ antηπ 0.25 D_ dsh^2and the resultant System Equivalent Flux Density:SEFD = 2k_ BT_ sys/A_ eff.<ref> shows the resultant A_ eff / T_ sys from this recipe, with N_ ant = 190 and D_ dsh = 15m for the two SKA1-MID observing bands considered in the paper. These may then be used to calculate a noise rms:S_ rms = 260μJy(T_ sys/20K) (25,000m^2/A_ eff) (0.01MHz/δν)^1/2(1h/t_ p)^1/2where δν is the frequency channel width and t_ p the pointing time (which we assume to be 1.76 hours as in ).
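The recipe above maps directly onto code; a minimal sketch is given below. The aperture efficiency η_0 is not quoted in the text and is set to an assumed 0.92 here, and the atmospheric term of T_sky is reconstructed approximately from a garbled expression (it contributes less than about 1.5 K at these frequencies):

```python
import numpy as np

C_LIGHT = 2.99792458e8  # speed of light [m/s]

def t_sys(nu, band=2):
    """System temperature T_rcv + T_sky + T_spl in K at frequency nu (GHz)."""
    if band == 1:
        t_rcv = 11.0 + 3.0 * (nu - 0.35) / (1.05 - 0.35)
    else:
        t_rcv = 8.2 + 0.7 * (nu - 0.95) / (1.75 - 0.95)
    t_sky = (20.0 * (0.408 / nu) ** 2.75 + 2.73
             + 288.0 * (0.005 + 0.1314 * np.exp(nu - 22.23)))  # reconstructed
    return t_rcv + t_sky + 4.0  # T_spl = 4 K

def a_eff(nu, n_ant=190, d_dsh=15.0, eta0=0.92):
    """Combined effective area in m^2; eta0 is an assumed value."""
    eta = (eta0 - 70.0 * (C_LIGHT / (d_dsh * nu * 1e9)) ** 2
           - 0.36 * (np.abs(nu - 1.6) / (24.0 - 1.6)) ** 0.6)
    return n_ant * eta * np.pi * 0.25 * d_dsh ** 2

def s_rms(nu, band=2, delta_nu=0.01, t_p=1.76):
    """Noise rms in microJy for channel width delta_nu (MHz), pointing time t_p (h)."""
    return (260.0 * (t_sys(nu, band) / 20.0) * (25000.0 / a_eff(nu))
            * np.sqrt(0.01 / delta_nu) * np.sqrt(1.0 / t_p))
```

Evaluating s_rms on the full frequency grid of each band yields the frequency-dependent noise profile used in the simulations of <ref>.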
http://arxiv.org/abs/1704.08278v1
{ "authors": [ "Ian Harrison", "Michelle Lochner", "Michael L. Brown" ], "categories": [ "astro-ph.CO", "astro-ph.IM" ], "primary_category": "astro-ph.CO", "published": "20170426181836", "title": "Redshifts for galaxies in radio continuum surveys from Bayesian model fitting of HI 21-cm lines" }
http://arxiv.org/abs/1704.08717v3
{ "authors": [ "Xenia de la Ossa", "Magdalena Larfors", "Eirik E. Svanes" ], "categories": [ "hep-th", "math.DG" ], "primary_category": "hep-th", "published": "20170427185401", "title": "The infinitesimal moduli space of heterotic $G_2$ systems" }
Magnetorotational Dynamo Action in the Shearing Box

Justin Walker^1 (E-mail: [email protected]) and Stanislav Boldyrev^1,2
^1Department of Physics, University of Wisconsin-Madison, 1150 University Avenue, Madison, WI 53706, USA
^2Space Science Institute, Boulder, Colorado 80301, USA

Magnetic dynamo action caused by the magnetorotational instability is studied in the shearing-box approximation with no imposed net magnetic flux. Consistent with recent studies, the dynamo action is found to be sensitive to the aspect ratio of the box: it is much easier to obtain in tall boxes (stretched in the direction normal to the disk plane) than in long boxes (stretched in the radial direction). Our direct numerical simulations indicate that the dynamo is possible in both cases, given a large enough magnetic Reynolds number. To explain the relatively larger effort required to obtain the dynamo action in a long box, we propose that the turbulent eddies caused by the instability most efficiently fold and mix the magnetic field lines in the radial direction. As a result, in the long box the scale of the generated strong azimuthal (stream-wise directed) magnetic field is always comparable to the scale of the turbulent eddies. In contrast, in the tall box the azimuthal magnetic flux spreads in the vertical direction over a distance exceeding the scale of the turbulent eddies. As a result, different vertical sections of the tall box are permeated by large-scale nonzero azimuthal magnetic fluxes, facilitating the instability. In agreement with this picture, the cases when the dynamo is efficient are characterized by a strong intermittency of the local azimuthal magnetic fluxes.

MHD – plasmas – accretion discs – dynamo

§ INTRODUCTION

Magnetorotational instability (MRI), the instability of a differentially rotating, conducting fluid permeated by a weak magnetic field <cit.>, is believed to be the leading mechanism that renders astrophysical accretion flows turbulent and allows them to efficiently lose their angular momentum <cit.>. The local properties of the instability and the resulting turbulence may be studied in the shearing-box approximation, which models a small box in the midplane of a disc so that large-scale effects such as stratification and stream-line curvature can be neglected <cit.>. The dynamics studied in the shearing box are therefore expected to be universal, that is, applicable to other astrophysical and laboratory systems possessing rotating, shearing flows <cit.>.

The properties of magnetic turbulence driven by the MRI in the shearing box are highly nontrivial. In order to discuss them we use the following standard orthogonal coordinate frame. Assuming that the box is positioned inside a disc, we choose the vertical coordinate (z) along the direction of the angular velocity, the coordinate (x) along the radial direction, and the azimuthal coordinate (y) along the unperturbed rotating flow. When the box is permeated by a nonzero magnetic flux in the vertical direction, the linear instability develops, eventually leading to a strongly nonlinear steady-state turbulent regime. Although the resulting turbulence is still not fully understood, it has been recently established that it resembles the standard magnetohydrodynamic turbulence <cit.>.
The azimuthal magnetic field is concentrated at large scales and it plays the role of the guide field for small-scale fluctuations, whose energy spectrum and other statistical characteristic are close to that of homogeneous Alfvénic MHDturbulence <cit.>. A similar result has subsequently been obtained using hybrid (kinetic ions) plasma simulations <cit.>, which also found that sub-proton-scale MRI turbulence is consistent with the kinetic-Alfvén turbulence. When the net vertical magnetic flux is zero, strong transient amplification is still possible if the net azimuthal magnetic flux is nonzero <cit.>.In this work we concentrate on the case when the net magnetic flux through any cross-section of the box is zero. In this zero-net-flux case, a linear instability is not possible, but the flow may become unstable for a finite initial perturbation (subcritical instability) <cit.>. Our recent high-resolution numerical simulations <cit.> concentrated on the case of unit magnetic Prandtl number(Pm) and used the “long" shearing box (L_x:L_y:L_z=2:4:1). They did not find dynamo action for the magnetic Reynolds numbers several times larger than those required for the dynamo action in a non-shearing, non-rotating box. This observation cast doubt on the existence of MRI dynamo action for Pm≤ 1, in a stark contrast with the non-shearing, non-rotating case where the dynamo action is expected to exist for any given Pm as long as the magnetic Reynolds number is large enough <cit.>. A different outcome of the shearing-box dynamo simulations was however reported in <cit.> who used the “tall" shearing boxes, that is, boxes that vertical sizes exceed the radial sizes. Quite interestingly, they observed that the critical Prandtl number for dynamo action[We use the term “dynamo action” to refer to the self-sustained turbulence that results from the subcritical MRI instability that, in turn, reinforces the magnetic field.] is reduced as long as L_z/L_x≳ 2.5.In order to understand the different outcomes of these studies, we have performed a series of direct numerical simulations of incompressible MHD dynamo action for varying aspect ratios of the shearing box. We found that, in agreement with <cit.> and <cit.>, the dynamo action is sensitive to the aspect ratio of the box. In our case of Pm=1, tall boxes (stretched in the z-direction) rapidly exhibit dynamo action. To explain these results we propose that the turbulent eddies caused by the dynamo, efficiently fold the magnetic field lines in the radial (x) direction. As a consequence, inthe long box the x-scale of the generatedB_y component of the magnetic field is always comparable to the scale of the turbulent eddies. In contrast, in the tall box the flux of B_y can spread in the vertical direction over the distances exceeding the scales of the turbulent eddies, which are constrained by the short x dimension of the box. The vertical mixing of the B_y field is thus suppressed in tall boxes. As a result, different vertical sections of the tall box are permeated by large-scale nonzero fluxes of the azimuthal field B_y, leading to the instability.This phenomenological picture motivated us to review one of the results from our previous work <cit.>. In this work, the dynamo action was not observed in the long box (L_x:L_y:L_z=2:4:1) for the long time during which we integrated the solution. The initial large-scale fluctuations kept decaying for more than 200 shearing times. 
During this decay, the scale of the turbulence kept decreasing as well, as to maintain the balance between the linear and turbulent shear rates. For the large Reynolds number that we used, the turbulence would decay to progressively smaller scales. In this case, however, the scale of the turbulent fluctuations should eventually become sufficiently smaller than the vertical extent of the box L_z, so that according to our phenomenological picture, the dynamo should become possible. To check this hypothesis, we significantly extended the running time of simulations of <cit.>, and after about 600 shearing times did observe the possible onset of the dynamo action. This may reconcile the available numerical results. Based on our findings, we suggest that the dynamo action is always possible, no matter what the aspect ratio of the box is. However, the long boxes require significantly larger Reynolds number and (assuming large scale of the initial fluctuations) significantly longer running time in order to observe the dynamo action. We have also established that the dynamo action leads to a quite intermittent distribution of the azimuthal magnetic fluxes, in agreement with the proposed phenomenological picture.§ NUMERICAL SETUPThe shearing-box approximation captures the local small-scale dynamics of an accretion disc <cit.>. A small box centered at a radial distance r_0 and orbiting with the angular velocity Ω(r) has a shearing rate q ≡ -( d lnΩ / dln r)_r_0. For the Keplerian orbital flow, we have q=3/2, and we denote Ω_0=Ω(r_0). At the center of the box r_0 the gravitational force balances the centrifugal force. Assuming an incompressible flow, one writes the fluid momentum equation and the induction equation for the magnetic field in the vicinity of r_0, a detailed discussion of the derivation is given in <cit.>. The local orthogonal coordinate frame in the box is chosen such that the direction of z coincides with the direction of Ω_0, x with the radial direction, and y with the orbiting direction of the flow. The equilibrium background flow in this box then corresponds to the constant linear shear v⃗_0(x)=-qΩ_0 xŷ. By representing the velocity field as u⃗ = v⃗ + v⃗_0, we formally eliminate the background shear. The shearing-box equations for the velocity and magnetic fields then take the form:[Because the mean field is zero in the setups we study here, we will refer only to the fluctuating part of the field b⃗.] ∂_t v⃗= qΩ_0 x∂_yv⃗ - (v⃗·∇⃗)v⃗ -∇ P + b⃗·∇b⃗ + νv⃗- -2Ω⃗_0 ×v⃗ + qΩ_0 v_x , ∂_t b⃗ =qΩ_0 x∂_yb⃗+ ∇⃗× (v⃗×b⃗) + ηb⃗ - qΩ_0 b_x ,In this system, the magnetic field is measured in the Alfvénic units, v_A = b/√(4πρ_0), the density ρ_0 is constant, and we use the dimensionless variables: the time is normalized to t_0 = Ω_0^-1, the spatial variables to L^*, and the velocity to Ω_0 L^*, where L^*≡min(L_x,L_y,L_z). The dimensionless pressure P is not an independent field, but rather is defined as to ensure the incompressibility of the flow. In our numerical simulations we chose Pm= ν/η = 1.We will solve Equations (<ref>)-(<ref>) in a rectangular box (L_x, L_y, L_z), assuming periodic boundary conditions in the z- and y-directions, and shear-periodic boundary conditions in the x-direction <cit.>. It is useful to discuss the conservation laws of Equations (<ref>)-(<ref>). In the non-rotating system, the MHD equations conserve the quadratic integrals of energy, cross helicity, and magnetic helicity if the external energy supply and energy dissipation are absent <cit.>. 
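As an aside before discussing how these invariants are modified, we note that they are straightforward to monitor in a pseudo-spectral setting. A minimal sketch of the FFT-based curl needed to form ω⃗, j⃗ and ⟨b⃗·A⃗⟩ from snapshot fields, ignoring for simplicity the time-dependent remap of the radial wavenumber in shearing coordinates, is:

```python
import numpy as np

def spectral_curl(fx, fy, fz, lx, ly, lz):
    """Curl of a periodic vector field on a uniform grid via FFTs, evaluated
    the way a pseudo-spectral code like Snoopy would (shear remap neglected)."""
    nx, ny, nz = fx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=lx / nx)[:, None, None]
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=ly / ny)[None, :, None]
    kz = 2j * np.pi * np.fft.fftfreq(nz, d=lz / nz)[None, None, :]
    Fx, Fy, Fz = (np.fft.fftn(f) for f in (fx, fy, fz))
    cx = np.real(np.fft.ifftn(ky * Fz - kz * Fy))
    cy = np.real(np.fft.ifftn(kz * Fx - kx * Fz))
    cz = np.real(np.fft.ifftn(kx * Fy - ky * Fx))
    return cx, cy, cz

# Magnetic helicity <b.A> and its dissipation -2*eta*<j^2> can then be tracked
# by applying spectral_curl to A (giving b) and to b (giving j).
```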
In the MRI case, energy is injected into the system by the instability. From Eqs. (<ref>) and (<ref>) one then derives for the energy <cit.>:

d/dt⟨ v^2/2 + b^2/2 ⟩ = α̃ - ν⟨ω⃗^2⟩ - η⟨j⃗^2⟩ ,

where α̃ = qΩ_0⟨ v_xv_y - b_xb_y⟩ is the energy injection rate, and the angular brackets denote an average over the box. The conservation law of cross helicity has to be modified in the shearing box as:

d/dt⟨v⃗·b⃗ + (2-q)Ω⃗_0·A⃗⟩ = -(ν +η)⟨ω⃗·j⃗⟩,

and the conservation law for magnetic helicity remains unchanged:

d/dt⟨b⃗·A⃗⟩ = -2η⟨j⃗^2⟩.

In these equations we introduce the vorticity ω⃗=∇×v⃗, the current density j⃗=∇×b⃗, and the vector potential A⃗, where b⃗=∇×A⃗. In the presence of the magnetorotational instability and in the absence of dissipation, the energy grows while the cross helicity and magnetic helicity do not.

The system (<ref>)-(<ref>) is solved using the pseudo-spectral, incompressible version of the code Snoopy <cit.>. Snoopy includes physical dissipation; it uses a Fourier transform (FFTW 3 library) and a low-storage third-order Runge-Kutta (RK3) scheme for the time evolution of all the fields. To employ the shear-periodic boundary conditions, Snoopy remaps the Fourier components of the fields periodically every Δ t_remap=|L_y/(q Ω_0 L_x)| (see the mathematical details of the procedure in <cit.>).

In our work, we consider three simulation boxes with the following dimensions: (L_x, L_y, L_z) ∈{(1,4,4), (1,4,16), (2,4,1)}. The numerical resolution in the first two cases is 128 points per unit length of L_x and L_z, and 64 points per unit length of L_y. In the third case, there are 512 points per unit length of L_x and L_z, and 256 points per unit length of L_y. Thus, the first and last cases have the same number of points in the vertical direction and are comparable after rescaling the unit of length. The first two cases have a viscosity of ν = 1/5000. The third case has a viscosity of ν = 1/45000. A summary of the cases studied is given in Table <ref>. Note that the energy and transport parameter reported are volume densities of these quantities. The transport parameter is defined in the standard way: α = ⟨ v_xv_y - b_xb_y⟩/(qΩ_0 L_z)^2. The values shown in the table are averages over some time intervals in the steady states (cases I, II, and IIIb), and over a short time interval in the decaying case IIIa, where the energy is approximately constant. Cases I and II were initialized with B⃗_0 = (0,0,B_0cos(2π x/L_x)) and a large-scale, white-noise velocity configuration, and then evolved until a steady state was reached. Case III is the continuation of the zero-net-flux simulations discussed in our paper <cit.>.
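For concreteness, the box-averaged diagnostics just defined, together with the flux-intermittency measure used in the Results section below, reduce to a few lines of array arithmetic. The function names and array layout here are our own illustrative conventions, not Snoopy's:

```python
import numpy as np

def diagnostics(v, b, Lx, Ly, Lz, q=1.5, Omega0=1.0):
    """Energy injection rate, transport parameter, and remap interval.

    v and b are arrays shaped (3, Nx, Ny, Nz).  Returns
    alpha_tilde = q*Omega0*<vx*vy - bx*by>,
    alpha = <vx*vy - bx*by>/(q*Omega0*Lz)**2, and
    Dt_remap = |Ly/(q*Omega0*Lx)|, as defined in the text.
    """
    stress = np.mean(v[0] * v[1] - b[0] * b[1])   # Reynolds minus Maxwell stress
    return (q * Omega0 * stress,
            stress / (q * Omega0 * Lz) ** 2,
            abs(Ly / (q * Omega0 * Lx)))

def flux_histogram(by_slice, Lx, Lz, tile=1.0 / 16.0, bins=50):
    """Azimuthal fluxes Phi_y through tile x tile squares of a y = const cut.

    by_slice is b_y on one cross-section, shaped (Nx, Nz); this is the
    flux-intermittency diagnostic whose histograms are discussed below.
    """
    Nx, Nz = by_slice.shape
    dx, dz = Lx / Nx, Lz / Nz
    nx, nz = int(round(tile / dx)), int(round(tile / dz))   # points per tile
    mx, mz = Nx // nx, Nz // nz                             # tiles per direction
    tiles = by_slice[: mx * nx, : mz * nz].reshape(mx, nx, mz, nz)
    fluxes = tiles.sum(axis=(1, 3)) * dx * dz               # Phi_y per square
    return np.histogram(fluxes.ravel(), bins=bins)
```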
Given some initial perturbation of the magnetic field, the instability sets in, which leads to stretching and folding of the magnetic field lines in the x-y plane. The shearing flow then increases the strength of the b_y field. An important observation, however, is that the x-scale of the resulting folds of the b_y field is always on the x-scale of the fluctuating velocity field. In the box extended in the x-direction (case III), this means that both the b_y and the v_x fields are still concentrated at comparable scales. This is seen in the spectra shown in Fig. <ref>.

The situation is qualitatively different in a box extended in the vertical direction (case II). It turns out that in this case the flux of b_y can spread in the vertical direction over scales much larger than the scales of the velocity field. This is seen in Fig. <ref>, where the v_z component of the velocity field is concentrated at a scale comparable to the small horizontal box size L_x, while the b_y magnetic field is concentrated at the larger scale L_z. The structure of the b_y field is shown in Fig. <ref>, which also reveals a separation of the regions of positive and negative azimuthal magnetic flux in the vertical direction. This is in contrast to Fig. <ref>, where b_y is more homogeneous, due to large-scale turbulent mixing. Fig. <ref> shows case I, an intermediate between cases II and III.

In order to quantify this flux intermittency and to compare it across the cases, we subdivided y=const cross-sections of the simulation boxes into small 1/16× 1/16 squares and calculated the azimuthal magnetic fluxes through each of these squares. We then plotted histograms of the obtained fluxes Φ_y. The results are shown in Fig. <ref>. The flux intermittency is smallest in case IIIa, larger in case I, and largest in the tallest box of case II. The increased flux intermittency implies more favorable conditions for the dynamo action.

We are now in a position to explain why our case III did not exhibit dynamo action until the turbulence decayed to very small amplitudes (see Fig. <ref>). The point is that when the energy of the fluctuations declines, so does the scale of the turbulence (see Fig. <ref>). In our work <cit.> this is explained by the necessity to maintain the balance between the linear and nonlinear shearing rates of the turbulence. When the x-scale of the turbulence was relatively large compared to the box size L_z, the dynamo did not operate. When the x-scale of the turbulence decreased so as to become smaller than the L_z size of the box, in analogy with cases II and I, the dynamo action became possible. Consistent with our discussion above, the intermittency of the magnetic flux also increased in this regime (see Fig. <ref>).

Although the dynamo action seems to operate in case IIIb, we should exercise caution in claiming that a true steady state is observed. As pointed out in <cit.>, it may be possible in such a case that the system is described by a supertransient state and may ultimately decay if integrated long enough.[Although, with Rm=45000, the expected lifetime is 𝒪(10^32).] Also, unlike the other dynamo cases in this study, IIIb has developed, in addition to the small-scale fluctuations, a large-scale v_y(x) zonal flow resulting from geostrophic balance <cit.> (see Fig. <ref>).

Comparing the spectra of case III from early times in Fig. <ref> with those from late times in Fig.
<ref>, we see that this zonal flow is long-lived, or at least steadily reinforced. The possibility of such a flow may be related to the long radial extent of the box. This flow, however, is practically uncorrelated with the small-scale fluctuations; for instance, it does not contribute to the transport coefficient α_v in Fig. (<ref>).

§ CONCLUSIONS

We have found in incompressible MHD simulations of the shearing box that the properties of the zero-net-flux dynamo action depend intimately on the relation between the scale of the turbulence and the vertical size of the box. For a given set of physical parameters, transport is sustained and dynamo action rapidly sets in in systems with a high L_z/L_x aspect ratio, while in the opposite case of small aspect ratio, the dynamo action is not observed until the scale of the turbulence significantly decreases. This is despite the size of the box being much greater than the dissipative scale of the turbulence in both cases.

Based on our results we suggest the following explanation for this phenomenon. We propose that an inherent property of the shearing-box turbulence is its tendency to build up local regions of nonzero b_y flux, separated in the vertical direction z. This may be related to the fact that the configurations b⃗=(0, b_y(z),0), v⃗=(v_x(z,t), v_y(z,t), 0) are exact solutions of the ideal equations <cit.>. Those sub-regions with nonzero b_y fluxes are MRI-unstable. The resulting turbulent eddies, on the other hand, tend to mix and homogenize the b_y field. The size of the turbulent eddies is always comparable to the size of the folds of the b_y field in the x-direction. Indeed, such eddies are caused by the MRI instability of the b_y field. In "long" boxes, the eddies caused by an initially unstable large-scale perturbation have a typical length-scale larger than the vertical extent of the box, and they are able to effectively mix the field in that direction, thus reducing the b_y fluxes and effectively reducing the rate of the instability as compared to the rate of turbulent energy dissipation. The resulting turbulence will therefore decay until the scale of the turbulence becomes smaller than the vertical size of the box, at which point the vertical flux separation enhances the efficiency of the instability. In "tall" boxes, on the other hand, the eddies are always smaller than the vertical extent of the box. This allows oppositely directed b_y fluxes to build up in vertically separated regions, allowing the instability to win.

This poses a broader question of to what extent the shearing-box results describe natural MRI turbulence. The shearing-box model suggests that the scale of the MRI-dynamo-driven turbulence should be smaller than the thickness of the disc and that the resulting transport coefficients are very small. We notice, however, that the vertical limitations in real accretion disks are never as severe as the periodic boundary conditions of the shearing-box simulations. Allowing the flux of the b_y magnetic field to partially escape the box in the vertical direction, for instance, will facilitate the dynamo action rather than impede it. This indicates that the shearing-box model with periodic boundary conditions may not allow one, at least at the present level of understanding, to realistically simulate the scale and the transport properties of the MRI-dynamo-driven turbulence at low magnetic Prandtl numbers.
What seems to be a robust feature of the MRI-dynamo-driven turbulence, however, is its tendency to separate the regions of positive and negative b_y fluxes, leading to highly intermittent azimuthal magnetic flux patches. In a stratified disk, these magnetic flux tubes can then buoy above the surface, forming an intermittently magnetized corona <cit.>. Non-periodic large-scale conditions (e.g., stratification, flux escape, etc.) need to be incorporated in the shearing-box model in order to provide a more realistic description.

§ ACKNOWLEDGMENTS

We thank Fausto Cattaneo and Geoffroy Lesur for useful comments. JW is supported by the Wisconsin Alumni Research Foundation at the University of Wisconsin-Madison. SB is partly supported by the National Science Foundation under the grant NSF AGS-1261659 and by the Vilas Associates Award from the University of Wisconsin-Madison. Simulations were performed at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin under the NSF-Teragrid Project TG-PHY110016.
http://arxiv.org/abs/1704.08636v1
{ "authors": [ "Justin Walker", "Stanislav Boldyrev" ], "categories": [ "astro-ph.SR", "astro-ph.HE", "physics.flu-dyn", "physics.plasm-ph", "physics.space-ph" ], "primary_category": "astro-ph.SR", "published": "20170427160227", "title": "Magnetorotational Dynamo Action in the Shearing Box" }
^1 Centre of Low Temperature Physics, Institute of Experimental Physics, Slovak Academy of Sciences, and P. J. Šafárik University, SK-04001 Košice, Slovakia
^2 Univ. Grenoble Alpes, Inst. NEEL, F-38042 Grenoble, France
^3 SPSMS, UMR-E9001, CEA-INAC/UJF-Grenoble 1, 17 Rue des martyrs, 38054 Grenoble, France
^4 Institute of Electrical Engineering, Slovak Academy of Sciences, Dúbravská cesta 9, 84104 Bratislava, Slovakia
^5 Department of Physics, Drexel University, 3141 Chestnut St., Philadelphia, PA 19104, USA
^6 IFW Dresden, P.O. Box 270116, 01171 Dresden, Germany
^7 Helmholtz-Zentrum Berlin für Materialien und Energie, Albert-Einstein-Strasse 15, D-12489 Berlin, Germany
^8 Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland

We present a detailed study of the phase diagram of copper intercalated TiSe_2 single crystals, combining local Hall-probe magnetometry, tunnel diode oscillator technique (TDO), and specific heat and angle-resolved photoemission spectroscopy measurements. A series of Cu_xTiSe_2 samples from three different sources, with various copper contents x and superconducting critical temperatures T_c, have been investigated. We first show that the vortex penetration mechanism is dominated by geometrical barriers, enabling a precise determination of the lower critical field, H_c1. We then show that the temperature dependence of the superfluid density deduced from magnetic measurements (both H_c1 and TDO techniques) clearly suggests the existence of a small energy gap in the system, with a coupling strength 2Δ_s ∼ [2.4-2.8]k_BT_c, regardless of the copper content, in puzzling contradiction with specific heat measurements, which can be well described by one single large gap 2Δ_l ∼ [3.7-3.9]k_BT_c. Finally, our measurements reveal a non-trivial doping dependence of the condensation energy, which remains to be understood.

Magnetic and thermodynamic properties of Cu_xTiSe_2 single crystals
Z. Pribulová,^1 Z. Medvecká,^1 J. Kačmarčík,^1 V. Komanický,^1 T. Klein,^2 P. Rodière,^2 F. Levy-Bertrand,^2 B. Michon,^2 C. Marcenat,^3 P. Husaníková,^4 V. Cambel,^4 J. Šoltýs,^4 G. Karapetrov,^5 S. Borisenko,^6 D. Evtushinsky,^7 H. Berger,^8 and P. Samuely^1
December 30, 2023

§ INTRODUCTION

The discovery of a charge ordered phase in underdoped cuprates <cit.> recently invigorated the debate on the origin of the coupling mechanism in high T_c superconductors, which remains one of the major unsolved questions in solid state physics. As in many unconventional systems, the superconducting state develops in the vicinity of other electronic and/or magnetic instabilities, and the interplay between superconductivity and those competing phases remains unclear. Dichalcogenides are then particularly interesting as they offer the opportunity to study this interplay in a much simpler system. Indeed, no competing magnetic instability develops in those systems, but superconductivity still coexists with a charge density wave (CDW) instability. This interplay was first studied in detail in 2H-NbSe_2 <cit.> and, more recently, 1T-TiSe_2 became the focus of considerable interest as a (commensurate) CDW driven by an exciton-phonon mechanism <cit.> develops below ∼ 200 K.
This CDW is progressively suppressed upon Cu intercalation, and recent x-ray diffraction measurements <cit.> suggested that domain walls - associated with some (slight) incommensuration - appear for the doping contents over which a superconducting dome develops <cit.>. The influence of those domain walls remains to be understood, but the concomitant onset of superconductivity and incommensurability suggests that they may play a role in the formation of the superconducting state. Moreover, despite its simple electronic structure <cit.>, the nature of the superconducting gap(s) remains unclear in Cu_xTiSe_2. On one hand, thermal conductivity measurements <cit.> suggested that this system is a conventional single-gap s-wave superconductor, in agreement with our recent specific-heat measurements <cit.>. On the other hand, μSR measurements <cit.> displayed an anomalous temperature dependence of the London penetration depth, indicating the presence of two superconducting gaps in underdoped Cu_xTiSe_2, where coexistence between CDW and superconductivity was anticipated. Recently, our local magnetic measurements revealed the existence of an unexpected transverse Meissner effect, clearly showing that vortices remain locked along the ab-planes in tilted magnetic fields <cit.>, hence indicating the presence of an unexpected - and still unexplained - strong modulation of the pinning energy along the c-direction, which might be related to a modulation of the gap/order parameter.

In order to shed light on the nature of the superconducting properties, we performed a detailed study of the phase diagram of copper-intercalated TiSe_2 single crystals, combining local Hall-probe magnetometry (HPM), the tunnel diode oscillator (TDO) technique, and specific-heat measurements. We present a quantitative analysis of both the temperature and doping dependencies of the critical fields (H_c1 and H_c2), and hence of the corresponding penetration depth λ and coherence length ξ, as well as the doping dependence of the superconducting gap(s). All the measurements demonstrate the very good quality of the single crystals, which all display well defined specific heat anomalies and very small pinning. We show that the vortex penetration mechanism is dominated by geometrical barriers, which enables a reliable determination of H_c1. Those measurements, however, revealed a puzzling discrepancy between thermodynamic and magnetic properties. Indeed, whereas the former indicate the presence of one single large gap 2Δ_l ∼ [3.7-3.9]k_BT_c, the temperature dependence of the superfluid density deduced from magnetic measurements (both HPM and TDO) is driven by a small gap 2Δ_s ∼ [2.4-2.8]k_BT_c at low temperatures. Finally, we show that the condensation energy density, calculated extracting λ from H_c1 and ξ from H_c2 measurements, is consistent with previous measurements of the heat capacity; however, its dependence on T_c is found to be nontrivial.

§ SAMPLE PREPARATION AND EXPERIMENTS

Single crystals were prepared via the iodine gas transport method <cit.>. Energy dispersive x-ray spectroscopy (EDS) analysis was performed to determine the copper content in the samples. The critical temperature T_c of each sample, determined from the specific heat measurements, is displayed in the inset of Fig. 1 together with the overall phase diagram previously suggested by Morosan et al. <cit.>. Samples A, B, C and D were grown in Karapetrov's group, samples E and F by Berger, and sample G by Levy-Bertrand and Michon.
This latter sample is optimally doped, with the highest critical temperature; samples B, C and D are underdoped, while samples A, E, and F are overdoped. The large collection of crystals hence made it possible to study the superconducting properties over a large part of the superconducting dome. The characteristic parameters deduced from our work are summarized in Table I.

Angle-resolved photoemission spectroscopy (ARPES) measurements were performed using the "1-cubed" station at BESSY (Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung) on a sample from the series grown by Berger. The doping level of the inspected sample, determined from the Fermi surface area, is 0.07. From the measurements we infer the band dispersion and the Fermi surface shape (Fig. 2). The Fermi surface consists of approximately elliptical electron-like sheets centered around the M points. The observed ratio of the elliptical axes is about 2.5, and the depth of the band is about 120 meV. The measurements were performed with photon energies of 80 eV (Fig. 2b) and 110 eV (Fig. 2c). In both cases the sample temperature was 7 K. The orientation of the analyzer slit is given in Fig. 2(a). Although TiSe_2 is a layered compound, the Fermi surface, according to the band structure calculations, is substantially three-dimensional. In the ARPES measurements the large degree of three-dimensionality is confirmed by the smearing observed in the spectra. Both from theory and from experiment we estimate that the observed (maximal) depth of the band and the size of the Fermi surface are effectively halved by the presence of the interlayer (k_z) dispersion. Uncertainty in this parameter is the main source of possible errors in the calculations based on the ARPES data.

The local magnetic field has been measured by placing the samples on top of high-sensitivity (∼ 1 kΩ/T) Hall sensors patterned in epitaxial GaAs/AlGaAs heterostructures, forming 2D quantum wells. The magnetic field H_a was applied perpendicularly to the sample basal planes (ab). Hall probe arrays with 10×10 μm^2 active area and pitch ranging from 35 to 25 μm have been used to determine the field distribution over a length span of ∼ 300 μm. Depending on the sample dimensions, the crystal was shifted several times along the sensor line and a partial profile was recorded for every position. The complete profile has then been reconstructed by superimposing all partial measurements. Figure 3 displays, as an example, the magnetic field dependence of the local field, B_z, measured on a probe located close to the center of the sample (see discussion below) for the indicated temperatures, in a magnetic field perpendicular to the sample planes. In the Meissner state, no magnetic field penetrates into the crystal, but even a minute distance between the sample and the probe gives rise to a small initial slope, as indicated in Fig. 3. This contribution has been removed prior to any further data treatment. The number of vortices in the sensor area - and, correspondingly, the local magnetic field B_z detected by the probe - suddenly starts to grow when the applied field, H_a, reaches the first penetration field, H_p. Finally, some flux remains trapped in the sample when H_a is turned back to zero, leading to a finite remanent B value. This remanent field indicates the presence of some bulk pinning. Taking B ∼ 5 G over a sample width ∼ 100 μm, one obtains a very small critical current on the order of 500 A/cm^2, highlighting the very good quality of the samples.
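The slope-removal step described above amounts to a linear fit in the low-field Meissner region; a minimal sketch (the number of points entering the fit is an illustrative choice, not necessarily the one used in the actual data treatment):

```python
import numpy as np

def remove_meissner_slope(Ha, Bz, n_fit=10):
    """Fit a straight line to the first n_fit points of Bz(Ha), where the
    sample is still fully in the Meissner state, and subtract this
    probe-to-sample-distance background from the whole curve."""
    slope, intercept = np.polyfit(Ha[:n_fit], Bz[:n_fit], 1)
    return np.asarray(Bz) - (slope * np.asarray(Ha) + intercept)
```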
The onset of the field penetration is very sharp, and the presence of a small critical current does not call into question the determination of H_p. Note that an anomalous transverse Meissner effect has been observed for tilted magnetic fields in samples C, G and A <cit.> (labeled sample 1, 2 and 3, respectively).

In the TDO measurements, the samples were attached to the end of a sapphire rod which was introduced in a coil of inductance L. Due to the mutual inductance between the sample and the coil, the resonant circuit of the LC oscillator (where L represents an inductor and C a capacitor) driven by the tunnel diode changes with the variation of the magnetic state of the sample. The variation of the magnetic penetration depth induces a change in L and hence a shift of the resonant frequency, δ f(T)=f(T)-f(T_min), of the LC oscillating circuit (14 MHz). This shift, renormalized to the one corresponding to the extraction of the sample from the coil (Δ f_0), is then equal to the volume fraction (δ V/V) of the sample which is penetrated by the field. For H∥c, δ V is related to the in-plane penetration depth λ_ab through some calibration constant that depends on the sample geometry. However, this constant can be altered by edge roughness effects (see discussion in <cit.>), introducing a wrong temperature dependence of the magnetic penetration depth. To avoid this, we have hence decided to perform all TDO measurements with H∥ab. In the following we show that even in this configuration the magnetic penetration depth probed is characteristic of λ_ab. Indeed, the surfaces parallel to the ab-planes are much flatter, and δ V/V is, in this case, directly given by δ V/V ∼ 2(λ_c/w+λ_ab/d) ∼ 2/d×[λ_ab+(d/w)λ_c] without any geometrical correction (λ_c being the penetration depth parallel to the c-axis, and d and w the sample thickness and width, respectively). Since d/w ≪ 1, λ_ab+(d/w)λ_c ∼λ_ab in this weakly anisotropic system <cit.>. A typical temperature dependence of the frequency shift in the TDO measurements up to T_c is displayed in the inset of Fig. 6, showing a very sharp decrease of Δ f for T<T_c, highlighting the high quality of the measured samples.

Finally, specific-heat measurements have been performed using an ac technique, as described elsewhere <cit.>. The ac-calorimetry technique consists of applying a periodically modulated power and measuring the resulting time-dependent temperature response. In our set-up, heating was provided by an optical fiber, and the temperature of the sample was recorded by a thermocouple; a precise in situ calibration of the thermocouples in magnetic field was included in the data treatment. We performed measurements at temperatures down to 0.7 K and in magnetic fields up to 2 T. In this paper, only the measurements with the magnetic field oriented in the c direction are considered. The electronic contribution to the specific heat, Δ C_p/T=[C_p(T,H=0)-C_p(T,H>H_c2)]/T, together with the theoretical dependence for 2Δ/k_BT_c ∼ 3.7, is displayed in Fig. 1 for sample E, as an example. Very similar results were obtained in sample F (not shown here). The specific-heat anomaly at T_c is very well resolved in all samples, once again attesting to their high quality. For all samples Δ T_c/T_c is smaller than 0.08, Δ T_c being the transition width calculated between 10% and 90% of the specific heat anomaly.
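In this H∥ab geometry the conversion from the normalized frequency shift to the penetration depth reduces to the relation above. A minimal sketch, assuming the d/w ≪ 1 limit so that only λ_ab survives (function name is ours):

```python
def lambda_ab_from_tdo(df, df0, d):
    """Convert the TDO frequency shift into lambda_ab (H || ab).

    Uses delta f/Delta f_0 = delta V/V ~ (2/d)*lambda_ab; df and df0 in the
    same units, thickness d in metres.  Only changes of lambda relative to
    T_min are meaningful, since f(T_min) is the reference of the shift.
    """
    return 0.5 * d * df / df0
```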
For the best sample, sample G, it is as small as 0.025. The specific-heat properties of samples A, B, C, and D were previously investigated in detail in <cit.> (same sample labeling); the specific heat of sample G was presented in <cit.>.

§ RESULTS AND DISCUSSION

§.§ Evidence for geometrical barriers in the vortex penetration mechanism

Figure 4a displays the induced magnetic field B_z as a function of the applied field H_a in sample E (as an example) for several different Hall-probe positions [see sketch in the inset of Fig. 4b]. The spatial profile of the induced magnetic field can be reconstructed by taking the B_z values for each Hall probe at a given H_a value [main panel of Fig. 4b]. For small H_a values, the external field is shielded from all of the probes located below the sample (HP4 to HP14) and B = 0 (see the lowest (orange) profile for H_a = 10 G). As H_a exceeds some critical value (H_p∼ 20 G), B starts to increase more or less abruptly, giving rise to a dome-like magnetic field profile (green and gray profiles). This profile is characteristic of low-pinning materials <cit.> in the situation when the Meissner currents guide the vortices to the center of the sample. However, it is worth mentioning (see discussion below) that a partial penetration of the field is observed on HP4 and HP14 (edges on both sides of the sample) already for H_a∼ 13 G, i.e. for H_a ≪ H_p. Note also that the profile is slightly shifted towards the right side of the sample and the center of the dome does not match the center of the sample. This is due to the nonuniform distribution of the Meissner current density across the sample, related to the slight thickness variation (the right side of the sample being slightly thinner).

In low-pinning samples, the penetration process is determined by two main barriers: the Bean-Livingston barrier due to the attraction of penetrating vortices to the sample surface <cit.>, and the geometrical barrier related to the nonelliptical shape of the plateletlike sample <cit.>. In the former case, B=0 in the whole sample for H_a<H_p, as the field penetrates only over a distance on the order of λ (see discussion in Ref. <cit.>). On the contrary, in the presence of geometrical barriers, the magnetic field first penetrates partially through the sample corners, creating tilted vortices stuck at the edges. These partial field penetration regions expand from the corners both in the z direction (perpendicular to the sample surface) and towards the sample center. Vortices finally jump to the center of the sample as the top and bottom parts meet at the equatorial point (z=0) for H_a=H_p. As shown in Fig. 4a, this partial penetration in the sample corners is clearly observed in our crystals, as a finite B value is measured on probe HP4 (and HP14, not shown here) for H_a values significantly smaller than for the other probes, hence clearly indicating that the penetration process is dominated by geometrical barriers (see Refs. <cit.> and <cit.> for a detailed analysis of the field dependence of the profiles in the framework of the geometrical barrier theory <cit.>).

§.§ Gap values

In the presence of geometrical barriers, H_p is directly proportional to H_c1, H_p=α H_c1, where α is a geometrical factor depending on the sample thickness-to-width ratio <cit.>. Neglecting the small temperature dependence of κ = λ/ξ, one has:

H_p(T)/H_p(0) ≈ λ^2(0)/λ^2(T) = 1 + 2∫_Δ(T)^∞ (∂ f/∂ E) E/√(E^2-Δ^2(T)) dE,

where f is the Fermi function and Δ the superconducting gap.
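The integral above is easy to evaluate numerically. Below is a sketch for a single s-wave gap; the interpolation used for Δ(T) is a common BCS-like approximation and an assumption on our part, not necessarily the form used in the fits of this paper:

```python
import numpy as np

def superfluid_density(t, two_delta_over_kTc=3.7, n_grid=4000):
    """lambda^2(0)/lambda^2(T) for a single s-wave gap; t = T/Tc.

    Energies are in units of k_B*Tc.  Delta(T) uses the common
    interpolation Delta(0)*tanh(1.82*(1.018*(1/t - 1))**0.51).
    """
    if t >= 1.0:
        return 0.0
    delta = 0.5 * two_delta_over_kTc * np.tanh(
        1.82 * (1.018 * (1.0 / t - 1.0)) ** 0.51)
    # substitute E = Delta*cosh(u), so that dE/sqrt(E^2 - Delta^2) -> du,
    # which removes the square-root singularity at E = Delta
    u = np.linspace(0.0, np.arccosh(1.0 + 40.0 * t / delta), n_grid)
    E = delta * np.cosh(u)
    minus_df_dE = 1.0 / (4.0 * t * np.cosh(E / (2.0 * t)) ** 2)  # -df/dE
    return 1.0 - 2.0 * np.sum(minus_df_dE * E) * (u[1] - u[0])
```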
The corresponding temperature dependence of H_p(T)/H_p(0) for all investigated samples is reported in Fig. 5 (for normalized temperatures). As shown, the data can be well fitted by introducing two energy gaps (thick line) in a simple α-model <cit.>: 2Δ_l/k_BT_c ∼ 3.7 and 2Δ_s/k_BT_c ∼ 2.4, both with similar weight. The presence of this small energy scale has been confirmed by the TDO measurements. Indeed, as shown in Fig. 6, the temperature dependence of the penetration depth can clearly be fitted by an exponential law, attesting to the presence of a fully open superconducting gap with 2Δ_s ∼ 2.4 k_BT_c (in sample A, as an example). Very similar results have been obtained in samples B (2Δ_s/k_BT_c ∼ 2.5), C (2Δ_s/k_BT_c ∼ 2.8), and G (2Δ_s/k_BT_c ∼ 2.4), attesting to the presence of a small gap for all doping contents, in good agreement with the temperature dependence of the lower critical field. The presence of this small gap is, to some extent, consistent with the μSR data <cit.>. However, in contrast with the present measurements, which do not show any significant change of the coupling ratios with doping (2Δ_s ∼ [2.4-2.8]k_BT_c), the μSR experiments suggested a clear increase of the coupling ratio of the small gap with Cu content, leading to the merging of the two energy scales for optimal doping.

Surprisingly, the observation of this small gap in the magnetic measurements is in striking contrast with the result obtained in the specific-heat measurements. Indeed, in <cit.> some of us showed that the temperature dependence of the specific heat of samples A, B, C, and D can be well described introducing one single coupling ratio 2Δ = [3.7-3.9]k_BT_c for all copper concentrations, in agreement with the thermal conductivity measurements <cit.>, which suggested that this system is a conventional single-gap superconductor. This fact is further supported by our present heat capacity measurements on samples E (see Fig. 1) and F (not shown here). Indeed, even if the presence of a small gap (Δ∼ k_BT_c, with a contribution weight of less than 10%) cannot be fully excluded, the temperature dependence of the electronic specific heat Δ C_p/T=[C_p(T,H=0)-C_p(T,H>H_c2)]/T can be well described by a single-gap model with 2Δ∼ 3.7 k_BT_c (see solid line in Fig. 1). Note that taking 2Δ∼ 3.7 k_BT_c leads to only very poor agreement with the experimental TDO data (see dashed black line in Fig. 6) or with the temperature dependence of H_p (see dashed line in Fig. 5). While TDO is a surface-sensitive method, Hall-probe measurements probe bulk properties; thus a difference between the surface and the bulk cannot explain our findings.

The explanation of such a discrepancy remains missing, but it is worth noting that the magnetic measurements probe the gap structure in the ab-plane, whereas the specific-heat measurements average the gap structure over all k-directions, and that the specific-heat measurements are mainly sensitive to heavy quasiparticles (γ∝ m^*), whereas the magnetic measurements mainly probe the light quasiparticles (1/λ^2 ∝ 1/m^*). This suggests a strong anisotropy of the effective mass over the Fermi surface, and that the amplitude of the gap is strongly related to the effective mass. However, the temperature dependence of H_c1 (and hence the gap distribution) is very similar to the one previously observed in 2H-dichalcogenides (NbSe_2 or NbS_2 <cit.>, open symbols in Fig. 5) despite very different electronic structures (see <cit.> and <cit.>, respectively).
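For reference, the two-gap α-model curve of Fig. 5 and the low-temperature exponential behavior behind the fit of Fig. 6 can be sketched as follows, reusing superfluid_density() from above. The equal weights and the standard low-T BCS asymptote are assumptions matching the text, not necessarily the exact fitting functions used in the paper:

```python
import numpy as np

def two_gap_superfluid_density(t, gaps=(3.7, 2.4), weights=(0.5, 0.5)):
    """alpha-model sketch: weighted sum of single-gap superfluid densities,
    with 2*Delta/(k_B*Tc) = 3.7 and 2.4 and similar weights, as in Fig. 5."""
    return sum(w * superfluid_density(t, g) for g, w in zip(gaps, weights))

def delta_lambda_low_T(t, two_delta_over_kTc=2.4):
    """Low-T BCS asymptote Delta lambda(T)/lambda(0) ~
    sqrt(pi*Delta0/(2*k_B*T)) * exp(-Delta0/(k_B*T)): the fully gapped,
    exponentially activated behavior behind the TDO fit; t = T/Tc."""
    d0 = 0.5 * two_delta_over_kTc
    return np.sqrt(np.pi * d0 / (2.0 * t)) * np.exp(-d0 / t)
```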
Note also that, in those latter systems, the multi-gap structure observed in magnetic measurements has been confirmed by both specific heat <cit.> and tunneling spectroscopy <cit.> measurements. The influence of the presence of a CDW, leading to a strong k-dependence of the electron-phonon coupling <cit.> (and hence of the gap), first also seemed to be excluded, since this CDW is present only in NbSe_2 (and not in NbS_2), but recent measurements clearly showed a strong softening of the phonon modes in some directions even in NbS_2 <cit.>.

The origin of the superconducting dome in Cu-TiSe_2 remains unclear. First-principles calculations emphasized the possible role of electron-electron correlations in the coupling constant of TiSe_2 (and MoS_2 flakes) <cit.>. On the other hand, x-ray experiments performed on TiSe_2 single crystals <cit.> showed that the end point of the CDW region occurs for pressures (∼ 5 GPa) much larger than the pressures over which a superconducting dome is observed (∼ 2-3.5 GPa). Thus the superconducting dome is probably not directly related to a quantum critical point corresponding to the vanishing of the CDW phase. On the other hand, some incommensurability of the CDW was observed in the superconducting region, suggesting that superconductivity could be related to the formation of CDW domain walls. This idea is further supported by the observation of CDW incommensurability also in Cu-doped samples by x-ray <cit.> and by optical measurements <cit.>. Note that the correlation length of the CDW (the size of the incommensurate domains) in the c direction was reported to be on the order of 22 unit cells <cit.>, which is strikingly similar to the superconducting coherence length. In high-T_c superconductors, the lock-in effect arises because the interlayer distance is larger than, or at least on the order of, the coherence length. A domain superstructure with exactly this scale could be the reason for this effect in Cu_xTiSe_2. Note also that the observation of the lock-in effect <cit.> in this system points to a strong variation of the line tension (i.e., superfluid density) along the c direction. Even if this scenario requires further consideration, strong modulations of the superconducting order parameter might well lead to different "gap measurements" depending on the probe used to determine this gap.

After 30 years of intensive research, the superconducting-gap structure still remains a hot topic in the cuprates. Recently, Bruér et al. have shown that in YBa_2Cu_3O_7-δ the tunneling spectrum gets parallel contributions from a two-dimensional band structure where, besides a conventional BCS d-wave pairing gap, an additional small "gap" coming from unpaired states is revealed <cit.>. A large body of experimental data suggesting a coexisting two-gap scenario, i.e., superconducting gap and pseudogap, over the whole superconducting dome in several classes of cuprates has been collected <cit.>. While the latter scenario seems to be excluded in Cu_xTiSe_2, a detailed study of different spatially sensitive channels remains to be carried out.

§.§ Critical fields and condensation energy

The critical fields H_c1 and H_c2, and the corresponding values of the penetration depth and coherence length of the different samples, were collected and are listed in Table I. The H_c2(0) values were derived from the specific-heat measurements. The data for crystals A, B, C, and D have been taken from <cit.> and are supplemented by measurements on the other samples.
The values of H_c1(0) are directly derived from H_p(0), introducing the α correction displayed in the table. In order to prove the accuracy of the λ values deduced from our H_p measurements, we performed a thermodynamic consistency check (see also <cit.>). The density of condensation energy, μ_0H_c^2/2, is related to the density of states at the Fermi level g(E_F) through μ_0H_c^2/2 ∼ g(E_F)Δ^2/2, H_c = Φ_0/(2√(2)πμ_0λ(0)ξ(0)) being the thermodynamic field. Introducing the Sommerfeld coefficient γ through γ=π^2k_B^2g(E_F)/3 and taking 2Δ∼ [3.7-3.9]k_BT_c, one obtains γ∼ (1.6±0.5)×10^9/(λ(0)[nm] ξ(0)[nm] T_c[K])^2 ∼ 6±2 mJ/molK^2 from our measurements. This value is in reasonable agreement with the measurements of the heat capacity performed by Morosan et al. <cit.>, giving γ∼ 4-5 mJ/molK^2, hence validating our results.

Figure 7 displays the evolution of the critical fields with the critical temperature of the samples. As shown in panel (a), H_c2 roughly scales as T_c^n, with n ∼ 1 (dashed line) and n ∼ 2 (solid line) for the underdoped and the overdoped samples, respectively, suggesting that the system is in the dirty (H_c2∝ 1/(ξ_0 l) ∼Δ/(v_F l) ∼ T_c) and clean (H_c2∝ 1/ξ^2 ∼Δ^2/v_F^2 ∼ T_c^2) regimes, respectively. The T_c dependence of H_c1 [panel (b)], and subsequently of H_c [panel (c)], is much more puzzling. Indeed, in conventional superconductors, one expects that n ∼ 1 and n ∼ 0 for H_c1 in the dirty and clean limits, respectively (in the dirty limit λ∝λ_L √(ξ_0/l) and H_c1∝ 1/λ^2 ∼ l/(λ_L^2 ξ_0) ∼ T_c). The T_c dependencies of H_c2 and H_c1 then lead to H_c ∝√(H_c1H_c2)∼ T_c whatever the disorder. Our measurements, however, suggest that H_c rather follows an H_c ∝ T_c^1.5 dependence; see the dashed line in panel (c). For comparison, H_c ∝ T_c is shown by the dash-dotted line as well. Such a surprising T_c dependence of H_c has been reported in iron-based materials (see for instance <cit.>) and could be the signature of either a strong pair-breaking effect or the presence of superconducting quantum critical points in the vicinity of the end point of the superconducting dome. The former possibility can be excluded here, as strong pair-breaking effects are expected to lead to some power-law temperature dependence of the superfluid density, in striking contrast with our measurements. Note that the important change of H_c1 with doping cannot be attributed to the change in the carrier concentration (n ∝ 1/λ^2).

Finally, we compare the values of λ, as well as ξ, derived from the lower and upper critical fields with those from the ARPES measurements. ARPES supplies the London penetration depth λ_L and the Pippard coherence length ξ_0 directly from the electronic band structure. On the other hand, the quantities λ and ξ obtained from H_c1 and H_c2 are affected by the mean free path of the electrons, l. In the dirty limit they are related through the formulas ξ_0=ξ^2/(0.731 l) and λ_L=λ√(1.33 l/ξ_0). In order to calculate λ_L we need to take into account the shape of the Fermi surface and the Fermi velocity v_F <cit.>. Here we get an estimate of λ_L=150±50 nm. Assuming a size of the superconducting gap Δ=0.6 meV, we estimate the coherence length ξ_0 using the formula ξ_0 = ħ v_F / (πΔ), which results in ξ_0=40±8 nm. These values were obtained on a sample with x=0.07, a concentration between those of samples B and G.
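Both the consistency check and the dirty-limit comparison reduce to elementary arithmetic; a sketch with illustrative round numbers (λ(0), ξ(0) and T_c of the order listed in Table I, while λ_L and ξ_0 are the ARPES estimates quoted above):

```python
# Thermodynamic consistency check and dirty-limit comparison, with
# illustrative inputs (exact per-sample numbers are in Table I):
lam, xi, Tc = 220.0, 20.0, 4.0                 # lambda(0), xi(0) [nm]; Tc [K]
gamma = 1.6e9 / (lam * xi * Tc) ** 2           # mJ/(mol K^2), prefactor from text
lamL, xi0 = 150.0, 40.0                        # ARPES estimates [nm]
l_from_xi = xi ** 2 / (0.731 * xi0)            # xi_0 = xi^2/(0.731 l) solved for l
l_from_lam = (lamL / lam) ** 2 * xi0 / 1.33    # lambda_L = lambda*sqrt(1.33 l/xi_0)
print(f"gamma ~ {gamma:.1f} mJ/mol K^2; "
      f"l ~ {l_from_xi:.1f} nm (from xi), {l_from_lam:.1f} nm (from lambda)")
# -> gamma ~ 5 mJ/mol K^2 and l ~ 14 nm, of the order of the values
#    discussed in the text (gamma ~ 6 +/- 2 mJ/mol K^2, l ~ 13.5 nm)
```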
In order to compare samples with similar copper concentrations, we interpolated the values of the penetration depth and coherence length from Table I and obtained λ=220 nm and ξ=20 nm, which would correspond to a sample with the same copper concentration as the one from ARPES. Taking these values and using the formulas from above, we obtained consistent estimates of the mean free path, l=13.5 nm, from both λ and ξ. This confirms that such an underdoped sample is indeed in the dirty limit, as suggested by the evolution of H_c2. We would like to point out that it is remarkable that in such a complicated system with a CDW and shallow electronic bands, like Cu_xTiSe_2, we could arrive at very good agreement between two completely independent experimental approaches. It shows that a rather simple Fermi-liquid-like approach works well also in this complicated system, yet it leaves an open question as to where, in such a smooth band structure, the two energy gaps could reside.

§ CONCLUSIONS

We have highlighted a surprising discrepancy between magnetic and thermodynamic measurements which led to seemingly contradictory results. Indeed, whereas the latter clearly suggested that this system is a conventional superconductor with 2Δ_l ∼ [3.7-3.9]k_BT_c, the temperature dependence of the superfluid density (for T→ 0) deduced from magnetic measurements (both HPM and TDO) clearly shows the existence of a much smaller gap 2Δ_s ∼ [2.4-2.8]k_BT_c. Our measurements also point out a surprising dependence of the condensation energy density on T_c.

This work was supported by EU ERDF (European regional development fund) Grant No. ITMS26220120005, by the Slovak Research and Development Agency under Grant No. APVV-14-0605, by the Slovak Scientific Grant Agency under Contract No. VEGA-0149/16 and VEGA-0409/15, by the U.S. Steel Košice, s.r.o., and by the French National Research Agency through Grant No. ANR-12-JS04-0003-01 SUBRISSYME.

CO-Cuprates T. Wu, H. Mayaffre, S. Kramer, M. Horvatic, C. Berthier, W. N. Hardy, R. Liang, D. A. Bonn, M.-H. Julien, Nature 477, 191 (2011); G. Ghiringhelli, M. Le Tacon, M. Minola, S. Blanco-Canosa, C. Mazzoli, N. B. Brookes, G. M. De Luca, A. Frano, D. G. Hawthorn, F. He, T. Loew, M. Moretti Sala, D. C. Peets, M. Salluzzo, E. Schierle, R. Sutarto, G. A. Sawatzky, E. Weschke, B. Keimer, L. Braicovich, Science 337, 821 (2012); J. Chang, E. Blackburn, A. T. Holmes, N. B. Christensen, J. Larsen, J. Mesot, R. Liang, D. A. Bonn, W. N. Hardy, A. Watenphul, M. v. Zimmermann, E. M. Forgan and S. M. Hayden, Nat. Phys. 8, 871 (2012).
Valla T. Valla, A. V. Fedorov, P. D. Johnson, P.-A. Glans, C. McGuinness, K. E. Smith, E. Y. Andrei, and H. Berger, Phys. Rev. Lett. 92, 086401 (2004); T. Kiss, T. Yokoya, A. Chainani, S. Shin, T. Hanaguri, M. Nohara, and H. Takagi, Nat. Phys. 3, 720 (2007).
Exciton H. Cercellier, C. Monney, F. Clerc, C. Battaglia, L. Despont, M. G. Garnier, H. Beck, P. Aebi, L. Patthey, H. Berger, and L. Forró, Phys. Rev. Lett. 99, 146403 (2007); J. van Wezel, P. Nahai-Williamson, and S. S. Saxena, Phys. Rev. B 83, 024502 (2011).
Xray-TiSe2 A. Kogar, G.A. de la Pena, S. Lee, Y. Fang, S.X.-L. Sun, D. B. Lioi, G. Karapetrov, K.D. Finkelstein, J.P.C. Ruff, P. Abbamonte, and S. Rosenkranz, Phys. Rev. Lett. 118, 027002 (2017).
Morosan E. Morosan, H.W. Zandbergen, B.S. Dennis, J.W.G. Bos, Y. Onose, T. Klimczuk, A. P. Ramirez, N.P. Ong, and R.J. Cava, Nat. Phys. 2, 544 (2006).
elect_struc M. Cazzaniga, H. Cercellier, M. Holzmann, C. Monney, P. Aebi, G. Onida, and V. Olevano, Phys. Rev.
B, B 85, 195111 (2012); D. Qian, D. Hsieh, L. Wray, E. Morosan, N. L. Wang, Y. Xia, R. J. Cava, and M. Z. Hasan, Phys. Rev. Lett. 98, 117007 (2007). Li 2007 S.Y. Li, G. Wu, X.H. Chen, and Louis Taillefer, Phys. Rev. Lett.99, 107001 (2007). Kacmarcik J. Kačmarčík, Z. Pribulová, V. Pal'uchová, P. Szabó, P. Husaníková, G. Karapetrov, and P. Samuely, Phys. Rev. B 88, 020507(R) (2013). Zaberchik M. Zaberchik, K. Chashka, L. Patlgan, A. Maniv, C. Baines, P. King, and A. Kanigel, Phys. Rev. B 81, 220505(R) (2010). lock-in Z. Medvecká, T. Klein, V. Cambel, J. Šoltýs, G. Karapetrov, F. Levy-Bertrand, B. Michon, C. Marcenat, Z. Pribulová, and P. Samuely, Phys. Rev. B 93, 100501(R) (2016). Oglesby C.S. Oglesby, E. Bucher, C. Kloc, H. Hohl, J. Cryst. Growth 137, 289 (1994). Brandt E.H. Brandt, G.P. Mikitik, and E. Zeldov, J. of Exp. and Theor. Phys. 117, 439 (2013). DienerP. Diener, P. Rodière, T.Klein, C.Marcenat, J.Kačmarčík, Z. Pribulová, D.J. Jang, H.S. Lee, H.G. Lee, S.I. Lee, Phys. Rev. B 79, 220508(R) (2009). Klein T. Klein, D. Braithwaite, A. Demuer, W. Knafo, G. Lapertot, C. Marcenat, P. Rodière, I. Sheikin, P. Strobel, A. Sulpice, and P. Toulemonde, Phys. Rev. B 82, 184506 (2010). Sullivan P. F. Sullivanand G. Seidel, Phys. Rev. B 173, 679 (1968). Levy F. Levy-Bertrand, B. Michon, J. Marcus, C. Marcenat, J. Kačmarčík, T. Klein, H. Cercellier, Physica C 523, 19 (2016). Zeldov E. Zeldov, A.I. Larkin, V.B. Geshkenbein, M. Konczykowski, D. Majer, B. Khaykovich, V.M. Vinokur, H. Shtrikman, Phys. Rev. Lett. 73, 1428 (1994). Bean-Livingston C.P. Bean and J.D. Livingston, Phys. Rev. Lett. 12, 14 (1964). Pribulova Z. Pribulová, Z. Medvecká, J. Kačmarčík, V. Komanický, T. Klein, P. Husaníková, V. Cambel, J. Šoltýs, G. Karapetrov, and P. Samuely, Acta Phys. Pol. A 126, 370 (2014). Brandt2 E.H. Brandt, Phys. Rev. B 59, 3369 (1999). NbS2 P. Diener, M. Leroux, L. Cario, T. Klein, and P. Rodière, Phys. Rev. B 84, 054531 (2011); J. D. Fletcher, A. Carrington, P. Diener, P. Rodière, J. P. Brison, R. Prozorov, T. Olheiser, and R. W. Giannetta, Phys. Rev. Lett. 98, 057003 (2007). Padamsee H. Padamsee, J.E. Neighbor, and C. Shiffman, J. Low Temp. Phys. 12, 387 (1973). elect_struc_NbSe2 S. V. Borisenko, A. A. Kordyuk, V. B. Zabolotnyy, D. S. Inosov, D. Evtushinsky, B. Büchner, A. N. Yaresko, A. Varykhalov, R. Follath, W. Eberhardt, L. Patthey, and H. Berger, Phys. Rev. Lett. 102, 166402 (2009); D. J. Rahn, S. Hellmann, M. Kalläne, C. Sohrt, T. K. Kim, L. Kipp, and K. Rossnagel, Phys. Rev. B 85, 224532 (2012). Kacmarcik2 J. Kačmarčík, Z. Pribulová, C. Marcenat, T. Klein, P. Rodière, L. Cario, and P. Samuely, Phys. Rev. B 82, 014518 (2010).SpectroNbS2 I. Guillamon, H. Suderow, S. Vieira, L. Cario, P. Diener, and P. Rodière, Phys. Rev. Lett. 101, 166407 (2008). Leroux1 M. Leroux, I. Errea, M. Le Tacon, S.-M. Souliou, G. Garbarino, L. Cario, A. Bosak, F. Mauri, M. Calandra, and P. Rodiere, Phys. Rev. B 92(2015)140303.Leroux M. Leroux, M. Le Tacon, M. Calandra, L. Cario, M-A. Measson,P. Diener, E. Borrissenko, A. Bosak, and P. Rodiere, Phys. Rev. B 86, 155125 (2012). Das-EE T. Das and K. Dolui, Phys. Rev. B 91, 094510 (2015). Joe Y. I. Joe, X. M. Chen, P. Ghaemi, K. D. Finkelstein, G. A. de la Pena, Y. Gan, J. C. T. Lee, S. Yuan, J. Geck, G. J. MacDougall, T. C. Chiang, S. L. Cooper, E. Fradkin and P. Abbamonte, Nat. Phys. 10, 421 (2014). Lioi D.B. Lioi, R.D. Schaller, G.P. Wiederrecht, and G. karapetrov, arXiv:1612.01838v1. BruerJ. Bruér, I. Maggio-Aprile, N. Jenkins, Z. Ristič, A. Erb, C. Berthod, O. Fischer and C. 
Renner, Nat. Commun. 7:11139 doi: 10.1038/ncomms11139 (2016). Hufner S. Hufner, M.A. Hossain, A. Damascelli, and G.A. Sawatyky, Rep. Prog. Phys. 71 (2008) 062501. thermoconsistency T. Klein, P. Rodière, and C. Marcenat, Phys. Rev. B 86, 066501 (2012). Ni-doped P. Rodière, T. Klein, L. Lemberger, K. Hasselbach, A. Demuer, J. Kačmarčík,Z.S. Wang, H.Q. Luo, X.Y. Lu, H.H. Wen, F. Gucmann, and C. Marcenat, Phys. Rev. B 85, 214506 (2012).Evtushinsky D.V. Evtushinsky, D.S. Inosov, V.B. Zabolotnyy, M.S. Viazovska, R. Khasanov, A. Amato, H.-H. Klauss, H. Luetkens, Ch. Niedermayer, G. L.Sun, V. Hinkov, C.T. Lin, A. Varykhalov, A. Koitzsch, M. Knupfer, B. Büchner, A. A. Kordyuk, and S.V. Borisenko, New J. Phys. 11, 055069 (2009).
http://arxiv.org/abs/1704.08463v2
{ "authors": [ "Z. Pribulová", "Z. Medvecká", "J. Kačmarčík", "V. Komanický", "T. Klein", "P. Rodière", "F. Levy-Bertrand", "B. Michon", "C. Marcenat", "P. Husaníková", "V. Cambel", "J. Šoltýs", "G. Karapetrov", "S. Borisenko", "D. Evtushinsky", "H. Berger", "P. Samuely" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170427075745", "title": "Magnetic and thermodynamic properties of Cu$_x$TiSe$_2$ single crystals" }
The utility of a Bayesian analysis of complex models and the study of archeological ^14C data.

Department of Statistics, The Hebrew University of Jerusalem & Department of Statistics, University of Michigan, Ann Arbor [Department of Statistics, University of Michigan, 1085 South University, Ann Arbor, MI 48109-1107; [email protected]]
December 30, 2023

The paper presents a critical introduction to the complex statistical models used in ^14C dating. The emphasis is on the estimation of the transition times between a sequence of archeological layers. Although frequentist estimation of the parameters is relatively simple, confidence interval construction is not standard, as the models are not regular. I argue that the Bayesian paradigm is a natural approach to these models. It is simple, and it immediately yields credible sets, with a natural interpretation and simple construction. Indeed, it is the standard tool of ^14C analysis. However, and necessarily, the Bayesian approach is based on technical assumptions that may dominate the scientific conclusions in hard-to-predict ways. I exemplify the discussion in two ways. Firstly, I simulate toy models. Secondly, I analyze a particular data set from the Iron Age period in Tel Rehov. These data are important to the debate on the absolute time of the Iron Age I/IIA transition in the Levant, and in particular to the feasibility of the biblical story of the United Monarchy of David and Solomon. Our conclusion is that the data in question cannot resolve this debate.

§ INTRODUCTION

Statistical inference is built on the assumption that the observed data tell us about the parameters of interest. Typically, this is translated into a statement that the data are random, and their distribution function depends on real unknown parameters. In the context of radiocarbon data, this is, for example, the statement that the laboratory carbon dating is well approximated by a normal random variable with unknown mean and unknown variance. The term random variable here is understood in the sense of a counterfactual. If the lab conducts the same experiment again and again, with similar specimens, the result of each experiment will be different, with the actual distribution as prescribed by the statistical model. The important thing is that, to a large extent, this (statistical) model is objective, based on the physics of the experiment, and can be verified by empirical study (i.e., repeating the measurements). In fact, the sort of measurements used in radiocarbon dating has a built-in repetition mechanism, e.g., the days in counting measurements.

The Bayesian paradigm builds over this another layer of randomness, where the unknown parameters themselves are considered random, but this randomness has a different meaning than that of the random measurement error. It is a way to describe what the data analyzer thought and knew about these parameters before observing the data. It is the a priori knowledge on the subject matter. This information is summarized in the a priori distribution. The Bayesian approach may not be the dominant approach in applied statistics, but it is the ruling one in radiocarbon archaeological dating (Steier and Rom 2000; Manning 2001; Bayliss and Bronk Ramsey 2004; Bronk Ramsey 2009).
The standard statistical packages used in archaeology are Bayesian: OxCal (Bronk Ramsey and Lee 2013) and BCal (Buck, Christen and James 1999). The first randomness can be simple, justifiable, and safe. However, the second, the a priori distribution, typically lacks objectivity, describes a unique situation, and cannot be verified experimentally. Whatever the data analysis is, the researcher should be careful to ensure that the analysis is robust, in the sense that the conclusions are derived from the data and not from non-valid assumptions (Berger and Berliner 1986; Berger 1985 ch. 4.7). In the context of radiocarbon dating of archeological sites, distribution assumptions enter in many ways. In particular we have:

* Assumptions about the distribution of the laboratory measurement errors (typically, and for convenience, assumed to be normal).
* Assumptions about the calibration curves used (see below).
* Assumptions about the archeological context, stratigraphic sequences, and the time order of different findings.
* Implicit a priori assumptions on the dates involved given the archeological constraints.

The conceptually difficult assumptions are those of point <ref> and, similarly but to a lesser extent, those of point <ref>. They are made as a matter of fact, without much regard to their implications. Typically, they are just part of the algorithm (e.g., using OxCal or BCal in archeology), and the subject matter scientist is not aware of them. In Ritov et al. (2014) we argued that Bayesian priors for complex models should be chosen with special care. They may lead to biased estimators in unpredicted ways. Although it is usually true that good estimators can be based on well chosen priors, the prior can be justified only by a sophisticated investigation of the theoretical behavior of the resulting estimators. Moreover, we argued that the fact that a prior formalizes reasonable assumptions on reality is not enough, and a plausible prior may lead to a counter-intuitive bias. A good prior for one scientific question (e.g., the starting time of a period) may yield a bad answer for a different question (e.g., its length), even if both answers are based on the same data.

In the current paper, I try to investigate the impact of prior assumptions on the analysis of complex ^14C data from archeological strata. The Bayesian approach is a simple way to introduce any theory the researcher has in addition to the radiocarbon data. The results of the analysis presented by a Bayesian researcher do not differentiate between this theory and hard evidence. Any theory used in the analysis modifies the conclusions; otherwise there was no rationale to introduce it. The dating of a layer by an archeologist who believes the Biblical text is reliable should be different from the conclusions of the analysis of the same data by a researcher who believes that the story of the United Monarchy (of David and Solomon) was invented by a later regime. Of course, this is not the way the Bayesian paradigm is used in practice, but then, I believe, it loses its philosophical grounds.

The structure of this paper is the following. The impact of different statistical approaches to the analysis of this type of data is exemplified in Section <ref>. This is done by the presentation of simple models and their simulations. In Section <ref>, a somewhat naive introduction to the analysis of the calibration curve is given, and it is argued that the standard Bayesian approach may be problematic.
In Section <ref>, I present the data to be analyzed later, that of Tel Rehov, where a new parallel analysis of the data using Bayesian and non-Bayesian methods is given; in particular, it is argued that a standard statistical approach may dictate a simpler analysis of the calibration curve than is used in practice. Finally, some concluding remarks are given in Section <ref>.

§ TEL REHOV

The calibration curve and Tel Rehov (Mazar et al. 2005:213; Mazar and Streit 2014). Outliers in Tel Rehov (Mazar and Streit 2014). Sharon phase I (out of?): putting in context, comparing to Tel Dor, argument against the low chronology. The importance of the calibration curve. The effect of the calibration curve and the assumption of a gap helps, so they say, to push the boundary. The calibration curve seems (the graph is given) to be non-linear. Taking the 68% too seriously, and ignoring the 20%. Analysis of outliers, essentially by identification and exclusion. More interesting is focusing only on data relevant to the transition I/IIA.

Bruins et al. (2005): "Essentially, a succession of 'Phases' and 'Boundaries' needs to be defined in correct stratigraphic order (note: 'Boundaries' are horizons we wish to be able to date between the identified 'Phases' with their C14 dating information)". Phases are periods of time that cannot overlap. The C14 is pushed to its very limit of resolution (with reference to van der Plicht and Bruins 2001). They claim that a simple BP analysis is misleading. Outliers should have little impact in a holistic approach.

Bruins et al., Science 2003: The importance of the chronology, and what was considered a lack of precision of the C14 dating. Short summary of the Iron I/IIA period in Tel Rehov. A short summary of the OxCal program.

Mazar et al., Ladder of Time: Very detailed description of the findings by locus.

Mazar (in Hebrew): A detailed description.

Van der Plicht and Bruins (in Levy and Higham): Quality control, describing the Tel Rehov measurement accuracy and problems that may cause gross error (a nice detailed list).

Boaretto, Jull, Gilboa, and Sharon, 2005: Dor vs. Rehov. Dark age in Greece and Cyprus. High vs. low chronology. Is C14 precise enough? Precise processing in two labs. The actual SD is larger than the quoted one. Careful analysis of error in single measurements. "Can 14C Dating Resolve the Levantine Iron Age Dilemma? The obvious answer here is yes, provided that enough measurements are available. Given the current state-of-the-art technology in archaeological seriation, 14C accuracy and precision, and statistical modeling, the investigation of the chronology of the eastern Mediterranean in this period will have to be based on numerous, replicated dates, taken from different sites and measured by different procedures. A large, replicated data set is the only way to overcome the inevitable noise in the model."

Finkelstein and Piasetzky, 2010: Again the big Israel Iron I/II transition. They selected data to be careful (only short-lived samples, well defined stratigraphically and ceramically, stringent rejection of outliers (5-sigma!)). They claim that Mazar and Bronk Ramsey weren't as careful, and they don't like their "mathematical procedure". They also claim that MBR ignore the big picture of related data from Tel Rehov and other sites, and that their selection criteria of data and sites are doubtful. Model I includes data from the periods just before and just after the Iron I/II transition (many sites).
The analysis is in OxCal, assuming a short transition time; in their Model II they assume a gradual transition, and they average data from the same time.* Mazar and Bronk Ramsey, response: the I/IIA transition was a long process.* Gilboa, Sharon, and Boaretto, Tel Dor Iron Age: transitions, periodization.* Lee, Bronk Ramsey, and Mazar: the trapezoid method.* Mazar, "The debate over the chronology of the Iron Age" (2005): a change of paradigm from the HC to the LC of Finkelstein and his predecessors. There are two anchors, 1140/30 BCE, the end of the Egyptian presence, and the Assyrian conquests of 732–701 BCE; in the middle there are 400 years. Rehov may be related to Shishak. For him, there are three anchors within the period (Arad, the Negev, Taanach) that make Finkelstein's LC impossible.§ ARCHEOLOGICAL BAYESIAN ANALYSIS: SIMULATIONS In this section I consider a sequence of three simple and artificial examples which cover the ingredients of the analysis of the Tel Rehov data, starting with almost the simplest possible model. In all cases I check to what extent the conclusions are sensitive to the technical assumptions. I use simulations, so it is easy to evaluate the performance of the estimators, as the truth is known. §.§ A transition between two periods In a typical situation data are collected from two consecutive strata, and findings from the two strata are dated. The real archaeological question may be the time of the transition between the two periods represented by the strata. In the following I show that data which are irrelevant to the archaeological question may sometimes influence the Bayesian conclusion quite heavily. Consider the following example. Stratum I started at the known time t_s = 1100 BCE and ended at the unknown time τ. Stratum II started at τ and ended at the known time t_e = 900 BCE. We have two observations at time 1000 BCE, one from each stratum. From Stratum I there were also K-1 measurements from sources originating around time 1100, while from Stratum II we had M-1 measurements from around time 900. All measurements have a standard error equal to 10 years. We want to estimate the transition time τ. A frequentist would say that, given the two data points at 1000, τ should be somewhere between 1020 and 980 BCE, as it should be no more than two standard deviations from these points. But if so, the K+M-2 remote points are at least three standard deviations from τ and hence tell us almost nothing about its value. By the symmetry of the likelihood, the frequentist would estimate τ by 1000, and the confidence interval should be symmetric around this value. Consider now the Bayesian analysis. It seems natural to assume that τ has a priori a uniform distribution on the interval (t_s,t_e), and that given τ, the particular dates are independent and uniform on (t_s,τ) and (τ,t_e) respectively. On the other hand, it seems reasonable to assume that the estimate of τ should be based only on the two measurements at 1000 BCE; the rest of the measurements seem irrelevant to the dating of τ, being simply too remote from the boundary. However, this is not so. In Figure <ref> I plot the a posteriori credible intervals as a function of M, for K=5. The remote observations influence the credible sets dramatically because the uniform distribution on (t_s,τ) has density equal to 1/(τ-t_s); since we have K observations from this interval, the likelihood carries a factor of (τ-t_s)^{-K}, i.e., the total number of observations, K, from Stratum I has an influence on the Bayesian conclusion. Similarly, there is a factor of (t_e-τ)^{-M} due to Stratum II. The latter factor favors a short Stratum II, and thus, as the number of remote young findings increases, the a posteriori distribution tends to shorter second periods.
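This effect is easy to reproduce numerically. The following is a minimal sketch (not code from the paper) using the hypothetical data described above, working directly in years BCE: each observation contributes its marginal likelihood under the conditionally uniform model, and increasing M pulls the posterior toward 900 BCE even though the added points are remote.

import numpy as np
from scipy.stats import norm

t_s, t_e, sig = 1100.0, 900.0, 10.0           # stratum boundaries (BCE) and lab error
K, M = 5, 20                                  # numbers of observations in strata I and II
yI = np.r_[1000.0, np.full(K - 1, 1100.0)]    # stratum I: boundary point plus remote points
yII = np.r_[1000.0, np.full(M - 1, 900.0)]    # stratum II: boundary point plus remote points

tau = np.linspace(t_e + 1.0, t_s - 1.0, 2000) # grid of candidate transition times
logpost = np.zeros_like(tau)                  # flat prior on tau
for y in yI:                                  # y uniform on (tau, t_s), observed with error
    d = norm.cdf((t_s - y) / sig) - norm.cdf((tau - y) / sig)
    logpost += np.log(np.maximum(d, 1e-300)) - np.log(t_s - tau)
for y in yII:                                 # y uniform on (t_e, tau)
    d = norm.cdf((tau - y) / sig) - norm.cdf((t_e - y) / sig)
    logpost += np.log(np.maximum(d, 1e-300)) - np.log(tau - t_e)
w = np.exp(logpost - logpost.max())
w /= w.sum()
print("posterior mean of tau for M =", M, ":", round(float((tau * w).sum()), 1), "BCE")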
Now, the flat prior on τ can be corrected to balance out these factors in the likelihood; i.e., we could use a prior proportional to (τ-t_s)^K (t_e-τ)^M. This would yield a reasonable solution for the specific findings with the specific values of M and K, at the price of a very informative prior, which would not fit any other findings in this area. Roughly speaking, any admissible statistical procedure can be constructed using some prior, but such priors may be too ad hoc, and should fit exactly the particular scientific question asked. In particular, answers to different scientific questions should be based on different priors, even if all of them are based on the same lab measurements (Ritov et al. 2014). There is an intermediate way, the empirical Bayes approach. In this approach we can assume that Y_ij ∼ N(τ_ij, σ_ij^2) and τ_ij ∼ π_i, i=1,2, j=1,…,n_i, where the σ_ij are known, and the τ_ij and π_i are unknown except that supp π_1 ⊆ [t_s,τ] and supp π_2 ⊆ [τ,t_e]. Statistically, deconvolution is difficult, and hence π_1 and π_2 can be estimated only at a very slow rate. However, estimating the smoothed distributions π_i * N(0,σ_ij^2) is easy, and estimating the Bayes procedure is again an easy task. In Figure <ref> we present an analysis of the example; the distributions π_1 and π_2 were estimated by an (approximate) nonparametric MLE (on a grid) using the EM algorithm. §.§ A sequence of events I consider now the timing of a sequence of events μ_1,μ_2,…,μ_M, and assume that we have independent radiocarbon datings of these events, denoted by Y_1,Y_2,…,Y_M respectively. It is assumed that Y_m is Gaussian with mean μ_m and known standard deviation σ_m. We assume that it is well established that the times are ordered, μ_1<μ_2<…<μ_M. For simplicity, we add that it is known that t_s<μ_1 and μ_M<t_e, and to simplify notation we define μ_0=t_s and μ_{M+1}=t_e. There is no simple non-Bayesian solution to this problem. Finding the maximum likelihood estimator, the value of the parameters that maximizes the joint probability density at the observations while obeying the order restriction, is not difficult, although the solution has no closed form and should be found by a simple numerical algorithm. However, the construction of good confidence intervals is not trivial, e.g., because the model is not a regular parametric model and the data distribution depends heavily on the values of the parameters. Constructing confidence intervals which do not use the order restrictions is easy, but seems to be an inefficient use of the data. Bayesian estimators and credible intervals seem to be just the right solution. Of course, the price is a strong dependency on implicit assumptions built into the prior. It may seem natural to use a “non-informative” prior, that is, to assume that if μ_{m-1} and μ_{m+1} are known, then μ_m is uniform on the interval (μ_{m-1},μ_{m+1}). However, the order restriction is quite tight. This prior prescribes that a priori (μ_m-t_s)/(t_e-t_s) is a beta random variable with parameters α=m and β=M-m+1; in particular, its mean is t_s + (m/(M+1))(t_e-t_s) and its standard deviation is ((t_e-t_s)/(M+1)) √(m(M-m+1)/(M+2)). If t_s=3150 BP, t_e=2850 BP, M=10, and m=2, then a priori it is assumed that, with probability 0.95, μ_2 is between 3142 and 3016 BP.
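The quoted interval can be checked directly from the beta quantiles; a one-line verification using scipy (the 0.95 is taken here as the equal-tailed 95% interval, which is an assumption):

from scipy.stats import beta

t_s, t_e, M, m = 3150.0, 2850.0, 10, 2
q = beta.ppf([0.025, 0.975], m, M - m + 1)   # quantiles of (mu_m - t_s)/(t_e - t_s)
print(t_s + q * (t_e - t_s))                 # approximately [3142, 3016] BP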
The a posteriori distribution is complicated: it is hard to compute the required integrals numerically in high dimensions, and it is not simple to draw a random sample directly and efficiently from the posterior. Luckily, there is an off-the-shelf procedure that enables one to sample a Markov chain from the posterior. Markov chain Monte Carlo (MCMC) is the main numerical tool of Bayesian statistics; cf. Hastings (1970), Geman and Geman (1984), and Gelman et al. (2014). As an example, I simulated the following situation: 10 samples were taken, known to be in the interval 3150 BP to 2850 BP. The actual values were a sequence, 10 years apart, from 3140 BP to 3050 BP. The observations were Gaussian with σ = 30. The Bayesian estimates are based on a Markov chain with K=1,000,000 steps. The results are given in Table <ref>. Clearly, the Bayesian estimates improve over the raw estimates Y themselves, and the credible intervals cover the true values in all cases. However, in a similar example, where M was increased to 30 and the real times were monotone but not evenly spaced, a clear bias is introduced by the “non-informative” prior, as demonstrated in Figure <ref>.
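A sampler for this kind of example can be sketched as follows (an illustration, not the implementation behind Table <ref>, and with a much shorter chain): single-site random-walk Metropolis steps, with proposals that violate the order constraint rejected, which is equivalent to a flat prior on the ordered cone. Since the times are in BP, the ordering is decreasing.

import numpy as np
rng = np.random.default_rng(0)

t_s, t_e, sig, M = 3150.0, 2850.0, 30.0, 10   # the simulated setting described above
mu_true = np.arange(3140.0, 3040.0, -10.0)    # 3140, 3130, ..., 3050 BP
Y = mu_true + sig * rng.standard_normal(M)

mu = np.linspace(3145.0, 2855.0, M)           # ordered initial state (decreasing in BP)
chain = np.empty((20000, M))
for k in range(chain.shape[0]):
    for m in range(M):                        # single-site Metropolis updates
        hi = mu[m - 1] if m > 0 else t_s      # neighbours bound the move
        lo = mu[m + 1] if m + 1 < M else t_e
        prop = mu[m] + 10.0 * rng.standard_normal()
        if lo < prop < hi:                    # outside the cone the prior is zero: reject
            logr = ((Y[m] - mu[m])**2 - (Y[m] - prop)**2) / (2.0 * sig**2)
            if np.log(rng.random()) < logr:
                mu[m] = prop
    chain[k] = mu
post_mean = chain[5000:].mean(axis=0)         # discard burn-in
print(np.round(post_mean, 1))                 # Bayesian estimates, to compare with Y
print(np.round(Y, 1))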
§.§ Different layers We assume now that the observations came from G consecutive layers. Formally, let Y_gm be the mth observation from layer g, and assume as before that Y_gm, g=1,…,G, m=1,…,M_g, are independent Gaussian random variables with means μ_gm and standard deviations σ_gm respectively. The scientific assumption that the layers are ordered is translated to the formal assumption that for any k=1,…,M_g we have max_m μ_{g-1,m} < μ_{g,k} < min_m μ_{g+1,m}. There are many estimators and approaches that can be used to analyze such data. The MLE is one of them. Another class of estimators is that of Bayesian estimators; it is a class, as the estimator depends heavily on the prior, even if the prior looks, prima facie, non-informative. Suppose, for simplicity, that it is known that all of the μ_gm are between t_s and t_e. One simple-minded prior assumes that μ_{g,1},…,μ_{g,M_g} are independent and uniformly distributed between max_m μ_{g-1,m} and min_m μ_{g+1,m}. Another prior postulates that the transition times between the periods are taken from the uniform distribution on the interval between t_s and t_e; given the boundaries, say τ_g and τ_{g+1}, the events μ_{g,k} are independent and have a given prior distribution on the interval (τ_g,τ_{g+1}), for example the uniform or, more generally, a scaled beta. These two priors seem to be quite similar, but in Figure <ref> a typical example is presented. Four layers with four observations each were sampled. This is a simulation, and therefore we know the ground truth: the μ's are known. Random observations were drawn, and the three estimators suggested above were calculated. The MLE was calculated using a modification of the pool-adjacent-violators algorithm (a sketch is given at the end of this section), and the two Bayes estimators were calculated using Green's MCMC algorithm. The horizontal lines denote the boundary points between the layers. The dashed lines are the true values, while the solid lines are those estimated using the second Bayes procedure. The vertical lines denote the division into the four groups. It can be observed that the second Bayes procedure is clearly biased (all simulations looked essentially the same). The first Bayes estimator and the maximum likelihood estimator are very similar. The Bayesian approach is extremely convenient since it can handle, relatively easily, complex models with many constraints on the parameters and information about how one parameter is related to another. However, this comes at a price, since it is hard to foresee which assumptions may bias the conclusions. In the context of radiocarbon dating there is one particular complication that must be dealt with: the calibration curve. In the previous sections we assumed that the event times were well ordered in the mean of the measurement. This is not a reasonable assumption. It makes sense to assume that they are ordered in real time, but the mean of an observation is not a monotone function of its occurrence time; they are connected through the calibration curve. If the calibration curve were known exactly, this would be just a minor nuisance, since the MCMC method of the previous section can easily be adapted to the change of time scale. More generally, we should add to this analysis the observations used to create the calibration curve, and try to model both of them together.
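For reference, the pool-adjacent-violators step mentioned above can be sketched as follows; this is a generic weighted isotonic fit with hypothetical numbers, while the actual computation in the paper uses a modification of it adapted to the layer structure.

import numpy as np

def pava(y, w):
    # Pool-adjacent-violators: weighted least-squares non-decreasing fit.
    vals, wts, cnt = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:          # violation: pool blocks
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2:] = [wts[-2] + wts[-1]]
            cnt[-2:] = [cnt[-2] + cnt[-1]]
            vals[-2:] = [v]
    return np.repeat(vals, cnt)

Y = np.array([3120.0, 3135.0, 3090.0, 3100.0, 3020.0])        # hypothetical BP dates
print(-pava(-Y, np.ones_like(Y)))                             # non-increasing fit in BP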
§ A FEW WORDS ON THE CALIBRATION CURVE In our analysis so far we assumed that the laboratory measures directly, albeit with error, the age of the finding (a seed, an olive kernel, or similar). However, this is not true. The laboratory presents its finding as an artificial date called Before Present (BP). The translation from BP to a standard calendar date is done using the calibration curve, which is based on ^14C dating of tree rings. One of the main difficulties with ^14C dating is that this calibration curve is non-linear, non-monotone, and measured with noise and smoothing (i.e., mostly on groups of 10 years). For completeness, we discuss now a naively simple model for ^14C calibration. The BP age is an artificial construct representing the measured level of ^14C activity (Stuiver and Polach 1977; Bronk Ramsey 2008). In fact, the BP age is just a complicated way to express the fraction of ^14C in the sample, and is given by BP = -8033 log f, where f is the fraction expected to be found in 1950 (the “present”) relative to the atmospheric concentration of ^14C in 1950. It would be the age of the sample if three conditions were met: (1) there was no measurement error; (2) the atmospheric concentration of ^14C at the time the sample was created was the same as in 1950; and (3) the half-life of ^14C was 5568 years (the Libby half-life). None of these assumptions is correct; e.g., it is more likely that the half-life equals 5730 years (Godwin 1962). The third problem is just a minor nuisance. The other two problems are real, not easy to correct, and they interplay. The concentration of ^14C in the atmosphere is not stable, due to a few processes (Bronk Ramsey 2008). ^14C is generated in the upper atmosphere by nuclear reactions induced by cosmic rays, masked by the Earth's magnetic field; the former is more or less constant, but the latter is subject to temporal variation. Radiocarbon is removed from the atmosphere by the natural radioactive decay of the isotope, as well as by injection of “old” carbon from water and underground; again, the latter is subject to variation in time. The result is a time-varying concentration. The calibration data enable us to get an approximation of the history of the ^14C reservoir. A good approximation is given by C(y) e^{(y-1950)/8267} = C_0 e^{-BP/8033}, or [c14conc] C(y)/C_0 = e^{(1950-y)/8267 - BP/8033}, where 8033 years is the Libby mean life and 8267 years is the Cambridge mean life of ^14C, C_0 is the atmospheric fraction of ^14C in 1950, and C(y) that at year y. In Figure <ref> a plot of this curve for six millennia is given. If we concentrate on the period relevant to Biblical times, we obtain Figure <ref>. The period between 850 and 250 BCE is interesting: it has two periods of very high ^14C production, each followed by a period of fast drop. We added to the graph two lines showing what the reduction of the atmospheric ^14C concentration would be if the only relevant active process after the peaks at 734 and 334 BCE were radioactive decay; in particular, if there were no atmospheric generation of new ^14C atoms.
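Equation (<ref>) also gives a direct way to pass from a calibration pair (y, BP(y)) to the implied concentration ratio. A two-line helper is sketched below; the numerical pairs are made up for illustration, not IntCal values.

import numpy as np

def conc_ratio(y, bp):
    # Implied atmospheric ratio C(y)/C_0 from a calendar year y (CE; negative
    # for BCE) and its calibration value BP(y), following equation (c14conc).
    return np.exp((1950.0 - y) / 8267.0 - bp / 8033.0)

print(conc_ratio(np.array([-750.0, -500.0]), np.array([2500.0, 2400.0])))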
As can be seen from Figure <ref>, the actual decay is even faster, as if no new ^14C were generated and some old carbon were injected. Looking at equation (<ref>) differently, it gives the physical process that defines the calibration curve: BP = (8033/8267)(1950-y) - 8033 log(C(y)/C_0) ≈ (8033/8267)(1950-y) - 8033 C(y)/C_0 + 8033. Thus the ^14C concentration process is similar to the calibration process. It may be reasonable to assume that the concentration is the result of many different independent processes, none of which dominates the rest (some of them were mentioned above), and that different years are independent. It will be convenient to take as a prior for the calibration line the assumption that it was generated by a Wiener process with a drift; it is a simple tool. The radioactive decay is not constant but proportional to the current concentration (an Ornstein–Uhlenbeck process); however, since the half-life of ^14C is much longer than the time scales we consider in this paper, this effect can be ignored. Does the C(t)/C_0 process really look like a Brownian motion? The increments do not seem to have a stationary distribution; probably the drift is not constant, and some fluctuations are larger than may be expected. Nevertheless, positing a prior based on the assumption that C(y) is a Brownian process with a drift seems justified. Since the Brownian process is Markovian, the analysis of each century, say, is essentially independent and local, and the lack of stationarity of the underlying concentration is not really an obstacle. But is it really a robust assumption, one that does not bias the analysis, as the Bayesians would try to convince us? We test this using a simulated model of reality. My conclusion from this simulation is that the Bayesian analysis should be considered with care. Here are the details. I use the following notation and assumptions. Let y_1,y_2,…,y_n be the years for which we have calibration data, and z_1,z_2,…,z_n the calibration data. Each observation is a Gaussian random variable with mean β_i and variance ν_i. The vector β_1,β_2,…,β_n is a priori assumed to be Gaussian, where the a priori mean of β_i is assumed to be γ_0+γ_1 y_i (γ_0 and γ_1 known, for simplicity), and the covariance of β_i and β_j is σ^2 min{y_i,y_j}, with σ^2 unknown. Thus β_1,…,β_n are a realization of a Wiener process with a drift. Gaussian Bayesian models are convenient because the a posteriori distribution can be easily calculated. Since both the statistical model and the prior are Gaussian, the a posteriori distribution of β_1,…,β_n is Gaussian, and it is relatively simple to explicitly calculate its a posteriori mean (the Bayesian calibration curve) and the a posteriori variances and covariances of all pairs (β_i,β_j). However, Gaussian priors can be less naive and robust than one may assume (Tuo and Wu 2015). I estimated the variance σ^2 using empirical Bayes concepts (Casella 1985), by looking for the value of σ^2 for which the sum of the a priori second moments equals its a posteriori value. In Figure <ref> one simulation example is plotted, where the true calibration line was β_i = γ_0 + γ_1 y_i + 70 ((y_i-y_1)/(y_n-y_1))^3 sin(y_i/20). The “measurements” were taken every 10 years with a standard deviation of 21 years. I then calculated estimates and 90% confidence intervals for 5 `new observations', each measured with a standard error of 20 years. The closed-form conjugate update is sketched below.
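Since the model and the prior are jointly Gaussian, the a posteriori mean and covariance are available in closed form; the following sketch illustrates the computation (the drift and variance values are arbitrary, and σ^2 is treated here as known rather than estimated by empirical Bayes).

import numpy as np
rng = np.random.default_rng(1)

y = np.arange(10.0, 510.0, 10.0)               # calibration years, every 10 years
gamma0, gamma1 = 3000.0, -1.0                  # known drift (arbitrary illustrative values)
s2, nu = 4.0, 21.0**2                          # Wiener variance rate; lab variance (21 yr sd)
m = gamma0 + gamma1 * y                        # a priori mean of beta
K = s2 * np.minimum.outer(y, y)                # Wiener covariance: s2 * min(y_i, y_j)

L = np.linalg.cholesky(K + 1e-9 * np.eye(y.size))
beta = m + L @ rng.standard_normal(y.size)     # a "true" curve drawn from the prior
z = beta + np.sqrt(nu) * rng.standard_normal(y.size)   # noisy calibration data

S = K + nu * np.eye(y.size)
post_mean = m + K @ np.linalg.solve(S, z - m)  # the Bayesian calibration curve
post_cov = K - K @ np.linalg.solve(S, K)       # a posteriori covariances of the beta_i
print(np.round(post_mean[:5], 1))
print(np.round(np.sqrt(np.diag(post_cov))[:5], 1))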
For each observation the a posteriori density was calculated, and the credible set was the set of years for which the density was above a threshold such that the total a posteriori probability was 0.9. The MLE was calculated, and the frequentist confidence interval was the interval centered at the MLE with length equal to the total length of the Bayesian credible set. Note that the frequentist confidence set is an interval, while typically the Bayesian credible set is not an interval but a union of a few intervals, and hence its range may be larger than the frequentist one. The estimates were simulated 5000 times (with a new set of calibration data every 20 simulations). The results are given in Table <ref>. The main conclusion from the table is that, for the simulated model, the Bayesian credible sets are too large, and the simpler frequentist model gives no worse coverage with shorter intervals. It could be argued that the over-coverage of the Bayesian credible sets occurs because the `true' calibration curve does not fit the prior. This is not the case. One problem with Bayesian credible sets is that their coverage is as prescribed only `on the average', where the average includes averaging over the possible states of Nature as prescribed by the prior; in other words, the average is over what the researcher has imagined to be possible. Even in the stationary situation we consider, the coverage depends on the real date of the sample; thus the behavior is expected to be different for samples taken at different times, as the following example shows. The true calibration curve was now sampled exactly from the prior. The Bayesian calculations were exact, but we obtained coverage that depends on the time. Again we conducted a Monte Carlo experiment in which one true calibration curve was sampled from the prior, and 5000 observations on the calibration curve and `archaeological' samples were taken. See Figure <ref> and Table <ref> for the results. When I sampled from a proper prior but with a Gaussian process whose variance changes in time (which fits the data better than a stationary variance), the results were even more extreme. We conclude from this analysis that the Bayesian analysis of the unique true calibration curve is not justified, and may lead to false credible sets. In Section <ref> the calibration curve for Tel Rehov, restricted to the period surrounding the 10th century BCE, is considered. I use there a simpler approach, which is the one supported by the data and which fits standard good statistical practice. I turn now to some data from Tel Rehov. In the discussion I avoid considering the scientific (i.e., archaeological) interpretations and arguments, and take the simple description of the data as given.
§ DATA ANALYSIS: TEL REHOV We move now to the analysis of the data from Tel Rehov, assuming that they represent 4 consecutive periods; the real interest is in the transition times. We start with a short introduction to the data we use. Different researchers have faced the difficulty that the standard calibration lines of the tenth century BCE are not even monotone (Bruins et al. 2003; Mazar et al. 2005; van der Plicht and Bruins 2005; Sharon et al. 2007; Mazar and Streit 2016). In the following we argue why we believe that, by common statistical practice, we can assume that this is not the case. Archeological data are susceptible to statistical gross errors, for example seeds that were found in an anachronistic layer; I discuss in Section <ref> a standard approach for dealing with outliers. We then compare frequentist and Bayesian analyses of the data. §.§ Background The discussion of the absolute chronology of the Israeli Iron Age is of special interest because of its importance to the interpretation of biblical and extra-biblical sources (Mazar 2005; Sharon et al. 2007). The center of this debate is whether the Iron Age I/IIA transition happened early or late in the tenth century BCE. The importance of this dating stems from two assumptions. First, the description of the United Monarchy of David and Solomon could fit the destruction of the tribal society of Iron I by David, and the construction of the Iron IIA sites by Solomon. Second, the inner biblical chronology, combined with the existing extra-biblical anchors, dates the Monarchy to the early tenth century (Sharon et al. 2007). The conflict is between the proponents of the high chronology, which puts the transition at the early tenth century and makes the Biblical story feasible, and those who believe in the low chronology, which moves it up to the early ninth century, contradicting the Bible (Mazar 2005; Sharon et al. 2007; Finkelstein 1995, 1996). The difficulty with this period is that areas like Greece and Cyprus lack real chronological anchors, and thus their chronology depends heavily on ^14C dating from the Israeli sites (Boaretto et al. 2005). A key anchor is the invasion by Pharaoh Shoshenq I (Shishak) in 925 BCE. It is mentioned both in Egyptian inscriptions and in the Hebrew Bible. The list of places raided by Shoshenq I, mentioned at Karnak (Egypt), includes Rehov (Bruins et al. 2003). The Bible dates Shishak's invasion to 5 years after the death of Solomon. Tel Rehov is the largest mound in the Beth-Shean Valley, 6 km west of the Jordan River and 5 km south of Tel Beth-Shean. It includes layers from the Late Bronze Age to the Early Islamic period. Excavations of Tel Rehov were directed by Amihai Mazar of the Hebrew University of Jerusalem between the years 1997 and 2011 (Mazar 2013). Tel Rehov is characterized by a dense stratigraphy of archeological levels from Iron I and Iron IIA, and by itself Tel Rehov provides the largest number of dates of any site in the Levant (Mazar and Streit 2016). Our interest is only in the Iron I and IIA findings, and unlike Sharon et al. (2007), I do not consider them in the wider scope of findings from these periods at other Israeli sites. Several occupation phases of the Iron Age IB city (from the twelfth to the tenth centuries BCE) were found, with different architectural characters. A massive building in Stratum D-5 appears to be a storage building. Adjacent and later buildings in Stratum D-4 were regular dwellings.
Stratum D-3 is from the end of the period and includes more than 50 pits cut into the previous D-4 building, which were probably used for food storage (Mazar 2013, and references therein). Area C is characterized by successive floor layers of an open area. The findings include Iron I pottery and pottery from the Coastal Plain (Mazar 2013, and references therein). The main period exposed and studied is the Iron IIA city. Three general strata, VI, V, and IV, were defined. A large number of radiocarbon dates of short-lived samples were measured (Mazar 2013, and references therein). I used the data given in Mazar and Streit (2016) and described there in full. 161 determinations were divided into samples R1 to R48; for some of them several repetitions were measured. The findings were analyzed in different laboratories (Rehovot, Tucson, and Groningen), some of them by the Iron Age Dating Project (Sharon et al. 2007). The data set I analyzed is composed of 32 samples and a total of 86 measurements. The data are grouped into 4 groups, assumed to be consecutive; see Mazar (2005), Bruins et al. (2005), Mazar (2013), and Mazar and Streit (2016).[Omitted from this discussion are Samples R1–R3 from Stratum D-6; Sample R17, an outlier from Area B; Samples R21 and R22, which are either from D-3 or D-2; R23, which is either from D-2 or D-1; and R30–R34 from Building CG, where attribution to either Stratum C-1b or C-1a was not decisive.]: * Stratum D-4 and D-3 (R4–R16). Area D includes layers from the Late Bronze Age to the Iron Age IIA. In Stratum D-4 well-defined Iron Age IB pottery was found. Floor surfaces were excavated in the street as well as in a courtyard; the floor surface was raised from time to time.* Stratum C-2 (Stratum VI, R18–R20). In Area C buildings destroyed with their contents were found, including two destruction layers. The three strata VI–IV include mudbrick buildings; Stratum VI includes at least four main units.* Stratum C-1b (Stratum V, R24–R29). The stratum includes an apiary and its surroundings, as well as several other structures. The buildings were built with wooden beams, and the stratum was destroyed by a fierce fire.* Strata C-1a, B-5 and E-1 (Strata V or IV, R35–R43). This period ended with another catastrophic event and abandonment. The findings include characteristic pottery. The full set of observations is presented in Figure <ref>. Each sample is presented as a line centered at the mean of its measurements, with length equal to 4 standard errors of the mean as reported by the laboratories (i.e., approximately the 95% confidence interval of the laboratory measurement). §.§ The calibration line for the tenth century BCE The raw dates in Figure <ref> are expressed in Gregorian calendar years. Unlike the common practice, I use the simple calibration formula Y_ce = 2221.8 - 1.135 Y_bp, where Y_bp is the laboratory measurement in the standard `before present' units, and Y_ce is the year in the Gregorian calendar. This scaling is based on a linear regression approximation of the calibration graph around the time analyzed. I argue that this is common statistical practice, following Occam's razor, since the data do not support a more complex calibration curve. The argument is based on Figure <ref>. In these graphs I present the regression of the data points of IntCal13 (Reimer et al.
2013), in the relevant interval between 1150 and 800 BCE, excluding two isolated outliers at 997 and 1109.5 BCE (which would in any case be tuned down by any robust analysis; see Section <ref>). In particular, I note that the serial correlation of the residuals, 0.14, is not significantly different from 0 (P-value of 0.11, one-sided permutation test), and the P-value is 0.25 if we permute separately the two chronological halves of the data. If anything, there is a significant lab effect (of approximately 14 years). Thus the data do not reject the linear model with independent errors in favor of the more general random walk model of the type considered in IntCal13 (Blackwell and Buck 2008; Heaton et al. 2009; Niu et al. 2013; Reimer et al. 2013). Moreover, it is enough for the validity of the argument that the true relevant calibration curve (if there is one) is just monotone. We should, however, emphasize that some authors worry that the curve is not monotone (Bruins et al. 2003; Mazar et al. 2005; van der Plicht and Bruins 2005; Sharon et al. 2007; Mazar and Streit 2016). For the methodological discussion of this paper, the linear calibration certainly simplifies the discussion without any serious impact on the claims made. §.§ Robust analysis of the data It seems obvious that there are outliers in the data. For example, R8 seems to be too new for its stratum, while R27 and R36 are too old; indeed, R27 is approximately 200 years, or 13 standard errors, before its period. Therefore, assuming a simple-minded Gaussian model may be dangerous: a Gaussian model puts too much weight on remote outliers. In line with common statistical practice, I avoid the outright rejection of outliers, but do not ignore their presence (Huber 2011). The general estimator I consider in the following replaces the mean by the solution μ̂ of ∑_{i=1}^m ψ_c(Y_i - μ̂) = 0, assuming all standard errors are the same (see below for the generalization I use). If ψ_c(x) ≡ x, then μ̂ is the mean. If ψ_c(x) = 1 for x ≥ 0 and -1 for x < 0, then μ̂ is the median. More generally, ψ_c(x) = x for |x| ≤ c, ψ_c(x) = c if x > c, and ψ_c(x) = -c if x < -c. This estimator is the MLE if we assume that the observations follow a density which is Gaussian in the center but has somewhat heavier, exponential, tails; again, the extreme cases are the normal (c = ∞) and the double exponential (c = 0). The rationale for using an estimator based on such a generalization is the boundedness of ψ_c: a large outlier has a bounded effect on the estimator. Huber (1964) proves that ψ_c gives optimal protection against an adversary who can place a few of the observations anywhere he wants, but in a way that is symmetric around the true value. The effect of using ψ_c is that remote points are not removed but pulled closer to the center. §.§ The MLE The maximum likelihood estimation proceeds in two steps: in the outer loop it places the boundaries between the periods, and in the inner loop it maximizes over the time value of each sample given the boundaries of its period. Suppose the data are composed of laboratory measurements Y_gmi, g=1,…,G, m=1,…,M_g, i=1,…,I_gm, with standard errors σ_gm1,…,σ_gmI_gm respectively, where g denotes the period (stratum), m the sample, and i the measurement. Let period g have boundaries τ_g < τ_{g+1}. If the (unique) solution of A_gm(μ) = ∑_{i=1}^{I_gm} (1/σ_gmi) ψ_c((Y_gmi - μ)/σ_gmi) = 0 is between τ_g and τ_{g+1}, then this is the estimate μ̂_gm. If A_gm(τ_{g+1}) > 0 then the estimator is τ_{g+1}, and finally if A_gm(τ_g) < 0 then the estimator is τ_g. Since A_gm(μ) is a decreasing function of μ, only one of these can happen, and hence the estimator is well defined; denote it by μ̂_gm(τ), where τ = (τ_1,…,τ_G).
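A sketch of this inner-loop computation follows (Huber's ψ with an arbitrary c, and bisection, which is valid because A_gm is decreasing in μ; the data are hypothetical).

import numpy as np

def psi(x, c=1.5):                    # Huber's psi: identity in the center, clipped tails
    return np.clip(x, -c, c)

def A(mu, Y, sig, c=1.5):             # the estimating function A_gm of the text
    return np.sum(psi((Y - mu) / sig, c) / sig)

def mu_hat(Y, sig, tau_lo, tau_hi, c=1.5):
    # Robust date estimate for one sample, truncated to its period (tau_lo, tau_hi).
    if A(tau_hi, Y, sig, c) > 0:      # solution lies beyond the upper boundary
        return tau_hi
    if A(tau_lo, Y, sig, c) < 0:      # solution lies below the lower boundary
        return tau_lo
    lo, hi = tau_lo, tau_hi           # bisection, exploiting monotonicity of A in mu
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if A(mid, Y, sig, c) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

Y = np.array([2905.0, 2920.0, 3110.0])   # hypothetical repeats with one gross outlier
print(mu_hat(Y, np.full(3, 15.0), 2880.0, 2960.0))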
Having found the estimators μ̂_gm(τ), m=1,…,M_g, where M_g is the number of samples in stratum g, we can proceed to the next stage. First, the (profile pseudo) log-likelihood of the samples is calculated: l(τ_2,…,τ_G) = -∑_{g=1}^G ∑_{m=1}^{M_g} ∑_{i=1}^{I_gm} ρ_c((Y_gmi - μ̂_gm(τ))/σ_gmi), where ρ_c(x) = -log f_c(x) is the function whose derivative is ψ_c. Then the values of τ_2,…,τ_G that maximize l are found. The maximizing values of the boundaries are given by the horizontal lines of Figure <ref>. The vertical broken lines denote the 4 groups of observations. Thus, if the model is correct, the observations are all supposed to be in the gray areas. The stars denote the values of the μ̂_gm; if a star is on the boundary, the estimate is truncated, and if the full line of an observation is outside the gray area, the observation is an outlier. The profile likelihood of each pair of boundaries is given in Figure <ref>. These graphs describe the likelihood surface as a function of the times of the two boundaries, after maximizing over all the other parameters (the other boundary and the sample times). The darker the color, the higher the likelihood. The vertical and horizontal lines denote the profile maximum likelihood estimators. §.§ Confidence sets and the bootstrap It may seem that Figure <ref> indicates that the transition from D-4 and D-3 to C-2 was around 932 BCE, that the transition from C-2 to C-1b came soon after, and that the transition to E-1b and B-5 was around 906 BCE. However, these are just the maximizing values of the likelihood function. There is, of course, a lot of noise and uncertainty in these estimators, and they depend too much on the particular random values measured. To gauge how much uncertainty there is, I conducted bootstrap simulation studies; Aczel (1995) suggested the application of the bootstrap to archeological data. The logic of the bootstrap is that the error distribution is a smooth function of the true data distribution; hence one can evaluate the error of the inference by considering the error under a distribution that is close to the data distribution. There are two main classes of bootstrap used in practice and below: the nonparametric bootstrap, in which one draws a new sample by sampling with replacement from the observed sample (or some variation of this), and the parametric bootstrap, in which one samples from an estimated parametric model. Bootstrap analysis is usually done under the assumption that the model is regular. However, in this analysis we deal with an irregular model; in particular, the maximum of the likelihood can be on the boundary of the parameter set (because of the order constraints). Yet one can consider the bootstrap as a bagging procedure (Breiman 1996): one does not want the scientific conclusions to depend on the existence or nonexistence of a particular finding, one out of the 31 found. The conclusions should be robust to small changes in the composition of the random items unearthed. For the nonparametric bootstrap 5000 random samples were drawn. Each time, I kept the number of sub-samples fixed at the value of 31, sampling with replacement from the 31 found sub-samples. If one of the strata was empty at this step, which happened with probability 0.04, I randomly added one of its sub-samples. Thus, on average, each sub-sample appeared slightly more than once, but it could appear twice or not at all; one draw of this scheme is sketched below.
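Schematically (labels stand in for the real sub-samples; in the actual computation each label carries its measurements, and the MLE of the previous subsection is recomputed for every draw):

import numpy as np
rng = np.random.default_rng(2)

# One draw of the nonparametric scheme, keeping every stratum non-empty.
strata = {1: ['R4', 'R5', 'R6'], 2: ['R18', 'R19'], 3: ['R24', 'R25']}
pool = [(g, s) for g, ss in strata.items() for s in ss]

draw = [pool[i] for i in rng.integers(0, len(pool), size=len(pool))]  # with replacement
present = {g for g, _ in draw}
for g, ss in strata.items():                 # refill an empty stratum if one occurs
    if g not in present:
        draw.append((g, ss[rng.integers(0, len(ss))]))
print(draw)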
For each bootstrap sample the maximum likelihood estimate was calculated. The next step was locating the 4750 most likely of them. This was done by considering the 4750 points that fall inside an ellipsoid, based on the principal components analysis (PCA) of the correlation matrix of the 5000 estimators (see below), that includes 95% of them. Similar results were obtained with other nonparametric bootstrap schemes (e.g., Poisson sampling of sub-samples, or keeping the strata sizes fixed). The nonparametric bootstrap is justified for simple random samples, which is not the case here. To see how it could go wrong, suppose we had only two periods: 30 sub-samples from the middle of the eleventh century BCE belonging to the first period, 30 sub-samples from the middle of the ninth belonging to the second period, and one sub-sample from each period dated by the lab to the middle of the tenth century. In this case, the two sub-samples from the tenth century are the only informative ones, but in about 60% of the bootstrap samples at least one of them will be missing. Thus the bootstrap would give a misleading picture. This extreme situation is very different from that of Tel Rehov, but to be safe I also used the parametric scheme: observations were sampled from normal distributions with standard deviations as given in the data and means as estimated by the MLE. Again, using PCA of the estimates, the central ellipsoid with 950 bootstrap values was found. The points within this ellipsoid are marked on Figure <ref> as dots, while the circles are the remaining 50 points. Note that the ellipsoid is 3-dimensional, and hence in the two-dimensional projections of the figure there seems to be some mixing of the dots and circles. The meaning of this cloud of points is that any point within the ellipsoid gives a possible scenario, and the available data cannot differentiate between these scenarios. Three points are marked in Figure <ref>, corresponding to a high chronology, a low chronology, and a chronology with a relatively long C-2 period. These 3 points are well inside the above-mentioned ellipsoid. The chronologies themselves are given in detail in Figure <ref>. The data seem to support all of these chronologies, and only scientific (i.e., archaeological) information external to these data can be used to decide between them. It is interesting to consider the principal component analysis of the bootstrap estimates. The 3 new variables found by the PCA, with intervals that include 95% of their values, were, essentially: (1) the length of the C-2 stratum (0 to 21.5 years); (2) the end point of C-1b (920 to 893 BCE); and (3) the middle time of the C-2 stratum (950 to 922 BCE).[The exact normalized eigenvectors of the nonparametric bootstrap correlation matrix were (0.71, -0.71, 0.02), (0.03, 0.01, -1.00), and (0.71, 0.71, 0.03), while those of the parametric bootstrap correlation matrix were (-0.70, 0.70, -0.15), (-0.15, 0.06, 0.99), and (-0.70, -0.71, -0.07).] The first 1000 bootstrap realizations of these variables are given in Figure <ref>. One main conclusion from this analysis is that the start and end of C-1b can be analyzed independently. §.§ Bayesian analysis of the Tel Rehov data Bayesian analysis of these Tel Rehov data is simple. Its main strength is that it solves elegantly the problem of confidence sets: credible sets for any parameter or group of parameters are automatic, intuitive, and efficient on their own terms. Our prior is built in two stages. The first is the a priori assumption on the transition times.
I take them to be uniformly distributed on their domain of definition (the cone of ordered three-dimensional vectors). Given the transition times, I assume, as is quite natural, that the times of the samples are independent and uniform on the interval of their definition. Thus the a posteriori distribution of the 3 boundaries is [pss] π(τ_1,τ_2,τ_3 | data) = c ∏_{s=1}^4 ∏_{m=1}^{M_s} ( (1/(τ_s - τ_{s-1})) ∫_{τ_{s-1}}^{τ_s} ∏_{i=1}^{I_sm} e^{-ρ_c((Y_smi - t)/σ_smi)} dt ), where τ_0 and τ_4 are the fixed outer boundaries, and the normalizing constant c ensures that π(τ_1,τ_2,τ_3 | data) integrates to 1. The results for the Tel Rehov data are presented in Figure <ref>. The black boundaries in this figure denote the smallest sets with 0.95 a posteriori probability (i.e., the sets S_g(p) = {(τ_g, τ_{g+1}) : π_post(τ_g, τ_{g+1}) > p}, where p is the solution of ∫∫_{S_g(p)} π_post(s,t) ds dt = 0.95). If we concentrate on the boundary between C-2 and C-1b, then its values among the bootstrap points within the 95% ellipsoid are in the range 948 to 907 BCE; see Figure <ref>. The credible set is similar, although somewhat earlier and longer: the projection of the credible set in Figure <ref> is 966 to 915 BCE. Thus both analyses indicate that the transition is likely to be in the first part of the second half of the tenth century BCE, but it may be that it was somewhat earlier. However, the right panels of Figures <ref> and <ref> are very different. The transitions from C-2 to C-1b and from C-1b to C-1a & E-1b & B-5 seem to be almost independent in Figure <ref>, while they are far from independent in Figure <ref>. In fact, by the Bayesian analysis C-1b is likely to be very short: the darkest area in the right panel of Figure <ref> is close to the boundary of a zero-length C-1b. The following simple toy example may explain this discrepancy. Suppose we had 4 observations belonging to 3 consecutive strata: the first stratum has one observation at 980 BCE, the second has two observations, at 980 and 920 BCE, and the third has one observation at 920 BCE. The MLE is simple: the second stratum extends from 980 to 920 BCE. The Bayesian analysis, however, is different. If we assume that the prior is uniform within the three strata and flat for the transition times, and that the whole period is between 1250 and 650 BCE, then we obtain that the most likely event is a second stratum of length 0; see Figure <ref> and the numerical sketch below. More technically, the uniform prior is more informative than it seems, as it puts a strong preference on short periods, mainly because of the 1/(τ_s - τ_{s-1}) factors in (<ref>). This can be corrected, for example, by putting an informative prior on the transitions. Bootstrapping the Bayesian model is too computer-intensive: one needs to evaluate the a posteriori distribution at every point for every bootstrap sample. Moreover, the bootstrap seems to be outside the basic philosophy of Bayesian analysis, in particular if the prior is honest, and in our case the statistical model is not regular to begin with. I therefore avoided bootstrapping it. The data I analyzed include a few sample points which seemingly should not influence our understanding of the transition times; here I refer to samples R4, R5, R6, R12, R14, R15, R37, and R39. They are too early or too late to be informative about the start or end of their stratum. Hence one might assume that taking them out, or adding similar data points, would not change the analysis. Similarly, the assumptions on the starting time of group I and the end of group IV should not have an influence on the unrelated events (the end of group I and the beginning of group IV); they are too far apart.
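Returning to the toy example: it is small enough that (<ref>) can be evaluated on a coarse grid. The sketch below does this (σ = 25 years is an arbitrary choice, since the toy example leaves the measurement error unspecified); its mode has a second stratum of essentially zero length, as claimed.

import numpy as np
from scipy.stats import norm

sigma = 25.0                          # arbitrary: the toy example leaves the error unspecified
t0, t3 = 1250.0, 650.0                # fixed outer boundaries (years BCE)
obs = {1: [980.0], 2: [980.0, 920.0], 3: [920.0]}

def marg(y, lo, hi):                  # marginal likelihood: y uniform on (lo, hi) plus error
    return (norm.cdf((hi - y) / sigma) - norm.cdf((lo - y) / sigma)) / (hi - lo)

grid = np.linspace(660.0, 1240.0, 80)
post = np.zeros((grid.size, grid.size))
for i, a in enumerate(grid):          # a = boundary between strata 1 and 2 (BCE)
    for j, b in enumerate(grid):      # b = boundary between strata 2 and 3; need b < a
        if b >= a:
            continue
        p = marg(obs[1][0], a, t0) * marg(obs[3][0], t3, b)
        for y in obs[2]:
            p *= marg(y, b, a)
        post[i, j] = p
i, j = np.unravel_index(post.argmax(), post.shape)
print("mode:", grid[i], grid[j], "BCE; stratum 2 length:", round(grid[i] - grid[j], 1))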
If we look at the expression for the a posteriori distribution, however, we see that neither expectation holds. The integrals over these points are indeed almost independent of τ_1, τ_2, and τ_3, but there is the factor 1/(τ_s - τ_{s-1}) of (<ref>) discussed above. This factor is due only to the prior, and it favors short intervals. To make the situation extreme, I either duplicated each of these points (i.e., to each of the above samples one similar sample was added) or took it out altogether. Duplicating the observations favors short intervals, and removing the irrelevant observations makes long intervals more likely. The results are presented in Figure <ref>. The situation is more extreme if we shorten the first period: in particular, we tentatively assumed that the terminus a quo of the first period is 1025 BCE. In Figure <ref> we test how, under this condition, the removal of the above-mentioned sub-samples influences the conclusions; the top part of the figure is the a posteriori distribution with these data points, and the bottom is without them. § SUMMARY AND CONCLUSION The main purpose of this paper was a critical presentation of the Bayesian analysis of archeological data. This was done through a theoretical discussion of toy models and a concrete analysis of a specific data set, the Iron Age I/IIA findings from Tel Rehov. Our conclusion from these two analyses is that the Bayesian approach is simple, intuitive, and natural. However, its conclusions depend on a prior that, in theory, is supposed to be just that (what the archaeologist thought a priori about the values of the many parameters of the problem), but de facto is what the programmer implemented in the off-the-shelf program. A miscalculated prior can bias the analysis considerably. I believe that robust priors, priors that do not bias the analysis, are conceptually inconsistent, and practically impossible for complex, high-dimensional, and irregular models. The latter are the type of models that should be used for the analysis of the Iron Age of the Levant. The analysis was done on restricted data, only those from Tel Rehov, and concrete conclusions for the scientific analysis of the period are beyond the scope of this paper. However, I believe that the range of interpretations of the analyzed data is presented in Figure <ref>. Four consecutive strata were investigated. The first transition time seems to be any time in the third quarter of the tenth century BCE, while the third was either in the last quarter of the tenth century or the first quarter of the ninth century BCE. The second stratum could have a length of 0 to 20 years, and the second and third strata together could span between twenty-five and more than fifty years. § ACKNOWLEDGMENT I would like to thank Prof. Amihai Mazar, who introduced me to this exciting field and its challenges, and supplied me with the Tel Rehov data. I would also like to acknowledge the contribution of an enlightening conversation with Ilan Sharon. This research was partially supported by ISF grant 1770/15. § REFERENCES * Aczel AD 1995. Improved radiocarbon age estimation using the bootstrap. Radiocarbon 37:845–849.* Abramovich F, Ritov Y 2013. Statistical Theory: A Concise Introduction. Boca Raton, London, New York: CRC Press.* Breiman L 1996. Bagging predictors. Machine Learning 24:123–140.* Bayliss A, Bronk Ramsey C 2004.
Pragmatic Bayesians: a decade of integrating radiocarbon dates into chronological models. In Buck CE and Millard AR, editors, Constructing Chronologies: Crossing Disciplinary Boundaries, Lecture Notes in Statistics 177. London: Springer-Verlag, p 25–41.* Berger JO 1985. Statistical Decision Theory and Bayesian Analysis. New York: Springer.* Berger JO, Berliner LM 1986. Robust Bayes and empirical Bayes analysis with ε-contaminated priors. Ann. Statist. 14:461–486.* Bickel PJ, Doksum KA 2006. Mathematical Statistics: Basic Ideas and Selected Topics, Vol. 1, 2nd edition. Upper Saddle River: Prentice-Hall.* Blackwell PG, Buck CE 2008. Estimating radiocarbon calibration curves. Bayesian Analysis 3:225–48.* Boaretto E, Jull AJT, Gilboa A, Sharon I 2005. Dating the Iron Age I/II transition in Israel: first intercomparison. Radiocarbon 47:39–55.* Bronk Ramsey C 2008. Radiocarbon dating: revolutions in understanding. Archaeometry 50:249–275.* Bronk Ramsey C 2009. Bayesian analysis of radiocarbon dates. Radiocarbon 51:337–360.* Bronk Ramsey C, Lee S 2013. Recent and planned developments of the program OxCal. Radiocarbon 55:3–4.* Bruins HJ, van der Plicht J, Mazar A 2003. ^14C dates from Tel Rehov: Iron-Age chronology, pharaohs, and Hebrew kings. Science 300:315–318.* Bruins HJ, van der Plicht J, Mazar A, Bronk Ramsey C, Manning SW 2005. The Groningen radiocarbon series from Tel Rehov: OxCal Bayesian computations for the Iron IB–IIA boundary and Iron IIA destruction events. In Levy TE and Higham T, editors, The Bible and Radiocarbon Dating. London and Oakville: Equinox.* Buck CE, Christen JA, James GN 1999. BCal: an on-line Bayesian radiocarbon calibration tool. Internet Archaeology 7.* Casella G 1985. An introduction to empirical Bayes data analysis. The American Statistician 39:83–87.* Finkelstein I 1995. The date of the settlement of the Philistines in Canaan. Tel Aviv 22:213–39.* Finkelstein I 1996. The archaeology of the United Monarchy: an alternative view. Levant 28:177–87.* Gelman A, Carlin JB, Stern HS, Rubin DB 2014. Bayesian Data Analysis, volume 2. New York: Chapman and Hall.* Geman S, Geman D 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6:721–741.* Godwin H 1962. Half-life of radiocarbon. Nature 195:984.* Hastings WK 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57:97–109.* Heaton TJ, Blackwell PG, Buck CE 2009. A Bayesian approach to the estimation of radiocarbon calibration curves: the IntCal09 methodology. Radiocarbon 51:1151–64.* Huber PJ 1964. Robust estimation of a location parameter. The Annals of Mathematical Statistics 35:73–101.* Huber PJ 2011. Robust Statistics. New York: Springer.* Lee S, Bronk Ramsey C, Mazar A 2013. Iron Age chronology in Israel: results from modeling with a trapezoidal Bayesian framework. Radiocarbon 55:731–740.* Manning SW 2001. The Absolute Chronology of the Aegean Early Bronze Age. Sheffield: Sheffield Academic Press.* Mazar A 2005. The debate over the chronology of the Iron Age in the southern Levant. In Levy TE and Higham T, editors, The Bible and Radiocarbon Dating. London and Oakville: Equinox.* Mazar A 2013. Rehob. In Master DM, Nakhai BA, Faust A, White LM, and Zangenberg JK, editors, The Oxford Encyclopedia of the Bible and Archaeology, vol. 1. New York: Oxford University Press.* Mazar A, Bruins HJ, Panitz-Cohen N, van der Plicht J 2005.
Ladder of time at Tel Rehov: stratigraphy, archaeological context, pottery and radiocarbon dates. In Levy TE and Higham T, editors, The Bible and Radiocarbon Dating: Archaeology, Text and Science. London: Equinox, p 195–255.* Mazar A, Streit K 2016. Radiometric dates from Tel Reḥov (forthcoming).* Niu M, Heaton TJ, Blackwell PG, Buck CE 2013. The Bayesian approach to radiocarbon calibration curve estimation: the IntCal13, Marine13, and SHCal13 methodologies. Radiocarbon 55.* Press SJ 2003. Subjective and Objective Bayesian Statistics, 2nd edition. Hoboken: Wiley.* Reimer PJ, Bard E, Bayliss A, Beck JW, Blackwell PG, Bronk Ramsey C, Buck CE, Cheng H, Edwards RL, Friedrich M, Grootes PM, Guilderson TP, Haflidason H, Hajdas I, Hatté C, Heaton TJ, Hoffman DL, Hogg AG, Hughen KA, Kaiser KF, Kromer B, Manning SW, Niu M, Reimer RW, Richards DA, Scott EM, Southon JR, Staff RA, Turney CSM, van der Plicht J 2013. IntCal13 and Marine13 radiocarbon age calibration curves 0–50,000 years cal BP. Radiocarbon 55.* Ritov Y, Bickel PJ, Gamst AC, Kleijn BJK 2014. The Bayesian analysis of complex, high-dimensional models: can it be CODA? Statist. Sci. 29:619–639.* Robert CP 2007. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. New York: Springer.* Sharon I, Gilboa A, Jull AJT, Boaretto E 2007. Report on the first stage of the Iron Age dating project in Israel: supporting a low chronology. Radiocarbon 49:1–46.* Steier P, Rom W 2000. The use of Bayesian statistics for ^14C dates of chronologically ordered samples: a critical analysis. Radiocarbon 42:183–198.* Stuiver M, Polach HA 1977. Discussion: reporting of ^14C data. Radiocarbon 19:355–363.* Tuo R, Wu CFJ 2015. Efficient calibration for imperfect computer models. Annals of Statistics, to appear.* van der Plicht J, Bruins HJ 2005. Quality control of Groningen ^14C results from Tel Rehov. In Levy TE and Higham T, editors, The Bible and Radiocarbon Dating. London and Oakville: Equinox.
http://arxiv.org/abs/1704.08479v1
{ "authors": [ "Ya'acov Ritov" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20170427084550", "title": "The utility of a Bayesian analysis of complex models and the study of archeological ${}^{14}$C data" }
http://arxiv.org/abs/1704.08948v1
{ "authors": [ "Shahrdad G. Sajjadi", "Harihar Khanal" ], "categories": [ "nlin.PS", "physics.flu-dyn" ], "primary_category": "nlin.PS", "published": "20170426234950", "title": "Interaction of Tollmien-Schlichting Waves in the Air with the Sea Surface" }
We provide a robust and general algorithm for computing distribution functions associated to induced orthogonal polynomial measures. We leverage several tools for orthogonal polynomials to provide a spectrally-accurate method for a broad class of measures, which is stable for polynomial degrees up to at least degree 1000. Paired with other standard tools such as a numerical root-finding algorithm and inverse transform sampling, this provides a methodology for generating random samples from an induced orthogonal polynomial measure. Generating samples from this measure is one ingredient in optimal numerical methods for certain types of multivariate polynomial approximation. For example, sampling from induced distributions for weighted discrete least-squares approximation has recently been shown to yield convergence guarantees with a minimal number of samples. We also provide publicly-available code that implements the algorithms in this paper for sampling from induced distributions. § INTRODUCTION Let μ be a probability measure on ℝ such that a family of L^2_μ-orthonormal polynomials {p_n}_{n=0}^∞ can be defined.[See Section <ref> for technical conditions implying this.] The non-decreasing function F_n(x) = ∫_{-∞}^x p_n^2(t) dμ(t), x ∈ ℝ, is a probability distribution function on ℝ since p^2_n has unit μ-integral over ℝ. This paper is chiefly concerned with developing algorithms for drawing random samples of a random variable whose cumulative distribution function is F_n. The high-level algorithmic idea is straightforward: develop robust algorithms for evaluating F_n, and subsequently use a standard root-finding approach to compute F_n^{-1}(U), where U is a continuous uniform random variable on [0,1]. (This is colloquially called “inverse transform sampling".) The challenge that this paper addresses is in the computational evaluation of F_n(x) for any n ∈ ℕ_0 and for relatively general μ. Borrowing terminology from <cit.>, we call F_n the order-n distribution induced by μ. In our algorithmic development, we focus on three classes of continuous measures μ from which induced distributions spring: (1) Jacobi distributions on [-1,1], (2) Freud (i.e., exponential) distributions on ℝ, and (3) “half-line" Freud distributions on [0, ∞). These measures encompass a relatively broad selection of continuous measures μ on ℝ. The utility of sampling from univariate induced distributions has recently come to light: the authors in various papers <cit.> note that additive mixtures of induced distributions are optimal sampling distributions for constructing multivariate polynomial approximations of functions using weighted discrete least-squares from independent and identically-distributed random samples. “Optimal" means that these distributions define a sampling strategy which provides stability and accuracy guarantees with a sample complexity that is currently thought to be the best (smallest). This distribution also arises in related settings <cit.>. The ability to sample from an induced distribution, which this paper addresses, therefore has significant importance for multivariate applied approximation problems. Induced distributions can also help provide insight for more theoretical problems.
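Schematically, the sampling step above couples a root-finder with any routine that evaluates a distribution function. The sketch below uses a placeholder CDF (a Beta distribution) purely for illustration; the contribution of this paper is the robust evaluation of F_n that would take its place.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import beta

def sample(F, a, b, n, seed=0):
    # Inverse transform sampling: solve F(x) = u by bracketing root-finding on [a, b].
    rng = np.random.default_rng(seed)
    return np.array([brentq(lambda x, v=v: F(x) - v, a, b) for v in rng.random(n)])

F = lambda x: beta.cdf(x, 3, 2)       # placeholder CDF standing in for F_n
print(sample(F, 0.0, 1.0, 5))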
The weighted pluripotential equilibrium measure is a multivariate probability measure that describes asymptotic distributions of optimal sampling points <cit.>. However, an explicit form for this measure is not known in general. The authors in <cit.> make conjectures about the Lebesgue density associated to equilibrium measure in one case when its explicit form is currently unknown. While these conjectures remain unproven, univariate induced distributions can be used to simulate samples from the equilibrium measure. Hence, sampling from induced distributions can be used to provide supporting evidence for the theoretical conjectures in <cit.>. The outline of this paper is as follows: in Section <ref> we review many standard properties of general orthogonal polynomial systems that are exploited for computing induced distributions. Section <ref> contains a detailed discussion of our novel approach for computing F_n(x) for three classes of measures; this section also utilizes potential theory results in order to approximate F_n^-1(0.5). Section <ref> uses the previous section's algorithms in order to formulate an algorithmic strategy for computing F_n^-1(u), u ∈ [0, 1]. Finally, Section <ref> discusses the above-mentioned applications of multivariate polynomial approximation using discrete least-squares, and investigating conjectures for a weighted equilibrium measure. Code that reproduces many of the plots in this paper is available for download <cit.>. The code contains routines for accomplishing almost all of the procedures in this paper including evaluation and inversion of induced distributions (for many of the distributions in Table <ref>), inverse transform sampling for multivariate sampling from additive mixtures of induced distributions, and fast versions of all codes that utilize approximate monotone spline interpolants for fast evaluation and inversion of distribution functions. The code also contains routines that reproduce Figure <ref> (left, center), Figure <ref> (right), Figure <ref> (right), Figure <ref> (left), Figure <ref> (left), and Figure <ref> (left).§.§ A simple example The main algorithmic novelties of this paper revolve around evaluation of F_n(x).In Figure <ref> we show one example of the integrand p_n^2(x) μ(x) and the associated F_n. One suspects that packaged integration routines should be able to perform relatively well in order to compute integrals for such a problem. In our experience this is frequently true, but comes at a price of increased computational effort and time, and decreased robustness. The right-hand pane in Figure <ref> shows timings for Matlab's built-inroutine versus the algorithms developed in this paper. We see that the algorithms in this paper are much faster, usually resulting in around an order of magnitude savings.Our experimentation (using Matlab's ) also reveals the following advantages of using the specialized algorithm in this paper: * The results from our algorithm appear to be more accurate compared to standard routines, and use significantly less computation. This conclusion is based on our testing with , and is true even if one modifies algorithm tolerances in .* We have occasionally observedreturn non-monotonic evaluations for the distribution F_n. This typically happens when most of the mass of the integrand is concentrated far from the boundary of the support of μ. 
Non-monotonic behavior causes problems in performing inverse transform sampling.
* When μ(x) has a singularity at boundaries of the support of μ, then the built-in routine frequently complains about singularities and failure to achieve error tolerances.

§.§ Historical discussion

The distribution function of the arcsine or “Chebyshev” measure is

F(x) = 1/π ∫_-1^x 1/√(1 - t^2) dt = 1/2 + 1/π arcsin(x), x ∈ [-1,1].

It was shown in <cit.> that if μ belongs to the Nevai class of measures, then F_n(x) → F(x) pointwise for all x ∈ [-1,1]. Further refinements on this statement were made in <cit.>, which generalized the class of measures for which convergence holds. Generating polynomials orthogonal with respect to the measure associated to F_n is considered in <cit.>, with a generalization given in <cit.>. The authors in <cit.> proposed sampling from an additive mixture of induced distributions using a Markov Chain Monte Carlo method for the purposes of computing polynomial approximations of functions via discrete least-squares; the work in <cit.> investigates sampling from the n-asymptotic limit of these additive mixtures. The authors in <cit.> leverage the additive mixture property to sample from this distribution using a monotone spline interpolant. At the very least, this latter method requires multiple evaluations of F_n(x). To our knowledge there has been essentially no investigation into robust algorithms for the evaluation of F_n for broad classes of measures, which is the subject of this paper.

§ BACKGROUND

§.§ Orthogonal polynomials

This section contains classical knowledge, most of which is available from any seminal reference on orthogonal polynomials <cit.>. Let μ be a Lebesgue-Stieltjes probability measure on ℝ, i.e., the distribution function

F(x) = ∫_-∞^x dμ(t)

is non-decreasing and right-continuous on ℝ, with F(-∞) = 0 and F(∞) = 1. For any distribution F we use the notation F^c(x) := 1 - F(x) for its complementary function. We assume that μ has an infinite number of points of increase, and has finite polynomial moments of all orders, i.e.,

|∫_ℝ x^n dμ(x)| < ∞, n = 0, 1, ….

Under these assumptions, a sequence of orthonormal polynomials {p_n}_n=0^∞ exists, with deg p_j = j, satisfying

∫_ℝ p_j(x) p_k(x) dμ(x) = δ_k,j,

where δ_k,j is the Kronecker delta function. We will write p_n(·) = p_n(·; μ) to denote explicit dependence of p_n on μ when necessary. Such a family can be mechanically generated by iterative application of a three-term recurrence relation:

x p_n(x) = √(b_n) p_n-1(x) + a_n p_n(x) + √(b_n+1) p_n+1(x),

where the recurrence coefficients a_n and b_n are functions of the moments of μ. The initial conditions p_-1 ≡ 0 and p_0 ≡ 1 are used to seed the recurrence. With p_n defined in this way, the (positive) leading coefficient of p_n has value

γ_n := ∏_j=0^n 1/√(b_j), p_n(x) = γ_n x^n + ⋯.

The polynomial p_n has n real-valued, distinct roots lying inside the support of μ, and these roots {x_k}_k=1^n are nodes for the Gaussian quadrature rule,

∫_ℝ f(x) dμ(x) = ∑_k=1^n w_k f(x_k), f ∈ span{1, x, x^2, …, x^2n-1},

where the weights w_k are unique and positive. These nodes and weights can be computed having knowledge only of the recurrence coefficients a_k and b_k; numerous modern algorithms accomplish this, with a historically significant procedure given in <cit.>.

§.§ Induced orthogonal polynomials and measures

With p_n the orthonormal polynomial family with respect to μ, the collection of polynomials orthogonal with respect to the weighted distribution p_n^2(x) μ(x) with n fixed are called induced orthogonal polynomials.
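As a concrete illustration of the mechanical procedure above, the following Python sketch evaluates p_0, …, p_N at a set of points; the array names a and b are our own and are assumed to hold the coefficients a_0, …, a_N and b_0, …, b_N. The seed p_0 = b_0^{-1/2} reduces to p_0 ≡ 1 for a probability measure (where b_0 = 1) and also covers unnormalized measures whose total mass is carried in b_0, as arises for the modified measures used later.

```python
import numpy as np

def eval_orthonormal_polys(x, a, b):
    """Evaluate p_0(x), ..., p_N(x) through the three-term recurrence
        x p_n = sqrt(b_n) p_{n-1} + a_n p_n + sqrt(b_{n+1}) p_{n+1}.
    a : coefficients a_0, ..., a_N;  b : coefficients b_0, ..., b_N.
    Returns an array of shape (N+1, len(x)).
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    N = len(a) - 1
    p = np.zeros((N + 1, x.size))
    p[0] = 1.0 / np.sqrt(b[0])               # equals 1 when b_0 = mu(R) = 1
    if N >= 1:
        p[1] = (x - a[0]) * p[0] / np.sqrt(b[1])
    for n in range(1, N):
        p[n + 1] = ((x - a[n]) * p[n] - np.sqrt(b[n]) * p[n - 1]) / np.sqrt(b[n + 1])
    return p
```

Routines of this form underlie all of the evaluations in the sequel, including those for the induced polynomials just defined.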
We adopt this terminology from <cit.>. Define the Lebesgue-Stieltjes measure μ_n and its associated distribution function F_n by

F_n(x; μ) = F_n(x) := ∫_-∞^x dμ_n(t) = ∫_-∞^x p_n^2(t) dμ(t),

with p_n(·) = p_n(·; μ) the orthonormal family for μ. Note that μ_n ≪ μ, and F_n(∞) = μ_n(ℝ) = 1, so that μ_n is also a probability measure. The measure μ_n has its own three-term recurrence coefficients a_j,n and b_j,n for j = 0, 1, … that define a new set of L^2_μ_n(ℝ)-orthonormal polynomials, which can be generated through the corresponding version of the mechanical procedure (<ref>). One such procedure for generating these coefficients is given in <cit.>.
We will call the measure μ_n the (order-n) induced measure for μ, and F_n the corresponding (order-n) induced distribution function. Our main computational goal is, given u ∈ [0,1], the evaluation of F_n^-1(u) for various measures μ. The overall algorithm for accomplishing this is a root-finding method, e.g., bisection or Newton's method. Thus, the goal of finding F^-1_n(u) also involves the evaluation of F_n(x), which is the focus of Section <ref>. A good root-finding algorithm also requires a reasonable initial guess for the solution. This initial guess is provided by the methodology in Section <ref>.

§.§ Measure modifications

Our algorithms rely on the ability to compute polynomial measure modifications. That is, given the three-term coefficients a_n and b_n for μ, to compute the coefficients ã_n and b̃_n for μ̃ defined as

μ̃(x) = p(x) μ(x),

where p(x) is a polynomial, non-negative on the support of μ. This is a well-studied problem <cit.>. In particular, one may reduce the problem to iterating over modifications by linear and quadratic polynomials. We describe in detail how to accomplish linear and quadratic modifications in the appendix, with the particular goal of structuring computations to avoid numerical under- and over-flow when n is large.
The following computational tasks described in the Appendix are used to accomplish measure modifications.
* (Appendix <ref>) Evaluation of r_n, the ratio of successive polynomials in the orthogonal sequence:

r_j(x) := p_j(x)/p_j-1(x).

Above, we require x to lie outside the zero set of p_j-1.
* (Appendix <ref>) Evaluation of a normalized or weighted degree-n polynomial:

C_n(x) := p_n(x)/√(∑_j=0^n-1 p_j^2(x)), n > 0, x ∈ ℝ.

Note that C_n(x)/r_n(x) ∼ 1 for large enough |x|.
* (Appendix <ref>) Polynomial measure modifications: given μ and its associated three-term recurrence coefficients a_n and b_n, computation of the three-term recurrence coefficients associated with the measures μ̃ and μ̂, defined as

μ̃(x) = ±(x - y_0) μ(x), y_0 ∉ supp μ,
μ̂(x) = (x - z_0)^2 μ(x), z_0 ∈ ℝ.

The ± sign in μ̃ is chosen so that μ̃ is a positive measure.

§ EVALUATION OF F_N

This section develops computational algorithms for the evaluation of the induced distribution F_n defined in (<ref>). These algorithms depend fairly heavily on the form of a Lebesgue density μ(x) (i.e., a positive weight function) for the measure μ on ℝ, but the ideas can be generalized to various measures.
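To make the first of these tasks concrete, the following Python sketch (with a hypothetical function name) evaluates the ratios r_j through the recurrence implied by (<ref>), which is the numerically safe route when x lies far outside supp μ. The coefficient indexing here is the one implied by the recurrence of Section 2.1, where a_{j-1} multiplies p_{j-1}; the seed r_0 = ∞ is an implementation device encoding p_{-1} = 0 and yields the correct r_j for all j ≥ 1.

```python
import numpy as np

def eval_poly_ratios(x, a, b, n):
    """Ratios r_j(x) = p_j(x)/p_{j-1}(x) for j = 1..n at a scalar x lying
    outside the zero set of p_0, ..., p_{n-1}.  Seeding r_0 = inf encodes
    p_{-1} = 0, so a single recurrence covers j = 1 as well.  This avoids
    the overflow of evaluating p_j(x) itself far outside supp(mu).
    """
    r = np.empty(n + 1)
    r[0] = np.inf                 # makes sqrt(b_0)/r_0 -> 0, i.e. p_{-1} = 0
    for j in range(1, n + 1):
        r[j] = (x - a[j - 1] - np.sqrt(b[j - 1]) / r[j - 1]) / np.sqrt(b[j])
    return r
```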
We consider the classes of weights enumerated in Table <ref>: * (Jacobi weights) μ_J(x) = (1-x)^α (1+x)^β for x ∈ [-1,1] with parameters α, β > -1.* (Freud weights) μ_F(x) = |x|^ρexp(-|x|^α) for x ∈ with parameters α > 0, ρ >-1.* (half-line Freud weights) μ_HF(x) = x^ρexp(-x^α) for x ∈ [0, ∞) with parameters α > 0, ρ >-1.The induced distribution F_n for Freud weights can actually be written explicitly in terms of the corresponding induced distribution for half-line Freud weights, so most of the algorithm development concentrates on the Jacobi and half-line Freud cases. The strategy for these two latter cases is essentially the same: with μ in one of the classes above and n fixed, we divide the computation into one of two approximations, depending on the value of x. Each approximation is accurate for its corresponding values of x. Formally, the algorithm is F_n(x)= {[ F_n(x), x ≤ x_0(μ, n); 1 - F_n^c(x), x > x_0(μ, n) ].where F_n(x) and F^c_n(x) represent computational approximations to F_n(x) and F^c_n(x), respectively, and are the outputs from the algorithms that we will develop. A pictorial description of this is given in Figure <ref> (left). The constant x_0 is ideally F_n^-1(0.5); since we cannot know this value a priori, we use potential-theoretic arguments to compute a value x_0 = x_0(μ, n) approximating the median of F_n,F_n(x_0) ≈1/2.For our choice of x_0, we can provide no estimates for this approximate equality, but empirical evidence in Figure <ref> (right) shows that our choices are very close to the real median, uniformly in n. The coming sections concentrate on, for each class of μ mentioned above, specifying x_0 and detailing algorithms for F_n and F_n^c. §.§ Jacobi weightsWe consider computing induced distribution functions for Jacobi measures μ^(α,β)_J as defined in Table <ref>. When circumstances are clear, we will write μ_J^(α,β) = μ to avoid notational clutter. We seek the distribution function of the induced measure, μ_J,n^(α,β)( [-1, x] ) = μ_n ( [-1, x] ) = F_n(x)∫_-1^x p^2_n(t) μ(t).To compute F_n, we specify an approximate median x_0 = x_0(μ, n) satisfying (<ref>), and construct algorithmic procedures to evaluate F_n(x) ≈ F_n(x) (for x ≤ x_0) and F_n^c(x) ≈ F_n^c(x) (for x > x_0). Having specified these, we use (<ref>) to compute our approximation to F_n.§.§.§ Approximating x_0(n) With α, β > -1 and n ∈ fixed, consider the measure μ_J^(α/2n, β/2n). Note that this measure is still a Jacobi measure since α/2 n > -1 and β/2 n > -1. We may rewrite the integrand in (<ref>)p_n^2(t) μ_J^(α,β)(t) = [ p_n(t) (μ_J^(α/2n, β/2n)(t))^n]^2.The quantity under the square brackets on the right-hand side is, in the language of potential theory, a weighed polynomial of degree n. One result in potential theory characterizes the “essential" support of this weighted polynomial; in particular, the weighted polynomial decays quickly outside this support. The essential support is an interval, and we take the median x_0 of the induced measure μ_n to be the centroid of this interval.The essential support of the weighted polynomial above is demarcated by the Mhaskar-Rakhmanov-Saff numbers for the asymmetric weight μ_J^(α/2n, β/2n) on [-1,1]. These numbers for this weight are computed explicitly in <cit.>. 
When α, β≥ 0, this support interval is [ θ - Δ, θ + Δ], withθ = β^2 - α^2/(2 n + α + β)^2,Δ = 4 √(n(n + α + β)(n + α) (n + β))/(2 n + α + β)^2 Since this is the interval where most of the “mass" of the integral in (<ref>) lies, we set x_0 to be the centroid θ of this interval:x_0(μ^(α,β),n)= β^2 - α^2/(2 n + α + β)^2,α, β > -1, n > 0.Note that the definitions of θ, Δ require α and β to be non-negative. Without mathematical justification, we extend the formula (<ref>) to valid negative values of α, β as well. Figure <ref> compares x_0 and F_n^-1(0.5) for certain choices of α and β.§.§.§ Computing F_n(x)First assume that x ≤ x_0, with x_0 defined in (<ref>), and define A ∈_0 as A⌊ |α| ⌋,α - A∈ (-1,1),where ⌊·⌋ is the floor function. We transform the integral (<ref>) over [-1,x] onto the standard interval [-1,1] via the substitution u = 2/x+1 (t+1) - 1:F_n(x)= 1/c_J^(α,β)∫_-1^x p_n^2(t) (1 - t)^α (1 + t)^βt= (x+1/2)^β+1c_J^(0,β)/c_J^(α,β)∫_-1^1 (2 - 1/2(u+1)(x+1))^α-A U_2n+A(u) μ^(0,β)(u),where U_2n+A(u) is a degree-(2n+A) polynomial given by U_2n+A(u)= p_n^2( 1/2(u+1)(x+1) - 1) (2 - 1/2(u+1)(x+1))^A = γ^2_n (x+1/2)^2n+A∏_k=1^A [( 3-x/1+x) - u ]_(a)∏_j=1^n (u - ( 2/x+1( x_j,n + 1 ) - 1) )^2_(b),where {x_j,n}_j=1^n are the n zeros of p_n(·). Since we explicitly know the polynomial roots of U_2n+A, we can absorb the term marked (a) into the measure μ^(0,β)(u) via A linear modifications, and we can likewise absorb the term (b) via n quadratic modifications. (See Appendix <ref>.) Thus, defineμ_n(u) = U_2n+A(u) μ^(0,β)(u),which is a modified measure whose recurrence coefficients can be computed via successive application of the linear and quadratic modification methods in Appendix <ref>. Thus,F_n(x) = (x+1/2)^β+1c_J^(0,β)/c_J^(α,β)∫_-1^1 (2 - 1/2(u+1)(x+1))^α-Aμ_n(u).The integrand above has a root (α > 0) or singularity (α < 0) at u = 3-x/x+1 = 1 + 2 1-x/1+x≥ 1 + 2 1-x_0/1+x_0; this root is far outside the interval [-1,1] unless β is very large and both n and α are small. The integrand is therefore a positive, monotonic, smooth function on [-1,1], taking values between 1-x and 2; we use an order-M μ_n-Gaussian quadrature to efficiently evaluate it. With (u_m, w_m)_m=1^M denoting the nodes and weights, respectively, of this quadrature rule, we computeI(x)∑_m=1^M w_m (2 - 1/2(u_m+1)(x+1))^α-A,F_n(x)= (x+1/2)^β+1I(x)/2^a (β + 1) B(β+1, α+1)The entire procedure is summarized in Algorithm <ref>.In order to compute F^c_n(x) for x > x_0, we use symmetry. Since x-x interchanges the parameters α and β, thenF_n^(α,β),c(x) = ∫_x^1 [ p_n^(α,β)(t) ]^2 μ^(α,β)(t) = ∫_-1^-x[ p_n^(β,α)(t)]^2 μ^(β,α)(t) = F_n^(β,α)(-x).Note that if x > x_0(μ^(α, β), n), then -x < x_0 (μ^(β, α), n). Thus, F_n^c can be computed via the same algorithm for F_n, but with different values for α, β, and x. This is also shown in Algorithm <ref>. [H] InputinputOutputoutput α, β > -1: Jacobi polynomial parameters n∈_0 and x ∈ [-1,1]: Order of induced polynomial and measure μ_n and value x. M ∈: Quadrature order for approximate computation of F_n(x). 
F_n(x; μ^(α, β)) If x > x_0(μ(α,β), n), return 1 - F_n(-x; μ^(β, α))Compute n zeros, {x_j,n}_j=1^n of p_n = p_n(·; μ^(α,β)), and leading coefficient γ_n of p_n.Compute recurrence coefficients a_j and b_j associated to μ^(0,β) for 0 ≤ j ≤ M + A + 2 n.j=1, …, n Quadratic measure modification (<ref>): update a_n, b_n for n=0, …, M + A + 2(n-j) with modification factor (u - ( 2/x+1(x_j,n + 1) - 1))^2.k=1, …, A Use linear modification (<ref>) to update a_n, b_n for n=0, …, M + (A-k) with modification factor (u - (3-x/1+x)).b_0b_0 (x+1/2)^2n + Aγ_n^2Compute M-point Gauss quadrature (u_m, w_m)_m=1^M associated with measure μ_n via {(a_j, b_j)}_j=0^M.Compute the integral I in (<ref>), and return F_n(x; μ^(α,β)) given by (<ref>). Computation of F_n(x), approximating F_n(x) for μ corresponding to a Jacobi polynomial measure.With μ^(α,β), n ∈, and x ∈ [-1,1] all given, assume that x ≤ x_0 with x_0 as in (<ref>). Then the output F_n(x) from Algorithm <ref> using an M-point quadrature rule satisfies| F_n(x) - F_n(x) | ≤ C(α,β,n,M) ∏_j=0^M b_j(μ_n),where b_j(μ_n) are the b_j three-term recurrence coefficients associated to the x-dependent measure μ_n defined in (<ref>). The constant C isC(α,β,n,M) = 2^β+1-A/(β+1) B(β+1,α+1)( x_0(n) + 1/4)^2 M+β+1 Note that C(α,β,n,M) on the right-hand side of (<ref>) is explicitly computable once α, β, M, and n are fixed, and only the product involving b_j(μ_n) depends on x. Furthermore, this last quantity is explicitly computed in Algorithm <ref>, so that a rigorous error estimate for the algorithm can be computed before its termination.Since x_0+1/4 < 1/2, then the estimate (<ref>) also hints at exponential convergence of the quadrature rule strategy, assuming that the factors b_j(μ_n) can be bounded or controlled. We cannot provide these bounds, although we do know the asymptotic behavior b_j(μ_n) →1/4 as j →∞ <cit.>. Also, since x_0 ∈ [-1,1] for any n > 0 then, uniformly in n > 0, C(α, β, n, M) ≤ C'(α, β) 4^-M.Thus, for a fixed x we expect that the estimate in Theorem <ref> behaves like| F_n(x) - F_n(x) | ≤ C(α,β,n,M) ∏_j=0^M b_j(μ_n) ≲ C”(α, β, x) 4^-2M,showing exponential convergence with respect to M. However, we cannot prove this latter statement. The result is a relatively straightforward application of known error estimates for Gaussian quadrature with respect to non-classical weights. We use the notation of Algorithm <ref>: u_m and w_m denote the M-point μ_n-Gaussian quadrature nodes and weights, respectively. We start with the Corollary to Theorem 1.48 in <cit.>, stating that if f(·) is infinitely differentiable on [-1,1], then | ∫_-1^1 f(u) μ_n(u) - ∑_j=1^M w_m f(u_m) | = f^(2M)(τ)/(2 M)!∫_-1^1 ∏_j=1^M ( u - u_m)^2 μ_n(u)for some τ∈ (-1,1). Noting that since u_m are the zeros of the degree-M μ_n-orthogonal polynomial, then ∫_-1^1 ∏_j=1^M ( u - u_m)^2 μ_n(u) = ∏_j=0^M b_j(μ_n) ∫_-1^1 p^2_M(u; μ_n) μ_n(u) = ∏_j=0^M b_j (μ_n).From (<ref>), the integral we wish to approximate has integrand f(u) = (2 - 1/2 (u+1)(x+1))^α-A. 
Then for any τ∈ (-1,1) and any x ≤ x_0,| f^(2M)(τ) |= (x+1/2)^2 M(2 - 1/2 (x+1)(τ+1) )^α - A - 2 M∏_j=0^2M-1 |α - A - j| ≤(x_0+1/2)^2 M 2^α - A - 2 M∏_j=0^2M-1 |α - A - j| = 2^α-A(x_0+1/4)^2M∏_j=0^2M-1 |α - A - j|Then| F_n(x) - F_n(x) |= (x+1/2)^β+1c_J^(0,β)/c_J^(α,β)| ∫_-1^1 f(u) μ_n(u) - ∑_j=1^M w_m f(u_m) | ≤(x+1/2)^β+1c_J^(0,β)/c_J^(α,β) 2^α-A(x_0+1/4)^2M∏_j=0^2M-1 |α - A - j|/(2 M)!∏_j=0^M b_j (μ_n)Since α - A ∈ (-1,1), then | F_n(x) - F_n(x) |≤ 2^α + β + 1-Ac_J^(0,β)/c_J^(α,β)(x_0+1/4)^2M+β+1∏_j=0^M b_j (μ_n)The result follows by direct computation of 2^α + β + 1-Ac_J^(0,β)/c_J^(α,β).We verify spectral convergence of the scheme empirically in Figure <ref>, which also illustrates that for the test cases shown, one could choose M to bound errors uniformly in x. The figure also shows that qualitative error behavior is uniform even for extremely large values of n and/or α or β. Based on these results, taking M=10 appears sufficient uniformly over all (α, β, n).[Not shown: we have verified this for numerous values of (α, β, n).] Finally, extending the estimate in (<ref>) to the case x > x_0 can be accomplished by permuting α and β as is done for that case in Algorithm <ref>. §.§ Half-line Freud weightsIn this section we consider the half-line Freud measure μ^(α,ρ)_HF as defined in Table <ref>. These algorithms require the recurrence coefficients for μ_H F; these coefficients are in general not easy to compute when α≠ 1. We show in Appendix <ref> that these recurrence coefficients can be determined from the recurrence coefficients for Freud weights, but recurrence coefficients for Freud weights are themselves relatively difficult to tabulate <cit.>. In our computations we use the methodology of <cit.> to compute Freud weight recurrence coefficients (and hence half-line Freud coefficients). We note that the methodology of <cit.> is computationally onerous: For a fixed α and ρ, it required a day-long computation to obtain 500 recurrence coefficients.Again in this subsection we write μ^(α,ρ)_HF = μ and similarly for μ_n, F, F_n, etc. We restore these super- and subscripts when ambiguity arises without them.We accomplish computation of F_n for this measure with largely the same procedure as for Jacobi measures. Like in the Jacobi case, the details of the procedure we use differ depending on whether x is closer to the left-hand end of μ (which is x=0 here), or to the right-hand end of the μ (which is x=∞). We determine this delineation again by means of potential theory. §.§.§ Computation of x_0As with the Jacobi case, we take x_0 to be the midpoint of the “essential" support for p_n^2(x) μ_HF^(α,ρ), the latter of which is approximately the support of the weighted equilibrium measure associated to [μ_HF^(α, ρ)]^1/2n. However, directly computing the support of this equilibrium measure is difficult. Thus, we resort to a more ad hoc approach.To derive our approach, we first compute the exact support of special cases of our Half-line Freud measures.* The support of the weighted equilibrium measure associated to the measureμ(x)= x^s exp(-λ x), x ∈ [0, ∞), s≥ 0 λ > 0is the interval [θ - Δ, θ + Δ], with these values given by <cit.> θ = s+1/λ,Δ = √(2s+1)/λThis interval is the “essential" support for any function of the form p_n(x) (μ(x))^n where p_n is a degree-n polynomial.* The second special case is for arbitrary α, but ρ = 0. 
The support of the weighted equilibrium measure for √(μ_HF^(α,0)) in this case is the interval [0, k_n(α)], where k_n(α) = k_nn^1/α(2 √(π)Γ(α)/Γ(α + 1/2))^1/α.are the Mhaskar-Rakhmanov-Saff numbers for √(μ_HF^(α,0)) <cit.>.We now derive our approximation for the case of general α, ρ. To approximate where p_n √(μ_HF^(α,ρ)) is supported, consider p_n(x) ( μ_HF^(α, ρ)(x) )^1/2 ∝ p_n(x) [ x^ρ/2nexp(-1/2n x^α) ]^nu = x^α= p(u^1/α) [ u^ρ/2 n αexp(-1/2n u) ]^n= q_n/α(u) [ u^ρ/2nexp(-α/2 n u ) ]^n/α,where we have introduced q_n/α, which is a “polynomial" of “degree" n/α[More formally, it is a potential with “mass" n/α.]. Note that in the variable u, the weight function under square brackets in the last expression is of the form (<ref>). Concepts in potential theory extend to generalized notions of polynomial degree, and so we may apply our formulas for θ and Δ with s = ρ/2n and λ = α/2 n. These formulas imply that the “essential" support for the u variable isθ - Δ≤ u = x^α≤θ + ΔTherefore, to obtain appropriate limits on the variable x, we raise the endpoints θ±Δ to the 1/α power. However, we now require a correction factor. To see why, we compute the right-hand side of our computed support interval when ρ = 0(θ + Δ)^1/α = ( 2n + 2 √( n^2))^1/α = 2^2/α n^1/α,and compare this with the exact value k_n(α) computed above. We note that while k_n(α) ∼ n^1/α matches the n-behavior of (<ref>), the constant is wrong. We thus multiply the endpoints ( θ±Δ)^1/α by the appropriate constant to match the ρ = 0 behavior of k_n(α). The net result then, for arbitrary α, ρ, is the approximation a_±(n, α, ρ)= ( √(π)Γ(α)/2 Γ(α + 1/2))^1/α( ρ + 2 n ± 2 √(n^2 + n ρ))^1/αx_0(n;μ_HF^(α,ρ))= 1/2( a_-(n, α, ρ) + a_+(n, α, ρ)) Figure <ref> compares the intervals demarcated by a_- and a_+ versus F_n^-1([0.01, 0.99]), the latter of which contains “most" of the support of F_n. §.§.§ Computing F_n(x)First assume that x ≤ x_0. Then F_n(x)= 1/c_HF^(α,ρ)∫_0^x p_n^2(t) t^ρexp(-t^α) tu = 2 t/x - 1=(x/2)^ρ+11/c_HF^(α,ρ)∫_-1^1 exp(-(x/2)^α(u+1)^α) p_n^2(x/2(1+u)) (1+u)^ρu.We recognize a portion of the integrand as a Jacobi measure, and use successive measure modifications to define μ_n: μ_n(u) p_n^2(x/2(1+u)) μ_J^(0,ρ)I(x)∑_m=1^M w_m exp(-(x/2)^α(u_m+1)^α) F_n(x)= (x/2)^ρ+1c_J^(0,ρ)/c_HF^(α,ρ) I(x) The recurrence coefficients of μ_n can be computed via polynomial measure modifications on the roots u_j,n = 2 x_j,n/x - 1, where x_j,n are the roots of p_n(·). A detailed algorithm is given in Algorithm <ref>.[H] InputinputOutputoutput α > 1/2, ρ > -1: half-line Freud weight parameters n∈_0 and x ≥ 0: Order of induced measure μ_n and value x. M ∈: Quadrature order for approximate computation of F_n(x). F_n(x) Compute n zeros, {x_j,n}_j=1^n of p_n = p_n(·; μ^(a,ρ)), and leading coefficient γ_n of p_n.Compute recurrence coefficients a_j and b_j associated to μ_J^(0,ρ) for 0 ≤ j ≤ M + 2 n.j=1, …, n Quadratic measure modification (<ref>): update a_n and b_n for n = 0, …, M + 2(n-j) with modification factor (u - (2 x_j,n/x - 1) )^2. Scale b_0b_0 exp(1/nlogγ_n^2 ). Compute M-point Gauss quadrature (u_m, w_m)_m=1^M associated with measure μ_n via {(a_j, b_j)}_j=0^M.Compute the integral I in (<ref>), and return F_n(x; μ^(α,ρ)) given by (<ref>). Computation of F_n(x) for μ = μ_HF^(α,ρ) corresponding to a half-line Freud weight. Now assume that x ≥ x_0. We compute F_n^c directly in a similar fashion as we did for F_n. 
We haveF^c_n(x)= 1/c_HF^(α, ρ)∫_x^∞ p_n^2(t) t^ρexp(-t^α) tu=t-x=1/c_HF^(α, ρ)∫_0^∞ p_n^2(u+x) (u+x)^ρexp(-(u + x)^α) u= exp(-x^α) c_HF^(α,0)/c_HF^(α, ρ)∫_0^∞ p_n^2(u+x) (u+x)^ρexp(u^α + x^α - (u + x)^α) exp(-u^α)uWe again use this to define a new measure μ_n and an associated M-point Gauss quadrature (u_m, w_m)_m=1^M. The recurrence coefficients for μ_n are computable via polynomial measure modifications. This results in the approximationμ_n(u) p_n^2(u+x) μ_HF^(α,0)I(x)= ∑_m=1^M w_m (u_m + x)^ρexp(u_m^α + x^α - (u_m + x)^α) F^c_n(x)= exp(-x^α)c_HF^(α, 0)/c_HF^(α, ρ) I(x) A more detailed algorithm is given in Algorithm <ref>. Of course, once F_n^c is computed we may compute F_n(x) = 1 - F_n^c(x). [H] InputinputOutputoutput α > 1/2, ρ > -1: generalized Freud weight parameters n∈_0 and x ≥ 0: Order of induced polynomial and measure μ_n and value x. M ∈: Quadrature order for approximate computation of F^c_n(x). F^c_n(x) Compute n zeros, {x_j,n}_j=1^n of p_n = p_n(·; μ^(a,ρ)), and leading coefficient γ_n of p_n.Compute recurrence coefficients a_j and b_j associated to μ_HF^(α,0) for 0 ≤ j ≤ M + 2 n.j=1, …, nQuadratic measure modification (<ref>): update a_n, b_n for n=0, …, M + A + 2(n-j) with modification factor (u - ( x_j,n - x ))^2. Scale b_0b_0 exp(1/nlogγ_n^2 ). Compute M-point Gauss quadrature (u_m, w_m)_m=1^M associated with measure μ_n via {(a_j, b_j)}_j=0^M.Compute the integral I in (<ref>), and return F^c_n(x; μ^(α,ρ)) given by (<ref>). Computation of F^c_n(x) for μ^(α,ρ)_HF corresponding to a half-line Freud weight. Errors between F_n and computational approximations F_n are shown in Figure <ref>. We see that we require a much larger value of M in order to achieve accurate approximations compared to the Jacobi case. We believe this to be the case due to the function exp(u^α + x^α - (u+x)^α) appearing in the integral I(x). Note that for α = 1 this function becomes unity and so does not adversely affect the integral; this results in the much more favorable error plot on the left in Figure <ref>. The different behavior for α = 1 leads us to make customized choices in this case: we choose M = 25 for all values of n and ρ, and we take x_0(n) ≡ 50.Whereas for α≠ 1 our tests suggest that M = n + 10 is sufficient to achieve good accuracy, and we take x_0 as the average of a_± as given in (<ref>). §.§ Freud weightsFinally, consider the Freud measure μ_F^(α,ρ) defined in Table <ref>.An especially important case occurs for α= 2, ρ = 0, corresponding to the classical Hermite polynomials. Note that the recurrence coefficients for general values of a and ρ are not known explicitly, but their asymptotic behavior has been established <cit.>.It is well-known that Freud weights are essentially half-line Freud weights in disguise under a quadratic map. It is not surprising then that we may express primitives of induced polynomial measures for Freud weights in terms of the associated half-line Freud primitives.Let parameters (α,ρ) define a Freud weight and associated measure μ_F^(α,ρ). Then for x ≤ 0, F_n(x; μ_F^(α,ρ)) = {[1/2 F_n/2^c( x^2; μ_HF^(α/2,(ρ-1)/2)), neven; 1/2 F_(n-1)/2^c (x^2; μ_HF^(α/2, (ρ+1)/2)),nodd ].For x ≥ 0, we haveF_n(x; μ_F^(α,ρ)) = 1 - F_n(-x; μ_F^(α,ρ)). Note that with expressions (<ref>) and (<ref>), an algorithm for computing F_n(·; μ_F) is straightforward to devise utilizing Algorithm <ref> for F_n^c(·; μ_HF).The result (<ref>) follows easily from the fact that the integrand in (<ref>) defining F_n is an even function. 
To prove the main portion of the theorem, expression (<ref>), we require the following result relating Freud orthonormal polynomials to half-line Freud orthonormal polynomials.Let ρ > -1 and α > 1 be parameters that define a Freud measure μ^(α,ρ)_F with associated orthonormal polynomial family p_n(x) = p_n(x; μ_F^(α,ρ)). Define two sets of half-line Freud parameters (α_∗, ρ_∗) and ( α_∗∗, ρ_∗∗) and the corresponding half-line Freud measures and polynomials: α_∗ α/2,ρ_∗ ρ-1/2, p_∗,n(x)p_n(x; μ_HF^(α_∗,ρ_∗)),α_∗∗ α/2,ρ_∗∗ ρ+1/2, p_∗∗,n(x)p_n(x; μ_HF^(α_∗∗,ρ_∗∗)) Also define the constanth^2 = h^2(α,ρ) Γ( ρ + 1/α)/Γ( ρ+3/α)Then, for all n ≥ 0, p_2 n(x)= p_∗,n( x^2 ), p_2 n+1(x)= h x p_∗∗,n( x^2 ), The following equalities may be verified via direct computation using the definitions (<ref>) and (<ref>) along with the expressions in Table <ref>:c_F^(α,ρ) = c_HF^(α_∗,ρ_∗),c_F^(α,ρ) = h^2 c_HF^(α_∗∗,ρ_∗∗)The proof of this lemma relies on the change of measure t ↦ x^2. We haveδ_m,n = ∫_0^∞ p_∗,n(t) p_∗,m(t) μ_HF^(α_∗,ρ_∗)(t) = 1/c_HF^(α_∗,ρ_∗)∫_0^∞ p_∗,n (t) p_∗,m(t) t^ρ_∗exp(-t^α_∗) t= 2/c_HF^(α_∗,ρ_∗)∫_0^∞ p_∗,n(x^2) p_∗,m(x^2) x^2 ρ_∗+1exp(-x^2α_∗) x= 1/c_F^(α,ρ)∫_-∞^∞ p_∗,n(x^2) p_∗,m(x^2) |x|^2 ρ_∗+1exp(-|x|^2α_∗) x= ∫_-∞^∞ p_∗,n(x^2) p_∗,m(x^2) μ_F^(α,ρ)(x).This relation shows that the family {p_∗,n(x^2)}_n=0^∞ are polynomials of degree 2 n that are orthonormal under a Freud weight with parameters α = 2α_∗ and ρ = 2 ρ_∗ + 1. Using nearly the same arguments, but with the family p_∗∗,n, yields the relationδ_m,n = ∫_0^∞ p_∗∗,n(t) p_∗∗,m(t) μ_HF^(α_∗∗,ρ_∗∗)(t)= 1/c_HF^(α_∗∗,ρ_∗∗)∫_0^∞ p_∗∗,n(t) p_∗∗,m(t) t^ρ_∗∗exp(-t^α_∗∗) t= 2/c_HF^(α_∗∗,ρ_∗∗)∫_0^∞ p_∗∗,n(x^2) p_∗∗,m(x^2) x^2 ρ_∗∗+1exp(-x^2α_∗∗) t= h^2/c_F^(α,ρ)∫_-∞^∞ x p_∗∗,n(x^2)x p_∗∗,m(x^2) |x|^2 ρ_∗∗-1exp(-|x|^2α_∗∗) t= h^2 ∫_-∞^∞ x p_∗∗,n(x^2)x p_∗∗,m(x^2) μ_F^(α,ρ)(x).This relation shows that the family {h x p_∗∗,n(x^2)}_n=0^∞ are polynomials of degree 2 n+1 that are orthonormal under the same Freud having parameters α = 2α_∗∗ and ρ = 2 ρ_∗∗ - 1. We also have that x p_∗∗,n(x^2) is orthogonal to p_∗,m(x^2) under a(ny) Freud weight for any n, m because of even-odd symmetry. Thus, define P_n(x) = {[p_∗,n/2(x^2),neven; h x p_∗∗,(n-1)/2(x^2), nodd ].Then {P_n}_n=0^∞ is a family of degree-n polynomials (with positive leading coefficient) orthonormal under a (α, ρ) Freud weight. Therefore, P_n ≡ p_n. We can now give the Assume x ≤ 0. Then F_n(x; μ_F^(α,ρ))= 1/c_F^(a, ρ)∫_-∞^x p_n^2(t) |t|^ρexp(- |t|^α) t,u = t^2=1/2 c_F^(a, ρ)∫_x^2^∞ p_n^2( -√(u)) |u|^ρ_∗exp(-|u|^α_∗) u,where (α_∗, ρ_∗) are as defined in (<ref>). By Lemma <ref>, p^2_n(-√(u)) = {[p^2_∗,n/2(u),neven; h^2 u p^2_∗∗,(n-1)/2(u), nodd ].where (α_∗∗, ρ_∗∗) are defined in (<ref>). Then if n is even, we haveF_n(x; μ_F^(α_∗,ρ_∗))=1/2 c_HF^(α_∗, ρ_∗)∫_x^2^∞ p^2_∗,n/2(u) |u|^ρ_∗exp(-|u|^α_∗) u, = 1/2 F_n/2^c( x^2; μ_HF^(α_∗,ρ_∗)),where we recall the equalities (<ref>) related c_F to c_F∗. Similarly, if n is odd we haveF_n(x; μ_F^(α,ρ))=h^2/2 c_F^(α, ρ)∫_x^2^∞ u p^2_∗∗,(n-1)/2(u) |u|^ρ_∗exp(-|u|^α_∗) u=1/2 c_HF^(α_∗∗, ρ_∗∗)∫_x^2^∞ p^2_∗∗,(n-1)/2(u) |u|^ρ_∗∗exp(-|u|^α_∗∗) u= 1/2 F_(n-1)/2^c (x^2; μ_HF^(α_∗∗, ρ_∗∗))The combination of these results proves (<ref>). § INVERTING INDUCED DISTRIBUTIONSWe have discussed at length in previous sections algorithms for computing F_n(x) defined in (<ref>) for various Lebesgue-continuous measures μ. The central application of these algorithms we investigate in this paper is actually in the evaluation of F_n^-1(u) for u ∈ [0,1]. 
We accomplish this by solving for x in the equation

F_n(x) - u = 0, u ∈ [0,1],

using a root-finding method. Our first step involves providing an initial guess for x.

§.§ Computing an initial interval

We use s_± to denote the (possibly infinite) endpoints of the support of μ:

-∞ ≤ s_- := inf supp μ, s_+ := sup supp μ ≤ ∞.

Now let u ∈ [0,1]. Our first step in finding F_n^-1(u) is to compute two values x_- and x_+ such that x_- ≤ F^-1_n(u) ≤ x_+. Our procedure for identifying an initial interval containing F_n^-1(u) leverages the Markov-Stieltjes inequalities for orthogonal polynomials. These inequalities state that empirical probability distributions of Gauss quadrature rules generated from a measure bound the distribution function for this measure. Precisely: Let μ be a probability measure on ℝ with an associated orthogonal polynomial family. For any N ∈ ℕ, let {x_k,N, w_k,N}_k=1^N denote the N μ-Gaussian quadrature nodes and weights, respectively. Then:

∑_k=1^m-1 w_k,N ≤ F(x_m,N) ≤ ∑_k=1^m w_k,N, 1 ≤ m ≤ N.

Let {p_j,n}_j=0^∞ denote the sequence of polynomials orthonormal under the induced measure μ_n. Given N ∈ ℕ, let {z_k,N, v_k,N}_k=1^N denote the N-point μ_n-Gaussian quadrature rule, i.e., z_k,N are the N ordered zeros of p_N,n(x), with v_k,N the associated weights. Since μ_n is a probability measure, then ∑_k=1^N v_k,N = 1. As such, given u ∈ [0,1] we can always find some m ∈ {1, …, N} such that

∑_k=1^m-1 v_k,N ≤ u ≤ ∑_k=1^m v_k,N.

Then, defining z_0,N ≡ s_- and z_N+1,N ≡ s_+ for all N and n, we have

F_n(z_m-1,N) ≤ ∑_k=1^m-1 v_k,N ≤ u ≤ ∑_k=1^m v_k,N ≤ F_n(z_m+1,N).

Since F_n is non-decreasing, this is equivalently

z_m-1,N ≤ F_n^-1(u) ≤ z_m+1,N.

Thus, if we find an m such that (<ref>) holds, then (<ref>) holds with

x_- = z_m-1,N, x_+ = z_m+1,N.

When supp μ is bounded, the N-asymptotic density of orthogonal polynomial zeros on supp μ guarantees that we can find a bounding interval with endpoints x_± of arbitrarily small width by taking N sufficiently large. The difficulty is that we therefore require the zeros z_k,N and the quadrature weights v_k,N of the induced measure, which in turn require knowledge of the three-term recurrence coefficients associated to μ_n. These can be easily computed from the coefficients associated to μ; since

μ_n(x) = p_n^2(x) μ(x) = γ_n^2 ∏_j=1^n (x - x_j,n)^2 μ(x),

then we may again iteratively utilize the quadratic modification algorithm given by (<ref>) to compute these recurrence coefficients, which are iteratively quadratic modifications of the μ-coefficients. (Note that this is precisely the procedure proposed in <cit.> for computing these coefficients.)

§.§ Bisection

For simplicity, the root-finding method we employ to solve (<ref>) is the bisection approach. More sophisticated methods may be used, with the caveat that the derivative of the function, F'(x) = p_n^2(x) μ(x), vanishes wherever p_n has a root. We have found that a naive application of Newton's method for root-finding often runs into trouble, even with a very accurate initial guess. The bisection method for root-finding applied to (<ref>) starts with an initial guess for an interval [x_-, x_+] containing the root x, and iteratively updates this interval via

x_- ← 1/2 (x_- + x_+)  if  F_n(1/2 (x_- + x_+)) ≤ u,
x_+ ← 1/2 (x_- + x_+)  if  F_n(1/2 (x_- + x_+)) > u.

After a sufficient number of iterations, so that x_+ - x_- and/or |F(x_-) - F(x_+)| is smaller than a tunable tolerance parameter, one can confidently claim to have found the root x to within this tolerance.
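The bracketing step can be sketched in a few lines of Python; the function name is ours, the arrays z and v are assumed to hold the μ_n-Gaussian nodes and weights (ordered ascending, with the weights summing to one), and the endpoints s_± may be infinite.

```python
import numpy as np

def initial_interval(u, z, v, s_minus, s_plus):
    """Markov-Stieltjes bracket x_- <= F_n^{-1}(u) <= x_+.

    z, v            : nodes and weights of the N-point mu_n-Gauss rule,
                      z ascending, sum(v) == 1
    s_minus, s_plus : (possibly infinite) endpoints of supp(mu)
    """
    N = len(z)
    zz = np.concatenate(([s_minus], z, [s_plus]))    # z_{0,N} and z_{N+1,N}
    C = np.concatenate(([0.0], np.cumsum(v)))        # C[m] = sum_{k<=m} v_k
    m = int(np.clip(np.searchsorted(C, u), 1, N))    # C[m-1] <= u <= C[m]
    return zz[m - 1], zz[m + 1]
```

Combined with the bisection sketch given in the introduction, this yields a complete, if unoptimized, evaluation of F_n^-1(u).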
A good initial guess for x_± lessens the number of evaluations of F_n in a bisection approach and thus accelerates overall evaluation of F_n^-1. The overall algorithm for solving (<ref>) is to (i) compute the recurrence coefficients associated with μ_n in (<ref>) via quadratic measure modifications, (ii) compute order-N μ_n-Gaussian quadrature nodes and weights z_j,N and v_j,N, respectively, (iii) identify m such that (<ref>) holds so that x_± may be computed in (<ref>), and (iv) iteratively apply the bisection algorithm with the initial interval defined by x_± using the evaluation procedures for F_n outlined in Section <ref>.§ APPLICATIONSThis section discusses two applications of sampling from univariate induced measures. Both these applications consider multivariate scenarios, and are based on the fact that many “interesting" multivariate sampling measures are additive mixtures of tensorized univariate induced measures. Our first task is to introduce notation for tensorized orthogonal polynomials.We will write d-variate polynomials using multi-index notation: λ∈_0^d denotes a multi-index with components λ = (λ_1, …, λ_d) and magnitude |λ| = ∑_j=1^d λ_j. A point x ∈^d has components x = (x_1, …, x_d ), and x^λ = ∏_j=1^d x_j^λ_j. A collection of multi-indices will be denoted Λ⊂_0^d; we will assume that N = |Λ| is finite. Let μ be a tensorial measure on ^d such that each of its d marginal univariate measures μ^(j), j=1, …, d admits a μ^(j)-orthonormal polynomial family { p_j,n}_n=0^∞ on , satisfying∫_ p_j,n(x_j) p_j,m(x_j) μ^(j)(x_j)= δ_m,n, n, m ∈_0, j = 1, …, d.A tensorial μ allows us to explicitly construct an orthonormal polynomial family for μ from univariate polynomials,p_λ(x) ∏_j=1^d p_j, λ_j(x_j).These polynomials are an L^2_μ-orthonormal basis for the subspace P_Λ, defined asP_Λ = span{ p_λ | λ∈Λ}.Under the additional assumption that the index set Λ is downward-closed, then P_Λ = span{ x^λ |λ∈Λ}.We extend our definition of induced polynomials to this tensorial multivariate situation. For any λ∈Λ, the order-λ induced measure μ_λ is defined as μ_λ(x)p_λ^2(x) μ(x) = ∏_j=1^d p^2_j, λ_j(x_j) μ^(j)(x) = ∏_j=1^d μ^(j)_λ_j,where μ^(j)_λ_j is the (univariate) order-λ_j induced measure for μ^(j) according to the definition (<ref>). Thus, μ_λ is also a tensorial measure.§.§ Optimal polynomial discrete least-squaresThe goal of this section is description of a procedure utilizing the algorithms above for performing discrete least-squares recovery in a polynomial subspace using the optimal (fewest) number of samples. The procedure we discuss was proposed in <cit.> and is based on the foundational matrix concentration estimates for least-squares derived in <cit.>. Let f: ^d → be a d-variate function. Given (i) a tensorial probability measure μ admitting an orthonormal polynomial family, and (ii) a dimension-N polynomial subspace P_Λ, we are interested in approximating the L^2_μ-orthogonal projection of f onto P_Λ. This projection is given explicitly by Π_Λ f= ∑_λ∈Λ c^∗_λ p_λ(x), c^∗_λ = ∫_^d f(x) p_λ(x) μ(x).One way to approximate the integral defining the coefficients c^∗_λ is via a Monte Carlo least-squares procedure using M collocation samples of the function f(x). Let {X_m}_m=1^M denote a collection of M independent and identically distributed random variables on ^d, where we leave the distribution of X_m unspecified for the moment. 
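Before specifying that distribution, we note that the tensor structure (<ref>) makes evaluating p_λ inexpensive, marginal by marginal. In the Python sketch below, eval_univariate(j, n, t) is a placeholder for any routine returning p_{j,n}(t) for the j-th marginal measure, e.g., the three-term recurrence sketched earlier.

```python
import numpy as np

def eval_tensor_poly(X, lam, eval_univariate):
    """p_lambda(x) = prod_j p_{j, lambda_j}(x_j) for one multi-index lam
    at sample points X of shape (M, d)."""
    M, d = X.shape
    vals = np.ones(M)
    for j in range(d):
        vals *= eval_univariate(j, lam[j], X[:, j])
    return vals
```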
A weighted discrete least-squares recovery procedure approximates c^∗_λ with c_λ, computed as{c_λ}_λ∈Λ = _d_λ∈1/M∑_m=1^M w_m [ f(X_m) - ∑_λ∈Λ d_λ p_λ(X_m)]^2,where w_m are positive weights. One supposes that if the distribution of X_m and the weights w_m are chosen intelligently, then it is possible to recover the N coefficients c_λ with a relatively small number of samples M; ideally, M should be close to N. The analysis in <cit.> codifies conditions on a required sample count M so that the minimization procedure above is stable, and so that the recovered coefficients c_λ are “close" to c_λ^∗; these conditions depend on the distribution of X_m, on w_m, on μ, and on P_Λ. Since μ and P_Λ are specified, the goal here is identification of an appropriate distribution for X_m and weight w_m.Using ideas proposed in <cit.> the results in <cit.> show that, in the context of the analysis in <cit.>, the optimal choice of probability measure μ_X for sampling X_m and weights w_m that achieves a minimal sample count M areμ_X = μ_Λ = 1/N∑_λ∈Λμ_λ, w_m= N/∑_λ∈Λ p_λ^2(X_m).The precise quantification of the sample count and error estimates can be formulated using an algebraic characterization of (<ref>). Define matrices V∈^M × N and W∈^M × M, and vectors c∈^N and f∈^M as follows:(V)_m,n = p_λ(n)(X_m), (W)_j,k = w_j δ_j,k, (c)_n = c_λ(n), (f)_m = f(X_m),where λ(1), …, λ(N) represents any enumeration of elements in Λ. We use · on matrices to denote the induced ℓ^2 norm. The algebraic version of (<ref>) is then to compute c that minimizes the the least-squares residual of √(W)Vc = √(W)f. The following result holds.Let 0 < δ < 1,and r > 0 be given, and define c_δδ + (1-δ) log(1-δ) ∈ (0,1). Draw M iid samples { X_m}_m=1^M from μ_X, and let the coefficients c_λ be those recovered from (<ref>). IfM/log M≥ N 1+r/c_δThen Pr[ V^T WV - I > δ]≤ 2 M^-rf - T_L(∑_λ∈Λ c_λ p_λ(·)) _L^2_μ ≤[ 1 + 4c_δ/(1+r) log M]f - Π_Λ f _L^2_μ + 8 f_L^∞(μ) M^-r The free parameter r is a tunable oversampling rate; δ represents the guaranteeable proximity of V^T WV to I. We emphasize that by choosing μ_X = μ_Λ with the weights defined as in W, then the size of M as required by (<ref>) depends only on the the cardinality N of Λ, and not on its shape. Furthermore, the criterion M/log M ≳ N is optimal up to the logarithmic factor. Also, the statements above hold uniformly over all multivariate μ. Note that the optimal sampling measure μ_X is an additive mixture of induced measures and can be easily sampled, assuming μ_λ can be sampled. Sampling from μ_X defined above is fairly straightforward given the algorithms in this paper: (1) given Λ choose an element λ randomly using the uniform probability law, (2) generate d independent, uniform, continuous random variables U_j, j=1, …, d each on the interval [0,1], (3) compute X ∈^d asX = ( F^-1_λ_1(U_1; μ^(1)), F^-1_λ_2(U_2; μ^(2)),…, F^-1_λ_d(U_d; μ^(d)) ).Then X is a sample from the probability measure μ_X. Note that the work required to sample X requires only d samples from a univariate induced measure. The procedure above is essentially as described by the authors in <cit.>; this paper gives a concrete computational method to sample from μ_Λ for a relatively general class of measures μ (i.e., those formed by arbitrary finite tensor products of Jacobi, half-line Freud, and/or Freud univariate measures). 
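The three sampling steps above translate almost verbatim into code. In the following Python sketch, Finv(j, n, u) is a placeholder for an implementation of F_n^-1(·; μ^(j)), e.g., the bisection procedure of Section 4; the function name is ours.

```python
import numpy as np

def sample_optimal(Lambda, Finv, M, rng=None):
    """Draw M iid samples from the additive mixture mu_Lambda following
    steps (1)-(3): pick a multi-index uniformly, then map d uniforms
    through the univariate induced inverse distribution functions.

    Lambda : list of multi-indices (each a length-d tuple)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(Lambda[0])
    X = np.empty((M, d))
    for m in range(M):
        lam = Lambda[rng.integers(len(Lambda))]             # step (1)
        U = rng.uniform(size=d)                             # step (2)
        X[m] = [Finv(j, lam[j], U[j]) for j in range(d)]    # step (3)
    return X
```

The least-squares weights then follow as w_m = N / ∑_{λ∈Λ} p_λ^2(X_m), using, e.g., the tensorized evaluation sketched above.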
Thus, the algorithms in this paper along with the specifications (<ref>) allow one to perform optimal discrete least-squares using Monte Carlo sampling for approximation with multivariate polynomials. §.§ Weighted equilibrium measures On ^d, consider the special case μ(x) = exp(-x^2), where · is the Euclidean norm on ^d. The weighted equilibrium measure μ^∗ is a probability measure that is the weak limit of the summationslim_n →∞∑_|λ| ≤ n p_λ^2(x/√(2n)) μ(x/√(2n)) ⇒μ^∗(x).The form for μ^∗ is not currently known, but the authors in <cit.> conjecture that μ^∗ has support on the unit ball with densityμ^∗(x)?= g_d( x) = C_d (1 - x^2)^d/2, x ≤ 1,where C_d = (π)^-(d+1)/2Γ(d+1/2). If X on ^d is distributed according to g_d, then the cumulative distribution function associated to X is G_d(r)Pr[X ≤ r ] = K ∫_0^r g_d(x) r^d-1x,where the r^d-1 factor in the integrand is the ^d Jacobian factor for integration in spherical coordinates, and K is the associated normalization constant. Note that the cumulative distribution function G_d is a mapped (normalized) incomplete Beta function with parameters a = d/2 and b = 1 + d/2,G_d(r)= 1/B(d/2, 1 + d/2)∫_0^r^2 t^d/2 (1-t)^1 + d/2t,where B(·,·) is the Beta function. With d=1, the veracity of this limit is known <cit.>. Using the algorithms in this paper, we can empirically test the conjecture. Precisely, defining Λ_n {λ∈_0^d | |λ| ≤ n }, then the conjecture for (<ref>) readslim_n →∞∑_|λ| ≤ n p_λ^2(x/√(2n)) μ(x/√(2n)) = lim_n →∞μ_Λ_n(x/√(2n)) ?⇒ C_d ( 1 -x^2 )^d/2Our procedure for testing this conjecture is as follows: for a fixed d and large n, we generate M iid samples { X_m }_m=1^M distributed according to μ_Λ_n, and compute the empirical distribution function associated with the ensemble of scalars { X_m /√(2 n)}_m=1^M. We show in Figure <ref> that indeed for large n that empirical distributions associated with these ensembles match very closely with the distribution function G_d(r), giving evidence that supports, but does not prove, the conjecture (<ref>).§ CONCLUSIONSWe have developed a robust algorithm for the evaluation of induced polynomial distribution functions associated with a relatively wide class of continuous univariate measures. Our algorithms cover all classical orthogonal polynomial measures, and are equally applicable on bounded or unbounded domains. The algorithm leverages several properties of orthogonal polynomials in order to attain stability and accuracy, even for extremely large values of parameters defining the measure or polynomial degree. All computations have been tested up to degree n=1000 and were found to be stable. The ability to evaluate induced distributions allows the possibility to exactly sample from additive mixtures of these measures. Such additive mixtures define sampling densities that are known to be optimal for multivariate discrete least-squares polynomial approximation algorithms, and allow us to provide supporting empirical evidence for an asymptotic conjecture involving weighted pluripotential equilibrium measures.§ AUXILIARY RECURRENCES For some algorithmic tasks that we consider, the three-term recurrence (<ref>) for the p_n does not provide a suitable computational procedure due to floating-point under- and over-flow. This happens in two particular cases: * If x is far outside μ, then p_n(x) becomes very large and causes numerical overflow (the quantity grows like x^n). We will need to evaluate p_n(x)/p_n-1(x) for large x and potentially large n. 
(When μ is infinite, one can interpret “far outside μ" to be defined using the potential-theoretic Mhaskar-Rakhmanov-Saff numbers for √(μ(x)).)* When x is inside μ, we will need to evaluate p_n^2(x)/∑_j=0^n-1 p_j^2(x). For large enough n, a direct computation causes numerical overflow.We emphasize that (<ref>) is quite stable and sufficient for most practical computations requiring evaluations of orthogonal polynomials. The situations we describe above (which occur in this paper) are relatively pathological.§.§ Ratio evaluationsWe consider the first case described above. With n fixed, suppose that either x > max p_n-1^-1(0), or x < min p_n-1^-1(0). Then by the interlacing properties of orthogonal polynomial zeros, p_j(x) ≠ 0 for all j=0, … n-1. In this case, the ratior_j(x)p_j(x)/p_j-1(x), 1 ≤ j < n,is well-defined, with r_0p_0. A straightforward manipulation of (<ref>) yields√(b_j) r_j(x)= x - a_j - √(b_j-1)/r_j-1(x), 1 ≤ j < n.The recurrence (<ref>) is a more stable way to compute r_j(x) when x is very large. In practice we can computationally verify that x lies outside the zero set of p_n-1 with 𝒪(n) effort (e.g., <cit.> for a crude but general estimate). In the context of this paper, this condition is always satisfied whenever we require an evaluation of r_n(x).§.§ Normalized polynomialsIn the second case, consider a different normalization of p_n:C_n(x)p_n(x)/√(∑_j=0^n-1 p_j^2(x)), n> 0, x∈with C_0 ≡ p_0 = √(1/b_0). A manipulation of the three-term recurrence relation (<ref>) yields the following recurrence for C_n: C_0(x)= 1/√(b_0), C_1(x)= 1/√(b_1)( x - a_0 ), C_2(x)= 1/√(b_2)√(1 + C_1^2(x))[ (x - a_1) C_1(x) - √(b_1)] C_n+1(x)= 1/√(b_n+1)√(1 + C_n^2(x))[ (x - a_n) C_n(x) - √(b_n)C_n-1(x)/√(1 + C_n-1^2(x))],10pt n ≥ 2 Note that C_n(x) essentially behaves like r_n outside a compact interval containing the zero set of p_n; however, C_n is well-defined and well-behaved inside this compact interval, unlike r_n. The polynomials p_n may be reproduced from knowledge of C_n:p_n(x)= C_0 C_n(x) ∏_j=1^n-1√(1 + C_j^2(x)), n> 0. § POLYNOMIAL MEASURE MODIFICATIONS We will need to compute recurrence coefficients for the modified measures with densitiesμ(x)= ±(x - y_0 ) μ(x), y_0∉supp μ μ(x)= (x - z_0 )^2 μ(x), z_0∈where we assume that the recurrence coefficients of μ are available to us. Here, both y_0 and z_0 are some fixed real-valued numbers. In the first case (a linear modification) we assume y_0 ∉μ and choose the sign to ensure that μ(x) is positive for x ∈μ. Assuming the recurrence coefficients a_n and b_n for μ are known, the problems of computing the recurrence coefficients ã_n and b̃_n for μ, and of computing the recurrence coefficients a_n and b_n for μ, are well-studied and have constructive computational solutions <cit.>.We use the auxiliary variables defined in Appendix <ref> to accomplish measure modifications. The linear and quadratic modification recurrence coefficients have the following forms (cf. 
<cit.>):

ã_n = a_n + Δã_n, b̃_n = b_n Δb̃_n,
â_n = a_n+1 + Δâ_n, b̂_n = b_n+1 Δb̂_n.

The correction factors for n > 0 are given by

Δã_n = √(b_n+1)/r_n+1(y_0) - √(b_n)/r_n(y_0), Δb̃_n = √(b_n+1) r_n+1(y_0)/√(b_n) r_n(y_0),
Δâ_n = √(b_n+2) C_n+2(y_0) C_n+1(y_0)/√(1 + C_n+1^2(y_0)) - √(b_n+1) C_n+1(y_0) C_n(y_0)/√(1 + C_n^2(y_0)), Δb̂_n = 1 + C^2_n+1(y_0)/1 + C_n^2(y_0).

For n = 0 they take the special forms

Δã_0 = √(b_1)/r_1(y_0), Δb̃_0 = √(b_1) r_1(y_0),
Δâ_0 = √(b_2) C_2(y_0) C_1(y_0)/√(1 + C_1^2(y_0)) - √(b_1) C_1(y_0) C_0(y_0)/√(1 + C_0^2(y_0)), Δb̂_0 = 1 + C_1^2(y_0)/C_0^2.

Above, r_n(x) = r_n(x; μ) and C_n(x) = C_n(x; μ) are the functions associated with the measure μ and so may be readily evaluated using (<ref>) and (<ref>). Note that if we only have a finite number of recurrence coefficients, {a_n, b_n}_n=0^M for μ, then a linear modification can only compute modified coefficients up to index M-1, and a quadratic modification can only compute coefficients up to index M-2.

§ FREUD AND HALF-LINE FREUD RECURRENCE COEFFICIENTS

For both cases of Freud measures with α = 2 (generalized Hermite polynomials), and half-line Freud measures with α = 1 (generalized Laguerre polynomials), explicit forms for the recurrence coefficients are known. However, the situation is more complicated for other values of α. We give an extension of Lemma <ref>: Recurrence coefficients of half-line Freud weights may be computed from those of Freud weights.

Let parameters (α, ρ) define a Freud weight having recurrence coefficients {b_n}_n=0^∞. (The a_n coefficients vanish because the weight is even.) Define (α_∗, ρ_∗) and (α_∗∗, ρ_∗∗) as in (<ref>), along with the associated half-line Freud measures μ_HF^(α_∗,ρ_∗) and μ_HF^(α_∗∗,ρ_∗∗) and their recurrence coefficients {(a_∗,n, b_∗,n)}_n=0^∞ and {(a_∗∗,n, b_∗∗,n)}_n=0^∞, respectively. Then, for all n:

a_∗,0 = b_1, b_∗,0 = b_0,
a_∗,n = b_2n + b_2n+1, b_∗,n = b_2n b_2n-1, n ≥ 1.

Furthermore,

b_0 = b_∗,0, b_1 = a_∗,0,
b_2n = b_∗,n/b_2n-1, b_2n+1 = a_∗,n - b_2n, n ≥ 1.

This result implies that one may either use Freud weight recurrence coefficients to compute half-line Freud weight recurrence coefficients, or vice versa. The proof is a result of Lemma <ref> along with manipulations of the three-term recurrence relation (<ref>).
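As an illustration, a linear modification can be implemented in a few lines. The Python sketch below mirrors the correction factors printed above, evaluates r_j(y_0) with the stable ratio recurrence of Appendix A, and uses an absolute value in b̃_0 to absorb the ± sign of μ̃; it is a sketch under the stated conventions, not the reference implementation accompanying the paper.

```python
import numpy as np

def linear_modification(a, b, y0):
    """Recurrence coefficients of mu~(x) = +/-(x - y0) mu(x), mirroring
    the correction factors printed above; y0 must lie outside supp(mu).
    With M+1 input pairs (a_0..a_M, b_0..b_M), M modified pairs return.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    M = len(a) - 1
    r = np.empty(M + 1)                  # r[j] = r_j(y0)
    r[0] = np.inf                        # encodes p_{-1} = 0
    for j in range(1, M + 1):
        r[j] = (y0 - a[j - 1] - np.sqrt(b[j - 1]) / r[j - 1]) / np.sqrt(b[j])
    sb = np.sqrt(b)
    at = np.empty(M)
    bt = np.empty(M)
    at[0] = a[0] + sb[1] / r[1]
    bt[0] = b[0] * np.abs(sb[1] * r[1])  # |.| absorbs the +/- sign of mu~
    for n in range(1, M):
        at[n] = a[n] + sb[n + 1] / r[n + 1] - sb[n] / r[n]
        bt[n] = b[n] * (sb[n + 1] * r[n + 1]) / (sb[n] * r[n])
    return at, bt
```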
http://arxiv.org/abs/1704.08465v1
{ "authors": [ "Akil Narayan" ], "categories": [ "math.NA", "33C45, 65D15" ], "primary_category": "math.NA", "published": "20170427080315", "title": "Computation of Induced Orthogonal Polynomial Distributions" }
http://arxiv.org/abs/1704.08428v2
{ "authors": [ "H. S. Köhler" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170427040830", "title": "Nuclear response Functions with Realistic Interactions" }
Coupling hydrodynamics with comoving frame radiative transfer
I. A unified approach for OB and WR stars

A. A. C. Sander, W.-R. Hamann, H. Todt, R. Hainich, T. Shenar

Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Str. 24/25, D-14476 Potsdam
[email protected]

Received February 17, 2017 / Accepted April 18, 2017
====================================================================================

For more than two decades, stellar atmosphere codes have been used to derive the stellar and wind parameters of massive stars. Although they have become a powerful tool and sufficiently reproduce the observed spectral appearance, they can hardly be used for more than measuring parameters. One major obstacle is the inconsistency between the calculated radiation field and the wind stratification due to the usage of prescribed mass-loss rates and wind-velocity fields.
We present the concepts for a new generation of hydrodynamically consistent non-local thermodynamical equilibrium (non-LTE) stellar atmosphere models that allow for detailed studies of radiation-driven stellar winds. As a first demonstration, this new kind of model is applied to a massive O star.
Based on earlier works, the PoWR code has been extended with the option to consistently solve the hydrodynamic equation together with the statistical equations and the radiative transfer in order to obtain a hydrodynamically consistent atmosphere stratification. In these models, the whole velocity field is iteratively updated together with an adjustment of the mass-loss rate.
The concepts for obtaining hydrodynamically consistent models using a comoving-frame radiative transfer are outlined. To provide a useful benchmark, we present a demonstration model, which was motivated to describe the well-studied O4 supergiant ζ Pup. The obtained stellar and wind parameters are within the current range of literature values.
For the first time, the PoWR code has been used to obtain a hydrodynamically consistent model for a massive O star. This has been achieved by a profound revision of earlier concepts used for Wolf-Rayet stars. The velocity field is shaped by various elements contributing to the radiative acceleration, especially in the outer wind. The results further indicate that for more dense winds deviations from a standard β-law occur.

§ INTRODUCTION

In order to understand massive stars and their winds, stellar atmosphere models have become a powerful and widely used instrument. Typically applied for spectroscopic analysis, these models yield quantitative information on the stellar wind properties together with fundamental stellar parameters. The special conditions in stellar winds lead to the development of sophisticated model atmosphere codes performing several complex calculations. The outer layers are not even close to local thermodynamical equilibrium (LTE), requiring the population numbers to be calculated from a set of statistical equations. For a sufficient treatment, large model atoms with hundreds of levels in total have to be considered. Furthermore, the line-driven winds of hot stars demand a proper description of the radiative transfer in an expanding atmosphere.
Basically three different approaches exist to tackle the radiative transfer problem: The first are analytical descriptions, usually based on the concept of CAK theory <cit.>, which allow for rapid calculation of the radiative acceleration at the cost of several approximations.
While the theory has undergone several extensions relaxing the original assumptions <cit.>, it has opened up the whole area of time-dependent and even multi-dimensional calculations <cit.>. CAK-like concepts are therefore not only used in stellar atmosphere analyses, but also in most cases where a more detailed radiative transfer would be too costly from a computational standpoint; for example, detailed multi-dimensional hydrodynamical simulations of a stellar cluster or the interaction with a companion <cit.>.
An alternative to (semi-)analytical approximations is to calculate the radiative force with the help of Monte Carlo (MC) methods. Motivated already by the work of <cit.>, this approach was first applied by <cit.> and later used for a variety of mass-loss studies <cit.>. More recently the application has been widely extended, including velocity field and clumping studies as well as multi-dimensional calculations <cit.>. Using the MC approach allows one to include effects such as multiple line scattering, but is computationally much more expensive than CAK-like calculations, especially in multi-dimensional approaches. It is therefore mostly used for fundamental studies and rarely applied when analyzing a particular stellar spectrum.
A third method to tackle the radiative transfer is the calculation in the comoving frame (CMF). Built on the conceptual work of <cit.>, it is essentially a brute-force integration over the frequency range, using the advantage that opacity κ and emissivity η are isotropic in the comoving frame. Various studies using a CMF approach have been performed since the 1980s <cit.>, culminating in the development of a handful of stellar atmosphere codes using the CMF radiative transfer either partially or exclusively, including phoenix <cit.>, wmbasic <cit.>, fastwind <cit.>, cmfgen <cit.>, and PoWR <cit.>. For studying stars with denser winds and especially Wolf-Rayet stars, almost all spectral analyses are performed with CMF-based atmosphere codes <cit.>. Due to the complexity of the CMF calculation, these codes typically assume spherical symmetry, allowing for a 1- or 1.5-dimensional treatment, and a stationary wind situation.
Given that on top of the two major tasks, that is, solving the statistical equations and the radiative transfer, several further challenges exist, such as iron-line blanketing or the need for a consistent calculation of the temperature stratification in an expanding, non-LTE environment, it is not that surprising that only a few codes exist that can adequately model a stellar atmosphere for a hot star with a dense, line-driven wind. So far, these codes are typically used for either measuring stellar and wind parameters or predicting fluxes and related quantities for a given set of parameters. Yet, only a few examples exist where they are used to actually predict the wind parameters. This lack of examples results from the fact that – at least in the wind part – most stellar atmosphere models use a prescribed velocity field instead of consistently calculating the wind stratification. While this approach is mostly sufficient for the current use of the atmosphere models, it also cuts off a variety of potential applications. In order to open up this perspective, we present a new approach for hydrodynamically consistent stellar atmosphere models using the Potsdam Wolf-Rayet (PoWR) model atmosphere code.
Originally starting from earlier efforts for Wolf-Rayet (WR) stars, we have developed a brand new scheme to update the mass-loss rate and the velocity stratification that can finally be applied to both WR and OB models. For the first time, we will present a hydrodynamically consistent PoWR model for an O supergiant, closely reproducing most of the spectral features for ζ Pup. In Sect. <ref> of this work, we briefly discuss the underlying stationary wind hydrodynamics and introduce a special notation that is helpful in analyzing the status of PoWR models with respect to hydrodynamical consistency. A special emphasis is given in the following Sect. <ref> to the meaning of the critical point. Afterwards, the basic concepts of the PoWR code and its set of input parameters are outlined in Sect. <ref>. Sect. <ref> then deals with all the techniques used for obtaining a hydrodynamically consistent model before showing and discussing the results for an example model in Sect. <ref>. Finally, conclusions are drawn in Sect. <ref>.
§ STATIONARY WIND HYDRODYNAMICS
In a hot stellar wind, which we here describe as a one-dimensional, stationary outflow, the accelerations due to radiation and gas pressure have to balance gravity g(r) = GM_∗ r^-2 and inertia v dv/dr. The corresponding equation of motion is therefore
v dv/dr + GM/r^2 = a_rad(r) + a_press(r) ,
with a_rad representing the total radiative acceleration, that is,
a_rad(r) := a_lines(r) + a_cont(r) = a_lines(r) + a_true cont(r) + a_thom(r) ,
using the same notation as in <cit.>. The term a_press describes the gas (and potentially turbulence) pressure, that is, a_press(r) := -(1/ρ) dP/dr. To replace the pressure P with the density, we use the equation of state for an ideal gas P(r) = ρ(r) · a^2(r) and define
a^2(r) := k_B T(r)/(μ(r) m_H) + (1/2) v_mic^2 ,
with μ being the mean particle mass (including electrons) in units of the hydrogen atom mass m_H, T being the electron temperature, and v_mic the microturbulence velocity, which is a free parameter in our models. For vanishing turbulence, a(r) is identical to the isothermal sound speed. With the equation of state together with the equation of continuity
Ṁ = 4π r^2 v(r) ρ(r) ,
the a_press-term in Eq. (<ref>) can be rewritten in three terms that remove all explicit ρ-dependencies in favor of terms containing only v, r, and a <cit.>. As a consequence, the hydrodynamic equation for a spherically-symmetric wind reads
( 1 - a^2/v^2 ) v dv/dr = a_rad - g + 2a^2/r - da^2/dr = (GM/r^2)(Γ_rad - 1) + 2a^2/r - da^2/dr
with Γ_rad(r) := a_rad(r)/g(r). For an easier reading, we have dropped the explicit notation of the radius dependencies in Eqs. (<ref>) and (<ref>) and continue to do so unless absolutely necessary for the context. Using the two definitions
ℱ̃ := 1 - Γ_rad - 2a^2 r/(GM) + (r^2/GM) da^2/dr and 𝒢̃ := 1 - a^2/v^2 ,
one can write Eq. (<ref>) in a more compact way:
r^2 ( 1 - a^2/v^2 ) v dv/dr = GM (Γ_rad - 1) + 2a^2 r - r^2 da^2/dr
⟺ r^2 𝒢̃ v dv/dr = - GM ℱ̃
⟺ dv/dr = - (g/v) ℱ̃/𝒢̃ .
The quantities ℱ̃ and 𝒢̃ are dimensionless and therefore ideal for visualizations. We note that in the subsonic regime, that is, v ≪ a, we obtain 𝒢̃ → -a^2/v^2 and thus Eq. (<ref>) reduces to
dv/dr = (g v/a^2) ℱ̃ ,
which is a form of the hydrostatic equation discussed in a previous paper <cit.>.
§ THE CRITICAL POINT
The crucial difference between Eq. (<ref>) and the full hydrodynamic equation (<ref>) is the denominator 𝒢̃. In contrast to the quasi-hydrostatic case, the hydrodynamic equation has a critical point at v = a, that is, where the denominator 𝒢̃ becomes zero.
In order to allow for a finite solution for the velocity gradient at the corresponding radius r_c, the numerator ℱ̃ must also vanish at exactly this point. This leads to the constraint
ℱ̃(r_c) = 𝒢̃(r_c) = 0 .
As will be discussed below, this constraint implies a fixing of the mass-loss rate Ṁ, even though this quantity does not appear explicitly in the hydrodynamic Eq. (<ref>). Since the hydrostatic equation does not have this constraint, it can provide a solution for v(r) for any non-vanishing value of Ṁ. An illustration of ℱ̃(r) and 𝒢̃(r) for a converged hydrodynamically consistent model is shown in Fig. <ref>. Unlike in the CAK-type approaches, the critical point in Eq. (<ref>) and Fig. <ref> is identical to the sonic point, potentially corrected for a turbulence contribution. This is a direct consequence of our approach where we do not assume any semi-analytical expression for the radiative acceleration a_rad. Instead, we simply treat a_rad(r) as a given quantity that can be expressed as a function of radius. Of course a_rad also reacts to changes of the velocity field and the mass-loss rate, but instead of trying to parametrize this in analytical form, we use an iterative approach and recalculate a_rad(r) after any adjustment of v(r) or Ṁ until the thereby calculated acceleration does not enforce further velocity or mass-loss rate updates. The PoWR code has already been used in the past for hydrodynamically consistent calculations of WR stars by <cit.>. In their approach, which turned out to be suitable only for thick-wind WR stars, the critical point was not identical to the sonic point due to a semi-analytical approach where the radiative acceleration was described with the help of an effective force multiplier parameter α(r). In the present work, we do not use such a parametrization and therefore the critical point in our hydrodynamic equation is identical to the sonic point. The implementation of the new method is also conceptually different from the one described in <cit.> and will be further outlined in Sect. <ref>.
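To make the roles of ℱ̃ and 𝒢̃ concrete, the following minimal Python sketch evaluates both quantities on a radius grid and brackets their zero crossings. All profiles (a², Γ_rad, v) are invented toy functions chosen only for illustration; they are not PoWR output, and the function names are ours:

```python
import numpy as np

# Toy illustration of the critical-point condition F~ = G~ = 0.
GM = 1.0

def a2(r):            # squared (turbulence-corrected) sound speed, toy profile
    return 0.05 * (1.0 + 1.0 / r)

def da2_dr(r, h=1e-6):  # numerical derivative of a^2
    return (a2(r + h) - a2(r - h)) / (2.0 * h)

def gamma_rad(r):     # toy radiative acceleration in units of gravity
    return 1.0 - 0.9 * np.exp(-2.0 * (r - 1.0))

def v(r):             # toy monotonic velocity law
    return 1.5 * (1.0 - 0.99 / r)

def F_tilde(r):
    return 1.0 - gamma_rad(r) - 2.0 * a2(r) * r / GM + r**2 / GM * da2_dr(r)

def G_tilde(r):
    return 1.0 - a2(r) / v(r)**2

r = np.linspace(1.01, 10.0, 2000)
for name, q in (("F~", F_tilde(r)), ("G~", G_tilde(r))):
    s = np.where(np.diff(np.sign(q)) != 0)[0]
    print(name, "changes sign near r =", r[s])
```

For a non-consistent model the two printed radii differ; the iteration described below drives them together.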
§ POWR
§.§ Basic concepts
For the Potsdam Wolf-Rayet (PoWR) model atmospheres we assume a spherically symmetric atmosphere with a stationary mass outflow. In order to properly describe the situation of an expanding atmosphere without the LTE approximation, the equations of statistical equilibrium and radiative transfer have to be solved iteratively until a consistent solution for the radiation field and the population numbers is obtained. In addition, the temperature stratification is updated iteratively to ensure energy conservation in the expanding atmosphere. This is performed using the improved Unsöld-Lucy method described in <cit.> or alternatively via the electron thermal balance which has recently been added to the PoWR code <cit.>. If then all changes to the population numbers are smaller than a defined threshold, the atmosphere model is considered to be converged and the synthetic spectrum is calculated using a formal integration in the observer's frame. The iron group elements are treated in a superlevel approach: The levels are grouped into energy bands, which are then represented by superlevels. While we assume LTE for the relative occupations of the individual levels inside a superlevel, the superlevels themselves are treated in full non-LTE. The detailed cross-sections for the superlevel transitions have been prepared on a sufficiently fine frequency grid prior to the model iteration and contain all the individual transitions to ensure that the radiative transfer treats each of these transitions at its proper frequency <cit.>. PoWR is furthermore able to account for wind inhomogeneities in the so-called “microclumping” approximation <cit.>. In the calculation of the formal integral, PoWR can also account for optically thick clumps in an approximate way; see <cit.> for details. The radiative transfer is calculated in the CMF, thereby implicitly accounting for multiple scattering and avoiding all simplifications, which are done in the faster but more approximate concepts used, for example, in time-dependent calculations. In particular, the solution is obtained by solving the moment equations via a differencing scheme based on the concepts of <cit.>. The variable Eddington factors required in this scheme are obtained from a so-called “ray-by-ray solution” where an angle-dependent radiation transfer is performed using short-characteristics integration <cit.>. The CMF radiative transfer requires a strictly monotonic velocity field. Even in a stationary wind, this might not always be the case and therefore in some cases the solution for v(r) resulting from the hydrodynamic equation cannot be applied. This will be discussed in more detail in a future paper and does not apply to the models presented in this work. In the case of stars with low or moderate log g it is sufficient to assume that the intrinsic line profiles are Gaussians with a constant Doppler broadening velocity v_dop during the CMF calculations, while in the formal integral, where the emergent spectrum is eventually obtained, detailed thermal, microturbulent, and pressure broadening are accounted for with their depth dependence. The necessary atomic data required for our calculations are taken from a variety of sources, combining <cit.>, <cit.>, <cit.>, the opacity project <cit.>, the Kurucz atomic database, the NIST atomic database, private communication with K. Butler, and several minor sources listed in <cit.>. For argon, which turns out to be one of the more important elements driving the outer wind of our demonstration model, we use opacity project data combined with level energies from the NIST atomic database. The iron group elements, which are treated as one generic element with the help of the superlevel approach described in detail in <cit.>, are modeled using Kurucz data if available (usually up to ionization stage X), while opacity project data are applied to also cover the higher ions. The collisional cross-sections are described with different formulae depending on the element and ion, most notably from <cit.>, K. Butler (priv. comm.), and <cit.>. The latter is also used for all collisional transitions of argon and the iron group superlevels if the corresponding radiative transition is allowed. The collisional cross-sections of forbidden transitions are mostly approximated by <cit.>, though for very few ions a more specialized treatment is applied <cit.>. For the bound-free transitions, we use <cit.> for all collisional ionizations while we branch between OP data fits, <cit.> and <cit.>, depending on the ion for the photoionization cross-sections. The hydrogenic approximation <cit.> is widely used as fallback, which also applies for Ar and the iron group.
§.§ Model parameters
PoWR model atmospheres can be specified by a set of fundamental parameters.
These are:
* The chemical abundances of all considered elements, typically given as mass fractions X_i.
* Two out of the three quantities connected by the Stefan-Boltzmann law, namely:
 * The stellar radius R_∗, defined at a specified Rosseland continuum optical depth τ_∗ (default: τ_∗ = 20).
 * The effective temperature T_∗ at the radius R_∗.
 * The luminosity L_∗ = 4π R_∗^2 σ_sb T_∗^4.
* The stellar mass M_∗, either given explicitly via input of M_∗ or log g, or calculated from the luminosity if the stellar mass is not given otherwise. In the latter case, depending on the stellar type, the mass-luminosity relations from <cit.> or <cit.> are used.
* The mass-loss rate Ṁ or an implying quantity (see below).
* The terminal wind velocity v_∞ and the wind velocity law v(r), directly implying the density stratification via the continuity Eq. (<ref>).
* The clump density contrast D(r) = ρ_cl(r)/ρ(r) = f_V^-1(r) <cit.>.
As an alternative to the mass-loss rate Ṁ, one can also specify a line emission measure in the form of either the transformed radius
R_t := R_∗ [ (v_∞/2500 km/s) / (Ṁ√(D)/10^-4 M_⊙/yr) ]^2/3 ,
<cit.> or the wind strength parameter
Q_ws := Ṁ√(D)/(R_∗ v_∞)^3/2 ,
<cit.>. Since all other quantities in Eqs. (<ref>) and (<ref>) have to be specified anyhow, these quantities imply a certain value of Ṁ. Using R_t or log Q_ws can be helpful when calculating model grids or searching for models with a similar emission line strength in their normalized spectra. In hydrodynamically consistent models, Ṁ and v(r) are adjusted in order to ensure that the hydrodynamic equation is fulfilled throughout the atmosphere. However, as they also define the density stratification, starting values are still required as an input for the calculations. Depending on the starting model, the resulting Ṁ and v(r) of a converged hydrodynamically consistent model can differ significantly from their initial specifications.
§.§ Iteration scheme
With the main concepts and parameters given in the previous paragraphs, the overall iteration scheme for a PoWR model can now be summarized by the following steps:
* Model start: Setup of radius and frequency grids, first velocity stratification, start approximation for the population numbers n⃗(r) and the radiation field J_ν(r)
* Main iteration
 * Solution of the radiative transfer in the comoving frame
 * Temperature corrections (if necessary)
 * Solution of the statistical equations
 * (optional:) Solution for the hydrostatic or hydrodynamic equation to update the velocity/density stratification
* Formal integration: Calculation of the emergent spectrum in the observer's frame
For efficiency, we use the method of variable Eddington factors, where the Eddington factors f_ν and g_ν, which have to be obtained from the more costly ray-by-ray radiative transfer, are only updated every few iterations while otherwise only the moment equations are solved in order to obtain the new radiation field and the radiative acceleration. For convenience, we schedule stratification updates immediately before the next renewal of the Eddington factors. A sketch of this iteration scheme is given in Fig. <ref>. The stratification update, which can be either restricted to the quasi-hydrostatic part as outlined in <cit.> or the full hydrodynamical update described in this work, is fully integrated into the main iteration. The particular details of the hydrodynamic stratification update are discussed in Sect. <ref>.
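As a small numerical illustration of the two alternative mass-loss specifications introduced in the parameter list above, the sketch below evaluates R_t and log Q_ws. The helper names are ours, and the input numbers are merely representative O-supergiant-like values (only D = 10 and log Ṁ = -5.8 are taken from the demonstration model discussed later; radius and terminal velocity are placeholders):

```python
import math

def transformed_radius(R_star, v_inf, mdot, D):
    """R_t = R_* [ (v_inf/2500 km/s) / (Mdot*sqrt(D)/1e-4 Msun/yr) ]^(2/3); R_star in R_sun."""
    return R_star * ((v_inf / 2500.0) / (mdot * math.sqrt(D) / 1e-4)) ** (2.0 / 3.0)

def log_Qws(R_star, v_inf, mdot, D):
    """Wind-strength parameter Q_ws = Mdot*sqrt(D) / (R_* v_inf)^(3/2), same units as above."""
    return math.log10(mdot * math.sqrt(D) / (R_star * v_inf) ** 1.5)

# Placeholder inputs: R_* in R_sun, v_inf in km/s, Mdot in Msun/yr.
print("R_t      =", transformed_radius(R_star=15.9, v_inf=2046.0, mdot=10**-5.8, D=10.0))
print("log Q_ws =", log_Qws(R_star=15.9, v_inf=2046.0, mdot=10**-5.8, D=10.0))
```

Models with equal R_t (or log Q_ws) are expected to show similar normalized emission line strengths, which is why these quantities are convenient grid coordinates.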
§ HYDRODYNAMICALLY CONSISTENT MODELS
§.§ Start approximation
In order to integrate the hydrodynamic equation in the form of Eq. (<ref>), the quantities a(r) and a_rad(r) have to be specified as a function of radius. This cannot be done from scratch and therefore a starting approximation for the stellar atmosphere has to be given, including a velocity stratification. Unless the new model is only a small variation of an already existing hydrodynamically consistent model, where one could employ the old velocity field, usually a model with a β-law connected to a consistent hydrostatic solution <cit.> is adopted as a starting approximation. For the mass-loss rate, it has turned out to be helpful if at least the global energy budget is close to consistency. To obtain this budget, the hydrodynamic equation is written in the form
v dv/dr + GM_∗/r^2 = a_rad - (1/ρ) dP/dr ,
and then integrated over r and multiplied with Ṁ:
Ṁ ∫ ( v dv/dr + GM_∗/r^2 ) dr = Ṁ ∫ ( a_rad - (1/ρ) dP/dr ) dr ⟺ L_wind = W_wind .
This last equation (Eq. <ref>) describes the balance between the modeled wind luminosity L_wind and the provided power W_wind. Dividing Eq. (<ref>) by L_wind yields the so-called work ratio
Q := W_wind/L_wind .
While stellar atmosphere models with Q < 1 do not provide a radiative acceleration that is sufficient to drive the wind, models with Q > 1 exhibit a radiative acceleration that could actually drive a stronger wind. Models with Q = 1 exactly supply the power that is required to drive the wind. The corresponding models are therefore consistent on a “global” scale, as they fulfill the integrated form of the hydrodynamic equation. Although they usually do not fulfill this equation locally, models with Q ≈ 1 are usually well suited as a starting model for the full hydrodynamic calculations. A similar approach to identify proper start approximations has been used by <cit.> in their calculation of a hydrodynamically consistent WC atmosphere.
§.§ Obtaining a consistent velocity field
With a given starting model, all terms in the hydrodynamic equation are known, including the radiative acceleration a_rad(r) as a function of radius, and the terms ℱ̃(r) and 𝒢̃(r) can be calculated. Since the ratio of the latter two essentially defines the right-hand side of Eq. (<ref>), both terms have to vanish at exactly the same radius r_c, defining the critical point of the equation. For the non-consistent starting model, this is generally not the case and thus the radius r_𝒢̃ := r(𝒢̃ = 0) will differ from the radius r_ℱ̃ := r(ℱ̃ = 0). In some cases, ℱ̃ can become zero at more than one point, thus indicating a non-monotonic solution for v(r). While this does not prevent the integration of the hydrodynamic equation, a non-monotonic v(r) cannot be used in the CMF radiative transfer and thus such cases are discarded at the moment. Assuming that there is only one radius r_ℱ̃ at which we have ℱ̃ = 0, this point acts as the current candidate for the critical point. Starting at r_ℱ̃ with v(r_ℱ̃) = a(r_ℱ̃), Eq. (<ref>) is then integrated inwards and outwards to obtain the new velocity field. By setting the wind velocity to the current value of a at r_ℱ̃, the critical point condition is automatically fulfilled and we achieve a velocity field that smoothly passes through the critical point. In principle it would also be possible to start the integration at r_𝒢̃ or any point in-between r_ℱ̃ and r_𝒢̃, but this would require a modification of r_ℱ̃ during the hydrodynamic iteration in order to fulfill the critical point condition. Several approaches have been tested during the development phase and none of them turned out to be favorable.
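The outward branch of this integration can be sketched in a few lines. The toy model below is ours and only illustrative (made-up a², Γ_rad); instead of invoking l'Hôpital's rule at the 0/0 point, this simplified version starts a small step beyond the critical point, which is sufficient for a demonstration:

```python
import numpy as np

GM = 1.0
a2 = lambda r: 0.02 / r                                   # toy sound-speed profile
da2 = lambda r, h=1e-6: (a2(r + h) - a2(r - h)) / (2 * h)
gamma = lambda r: 1.0 - 0.8 * np.exp(-4.0 * (r - 1.0))    # toy Gamma_rad(r)

F = lambda r: 1.0 - gamma(r) - 2 * a2(r) * r / GM + r**2 / GM * da2(r)
G = lambda r, v: 1.0 - a2(r) / v**2

# locate r_F on a grid (stand-in for the critical-point candidate):
rr = np.linspace(1.0, 30.0, 30000)
r_c = rr[np.argmin(np.abs(F(rr)))]

def rhs(r, v):                                            # dv/dr = -(g/v) * F/G
    return -(GM / r**2 / v) * F(r) / G(r, v)

# integrate outwards with a fixed-step RK4, starting just beyond r_c with v slightly above a:
r, v = r_c * 1.001, np.sqrt(a2(r_c)) * 1.01
h = 0.01
while r < 30.0:
    k1 = rhs(r, v); k2 = rhs(r + h / 2, v + h / 2 * k1)
    k3 = rhs(r + h / 2, v + h / 2 * k2); k4 = rhs(r + h, v + h * k3)
    v += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4); r += h
print("r_c =", r_c, "  v(R_max) ~", v)
```

In the real code the step size is adaptive and the inward branch is integrated as well; the point here is only that, once a_rad(r) and a(r) are tabulated, the velocity field follows from a plain ODE integration.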
Essentially, the approach by <cit.> made use of modifying ℱ̃ when changing the mass-loss rate. While this worked fine for some WR models, it turned out to fail for OB models. Furthermore, their approach required the calculation of a force multiplier parameter α(r), which can only be obtained by a modified radiative transfer calculation, thereby essentially doubling the calculation times for the radiative transfer before each hydro stratification update. On the other hand, modifications of the Γ_rad-term in ℱ̃ without any prediction on how the radiative acceleration will react to changes of v(r) or Ṁ turn out not to be precise enough. Thus the direct integration from r_ℱ̃ proved to be the best method, both in terms of stability and performance.
§.§ Calculation of the mass-loss rate
By starting the integration of the velocity field outwards from the critical point of the hydrodynamic equation, one automatically obtains the terminal velocity v_∞ when reaching the outer boundary, since v_∞ ≈ v(R_max) as long as R_max is chosen to be sufficiently large. This method so far provides a new velocity field that fulfills the hydrodynamic equation, but does not perform any update of the mass-loss rate Ṁ. This is already a powerful tool, but the major issue with such a solution is its inconsistency with some of the initial stellar parameters; since the integration starts from the critical point, also the inner boundary value v_min := v(R_∗) is not fixed, but instead obtained from integrating the hydrodynamic equation. As long as r_ℱ̃ and r_𝒢̃ are not identical before the integration, the total optical depth at R_∗ will change after the velocity update. This especially means that T_∗ and the corresponding R_∗ in a converged model would refer to a different optical depth than in the starting model. As long as the total optical depth is larger than the old one, one could infer the values for the original optical depth. However, the consequence would be that the obtained hydrodynamic model refers to a different temperature and radius than the starting model, thereby being inconsistent with the radiative transfer calculation. This problem can be solved by introducing another constraint, namely the conservation of the total optical depth. More precisely, for practical reasons we demand the conservation of the total Rosseland continuum optical depth, which we denote as τ_Ross(R_∗) throughout this work. As a consequence, the mass-loss rate Ṁ needs to be updated, but the main model parameters T_∗ and R_∗ now keep their intended reference. Unfortunately, finding a proper update method for Ṁ is not a trivial task. A simple approach would be to use the definition of the optical depth and replace the density via Eq. (<ref>) to obtain an expression that explicitly contains Ṁ:
τ_Ross(R_∗) = ∫_R_max^R_∗ κ_Ross(r) dr = ∫_R_max^R_∗ ρ(r) ϰ_Ross(r) dr = (Ṁ/4π) ∫_R_max^R_∗ ϰ_Ross(r)/(r^2 v(r)) dr .
However, even though Eq. (<ref>) seems like a straight-forward approach to extract the density and thus the mass-loss rate, one must keep in mind that ϰ_Ross(r) is not generally depth-independent, as some of the contributions (e.g., the bound-free opacities) do not just have a linear dependence on ρ(r). In fact, using this expression leads to large changes in Ṁ, often over-predicting the required changes by orders of magnitude. Since the critical point tends to change significantly even for moderate Ṁ updates, this method can only be successfully applied in very few cases and thus cannot be considered as a standard approach.
The sensitivity of the critical point was already found by <cit.> when they used an early form of this concept with the Thomson opacity instead of the Rosseland continuum opacity, as they did not account for the free-free and bound-free continuum in their radiative force. Similar problems occur when employing other quantities with analogous descriptions, such as the integrated density without the mass absorption coefficient,
∫_R_max^R_∗ ρ(r) dr = (Ṁ/4π) ∫_R_max^R_∗ 1/(r^2 v(r)) dr ,
or the total atmosphere mass
M_atm = 4π ∫_R_∗^R_max ρ(r) r^2 dr = Ṁ ∫_R_∗^R_max 1/v(r) dr .
In order to obtain a more stable method, this work utilizes a completely different approach, where we approximate the response of the radiative acceleration a_rad to a change of the mass-loss rate by a factor f. A typical example for an O-star model is shown in Fig. <ref>, where the ratio of unmodified to modified acceleration is shown for different values of f^-1. As the radiative acceleration is defined as
a_rad(r) = (4π/c) (1/ρ(r)) ∫_0^∞ κ_ν H_ν dν = (16π^2/c) (r^2 v(r)/Ṁ) ∫_0^∞ κ_ν H_ν dν ,
there is a leading dependence on Ṁ^-1, therefore making f^-1 the more interesting quantity for the plots. The lower panel of Fig. <ref> also illustrates that the particular value of f changes the amplitude, but not the general behavior of the response, since one can scale all the results almost perfectly to the same curve resp(r) using the relation
resp(r) = 1/(1-f) [ 1 - a_rad(Ṁ)/(f · a_rad(f·Ṁ)) ] .
With the help of Eq. (<ref>) it would therefore be possible to implement the detailed response of a_rad(r) to suggested changes of Ṁ in order to improve the calculation of the mass-loss rate in a hydro iteration. However, similar to what has been discussed for the α(r)-approach from <cit.>, this would double the CMF calculation time before each update of the velocity field. Furthermore, the relation (<ref>) cannot account for the typically more complex changes of v(r) and thus even a mass-loss rate obtained in this detailed way does not lead to a better model convergence. In fact, such methods have been tested and have turned out not to be better than the more approximate way described below. Apart from calculating the total response of the radiative acceleration in our test calculations, we also calculated the isolated response of the line and the total continuum term. In Fig. <ref> the results for two different mass-loss modification factors f are shown. As we can see, the continuum shows only a very small response that can be neglected, while the line response is stronger and roughly of the order f^-1. In fact f^-1 is never completely reached, especially not in the outer wind, but since we want to use our approximation only for the calculation of the mass-loss rate, this is not a problem. By slightly over-predicting the effect of the change in Ṁ, we avoid potential “overshooting” of the correction. We therefore assume for our calculations of the mass-loss rate update that the radiative acceleration a_rad changes for a mass-loss rate modified by the factor f such that
a_rad(f·Ṁ) = (1/f) a_lines(Ṁ) + a_cont(Ṁ) ,
with a_rad(Ṁ) = a_lines(Ṁ) + a_cont(Ṁ) denoting the acceleration with the original mass-loss rate. Using this assumption, we calculate the factor f which would be necessary to obtain ℱ̃ = 0 at the current r_𝒢̃. For a converged model, this is automatically fulfilled and we obtain f = 1, that is, no more change in Ṁ.
In the general case, the condition ℱ̃(f·Ṁ) = 0 leads to
f = Γ_lines(r_𝒢̃) / ( 1 - Γ_cont(r_𝒢̃) - (2 a(r_𝒢̃) r_𝒢̃/GM) [ a(r_𝒢̃) - r_𝒢̃ (da/dr)|_r=r_𝒢̃ ] ) ,
with Γ_lines = a_lines/g and Γ_cont = a_cont/g. To avoid overly large corrections, which could significantly disturb the model convergence, only 50% of the calculated change for Ṁ is usually applied in one iteration. The described update of the mass-loss rate now leads to a convergence of r_ℱ̃ and r_𝒢̃. However, there is so far no guarantee that the total optical depth of the converged model will be identical to that of the starting model. To ensure also the latter, the whole atmosphere stratification is radially adjusted before integrating the hydrodynamic equation. This adjustment is performed already with the new mass-loss rate and ensures the conservation of the total Rosseland continuum optical depth.
§.§ Scheme of the hydrodynamic stratification update
The implementation concept of the hydrodynamical stratification update in regards to the overall model iteration is similar to the one described in <cit.> for models with a consistent quasi-hydrostatic part. In fact, the full hydrodynamic treatment and the quasi-hydrostatic update are alternative branches for the step (2d) in the overall iteration scheme outlined in Sect. <ref>. In order to ensure that the overall model calculations are not vastly disrupted by a stratification update, such updates are not performed during each iteration, but only immediately before the Eddington factors in the following radiative transfer job are recalculated and if the overall corrections to the population numbers are below a certain prespecified level (in addition, it is also possible to require a certain level of flux consistency for a stratification update). The hydrodynamic stratification update itself can be described by the following scheme:
* Check whether the hydrodynamic equation is fulfilled at all depth points: If so, no update needs to be performed.
* The quantities Γ_rad(r) and a(r) are calculated based on the current model stratification.
* If r_ℱ̃ ≠ r_𝒢̃, the mass-loss rate Ṁ is updated by a factor f as described in Eq. (<ref>)
* Iteration:
 * Starting from v(r_ℱ̃) = a(r_ℱ̃), the new velocity field is obtained via integrating Eq. (<ref>) with a fourth-order Runge-Kutta method using adaptive step sizes. The quantities ℱ̃ and 𝒢̃ are calculated on the fly. To avoid numerical issues near the critical point, we make use of l'Hôpital's rule. As the hydrodynamic integration requires a resolution below the regular depth grid, we use spline interpolation to obtain the intermediate values for Γ_rad and a.
 * With the new v(r) now given, we calculate the resulting new density stratification and total Rosseland continuum optical depth τ_Ross(R_∗)
 * If τ_Ross(R_∗) is conserved, the iteration ends. Otherwise Γ_rad(r) and a(r) are shifted radially and the next iteration cycle is started.
* If necessary, the depth grid spacing is updated to better reflect the new stratification. All necessary quantities are interpolated from the old to the new grid.
After the hydrodynamic stratification update is complete, the overall iteration cycle continues with the next radiative transfer calculation (we refer also to Sect. <ref> for details of the overall iteration). Due to the inclusion in the overall iteration cycle, the values of Γ_rad(r) and a(r) used in the next hydrodynamic stratification update implicitly include all effects of the former stratification and mass-loss rate update.
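The Ṁ-update step above reduces to a one-line formula plus damping. The following sketch (function and variable names are ours; the input numbers at r_𝒢̃ are placeholders) implements it:

```python
def mdot_correction(gamma_lines, gamma_cont, a_c, da_dr_c, r_c, GM=1.0, damping=0.5):
    """Factor f that would make F~ vanish at the current r_G~, assuming
    a_rad(f*Mdot) = a_lines/f + a_cont; only a fraction of the change
    (50% by default, as in the scheme above) is applied per iteration."""
    f = gamma_lines / (1.0 - gamma_cont
                       - 2.0 * a_c * r_c / GM * (a_c - r_c * da_dr_c))
    return 1.0 + damping * (f - 1.0)

# Placeholder values evaluated at r_G~:
print(mdot_correction(gamma_lines=0.55, gamma_cont=0.35,
                      a_c=0.02, da_dr_c=-0.001, r_c=1.05))
```

For a converged model the undamped factor is f = 1 and the returned correction is likewise 1, so the update leaves Ṁ unchanged.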
While this procedure can significantly increase the total number of overall iterations compared to non-HD models, this iterative approach allows us to refrain from any (semi-)analytical assumptions for the radiative acceleration, making this method essentially applicable for the whole range of stars that can be described by PoWR atmosphere models. The atmosphere model is eventually considered to be converged if all of the following requirements are met:
* The relative corrections to the population numbers are below a certain level (typically 10^-3).
* Flux consistency is achieved within a certain accuracy (typical setting: relative departures may not be larger than 10^-2).
* The hydrodynamical equation is fulfilled throughout the whole atmosphere within a specified accuracy (typically 5%, but we allow larger deviations at the inner and outer boundary).
With the exception of the last point of course, these criteria are the same as for non-HD models, which are used for reproducing observed spectra and empirically obtaining stellar and wind parameters. Based on the converged atmosphere model, the emergent spectrum is subsequently calculated in the observer's frame, allowing us to cross-check the results with observed spectra.
§ RESULTS
In order to test whether or not our new method is applicable to OB stars, where the approach from <cit.> failed, we calculated a hydrodynamically consistent model for the well-studied O supergiant ζ Pup/HD 66811. Starting from the parameters given in <cit.>, we first calculated a standard PoWR model using a prescribed β-law connected to a consistent quasi-hydrostatic part as described in <cit.>. While a model reproducing most spectral features already requires Ne, Mg, Si, P, and S to be considered, the work ratio of such a model is only Q = 0.74. By adding further elements, most notably Ar, the work ratio was close to unity and the model could be used as a starting approach for the hydrodynamic calculations. In our first approach, we applied the same clumping stratification as <cit.>, that is, depth-dependent clumping with a maximum value of D_∞ = 20 or f_V,∞ = D^-1_∞ = 0.05 and no interclump medium. This is a standard approach in state-of-the-art atmosphere models for hot and massive stars <cit.> and allows one to calculate the population numbers for the clumped wind, which has a density increased by a factor D(r) compared to a smooth wind. For the radiative transfer, on the other hand, one can average between the clump and interclump medium as clumps are assumed to have a small size in comparison to the mean free path of the photons. Furthermore, instead of the standard description of depth-dependent clumping in PoWR <cit.>, we employed the same parametrization as in the CMFGEN model from <cit.>, namely
f_V(r) = f_V,∞ + (1 - f_V,∞) · exp(-v(r)/v_cl) ,
introduced in <cit.>, where the clumping “onset” is described by a velocity v_cl. In their analysis for ζ Pup, <cit.> use v_cl = 100 km/s. We started with a similar stratification, but quickly realized that the value for v_cl is not sufficient and leads to solutions with an overly high terminal velocity together with an overly low mass-loss rate. Stratifications where we set v_cl = 0.5 v_sonic lead to better results, but since the sonic point can change during the iterations, we decided to implement another clumping stratification with
f_V(r) = f_V,∞ + (1 - f_V,∞) · exp(-τ_cl/τ_Ross(r)) ,
that is, where we specify the clumping onset via an optical depth τ_cl instead of a particular velocity.
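The two clumping parametrizations can be compared directly. In the sketch below the function names are ours and τ_cl = 0.1 is an arbitrary illustrative choice (only f_V,∞ = 0.1, i.e. D_∞ = 10, matches the final model discussed next):

```python
import math

def fV_velocity_onset(v, fV_inf=0.1, v_cl=100.0):
    """f_V = f_V,inf + (1 - f_V,inf) * exp(-v/v_cl): onset tied to the wind velocity."""
    return fV_inf + (1.0 - fV_inf) * math.exp(-v / v_cl)

def fV_tau_onset(tau_ross, fV_inf=0.1, tau_cl=0.1):
    """f_V = f_V,inf + (1 - f_V,inf) * exp(-tau_cl/tau_Ross): onset tied to optical depth."""
    return fV_inf + (1.0 - fV_inf) * math.exp(-tau_cl / tau_ross)

for vv in (10.0, 100.0, 1000.0):   # km/s, increasing outwards
    print("v =", vv, " f_V =", fV_velocity_onset(vv))
for tt in (10.0, 0.1, 1e-3):       # tau_Ross, decreasing outwards
    print("tau =", tt, " f_V =", fV_tau_onset(tt))
```

Both laws give an essentially unclumped (f_V → 1) inner atmosphere and f_V → f_V,∞ in the outer wind; the τ-based variant has the practical advantage that its onset does not drift when the sonic point moves between iterations.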
This approach was eventually applied in the final model presented in this work. As we discuss later on, it was furthermore necessary to reduce the maximum clumping value to D_∞ = 10 in our model. The complete set of input parameters for the final hydrodynamical model is compiled in Table <ref>. In models with a predefined velocity law in the wind part, it is usually sufficient to include only those elements that can either be seen in the spectrum or contribute significantly to the blanketing. However, for the hydrodynamic models, it is essential to include all ions that have a significant contribution to the radiative force, even if they neither leave a noticeable imprint in the spectrum nor significantly affect the blanketing. Yet, accounting for all elements and their ions from hydrogen up to the iron group would be numerically extremely expensive and thus practically impossible. Fortunately, various elements and ions are only important in a certain parameter regime and thus can be neglected outside of these. A list of the ions considered in the model presented in this work is given in Table <ref>. The acceleration balance of the hydrodynamically consistent model is shown in Fig. <ref>. Throughout the atmosphere, an excellent agreement between the outward and inward forces is obtained. Up until the critical point, not only the line acceleration and the Thomson term are important, but there are also significant contributions from the gas pressure and the true continuum, that is, the continuum not produced by Thomson scattering, to the driving. In the wind part, both of the latter terms become negligible. However, it has to be noted that this is not necessarily the case for all kinds of hot stars. In the more dense Wolf-Rayet winds, situations can occur where the true continuum is not negligible in the wind <cit.>. The importance of the pressure term strongly depends on the assumptions for microturbulence. In this model, a constant value of v_mic = 15 km/s was used in the hydrodynamic calculations. When using larger values or depth-dependent descriptions with v_mic increasing outwards, the a_press-term can become significant again in the outer wind. The resulting velocity field for the converged hydrodynamic model is shown in Fig. <ref>, where it is compared to the stratification of two standard models using a prescribed velocity field in the form of so-called β-laws, that is,
v(r) = v_∞ ( 1 - R_∗/r )^β .
When implementing the β-law into a stellar atmosphere model, Eq. (<ref>) is usually slightly modified due to several reasons, most prominently the necessity to connect the wind domain with a proper quasi-hydrostatic domain <cit.> and the numerical issues that would occur for v(R_∗) = 0. This modification can be done in more than one way and can thus also differ between different stellar atmosphere codes. In PoWR, two ways of “fine parametrization” for the β-law are available, namely
v(r) = p ( 1 - R_∗/(r + R_s) )^β and v(r) = p ( 1 - f_s R_∗/r )^β ,
with their fine parameters p and R_s or f_s, respectively. While R_s or f_s are responsible for a proper connection of the wind and the quasi-hydrostatic domain, p ≈ v_∞ ensures that the specified terminal velocity is reached at the outer boundary. All fine parameters are automatically calculated depending on the choices of the velocity field, the connection criterion, and whether the parametrization from Eq. (<ref>) or from Eq. (<ref>) should be used.
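A minimal sketch of the two fine parametrizations follows; in PoWR the fine parameters are derived automatically from the hydrostatic connection condition, whereas here they are simply set by hand for illustration (all numbers are placeholders):

```python
def beta_law_shift(r, R_star, p, R_s, beta):
    """v(r) = p (1 - R_*/(r + R_s))^beta  -- variant with a radius shift R_s."""
    return p * (1.0 - R_star / (r + R_s)) ** beta

def beta_law_scale(r, R_star, p, f_s, beta):
    """v(r) = p (1 - f_s R_*/r)^beta  -- variant with a scaling factor f_s."""
    return p * (1.0 - f_s * R_star / r) ** beta

R_star, v_inf = 1.0, 2000.0            # R_* in arbitrary units, v_inf in km/s
for rr in (1.01, 1.5, 3.0, 10.0, 100.0):
    print(rr,
          beta_law_scale(rr, R_star, p=v_inf, f_s=0.999, beta=0.9),
          beta_law_shift(rr, R_star, p=v_inf, R_s=0.001, beta=0.9))
```

With f_s slightly below 1 (or R_s slightly above 0), the law stays finite and non-zero at R_∗, which is what makes the connection to the quasi-hydrostatic domain numerically possible.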
For the hydrodynamically consistent model presented in this work, there is of course no prescribed wind velocity field, but for the comparison calculations we had to make a choice and used Eq. (<ref>) since it is more widely used in modern PoWR models. Interestingly, the velocity field in the lower part of the wind, just above the sonic point, can be approximated with a β-law using β = 2.4, but the outer part of the wind is best matched with β = 0.9. Due to the fact that the fine parametrization is not unique as described above, these deduced β-law approximations can vary slightly (10 to 20%) when using different fine parametrizations. In-between the two parts there is a steep increase of the velocity, steeper than could be modeled by a β-law connected to the quasi-hydrostatic part. The reasons for this kind of velocity field are revealed when looking at the particular contributions to the radiative acceleration plotted in Fig. <ref>. Around and shortly above the critical point, only the iron group elements and the electron scattering contribute significantly to the radiative acceleration. In contrast, further outwards, many more elements contribute, and for r > 1.6 R_∗ N, O, and Ar start to exceed not only Γ_e, but also the contribution of the iron group elements. S, Cl, and Ne follow further out and for r ≳ 2 R_∗ also the contributions of C and P are comparable to Γ_e, having even a bit more impact than the iron group at this distance. The complex contribution to the radiative force from the various elements is directly imprinted in the resulting velocity field and thus “naturally” explains the deviations from a standard β-law, which had already been derived from a tailored fit of the UV resonance line profiles by <cit.>. While we do not see a noticeable velocity plateau in the final model, such a plateau occurred in several of the test models calculated during the preparation of this work. The plateau becomes visible if the iron group contribution already exceeds Γ_e around or even below the sonic point, which can already happen for slightly higher mass-loss rates than derived for ζ Pup here. The derived parameters of our hydrodynamically consistent model are compiled in Table <ref>, while the spectral energy distribution and important parts of the normalized spectrum are compared to observations in Fig. <ref>. The UV observation was obtained with the IUE satellite (SWP15296) while the optical spectrum stems from an earlier observation within our group (Hamann, priv. comm.). The photometric data used in the SED plot have been taken from <cit.>. While the displayed model is intended rather to demonstrate the new technique described in the previous section and therefore has not been fine-tuned to precisely reproduce the spectral features of ζ Pup, one can still compare the main parameters to spectral analyses. While the starting parameters were motivated by the non-hydrodynamical model results from <cit.>, their high clumping factor of 20 would lead to an underprediction of the Hα electron scattering wings and thus was reduced to the more typical value of 10. The mass-loss rate of log Ṁ = -5.8 is slightly lower than in <cit.>, but higher than the value obtained by <cit.>. The emergent spectrum of our hydrodynamical model shows all the typical features of an early Of-type star with a fast wind and strong emission in both N III λλ 4634-40-42 and He II λ 4686 <cit.>. When comparing the detailed spectral appearance to the observation of ζ Pup in Fig.
<ref>, one can conclude that the UV spectrum, including the iron forest, is well reproduced apart from the precise shape of the nitrogen profiles, which can be affected by various parameters including so-called “superionization” due to X-rays, which were not included in our model. The optical spectrum reveals that the mass-loss rate might be slightly too high to reproduce this observation, since the emission in Hα and the other prominent emission lines appears slightly too strong. Also, Hβ and Hγ seem to be filled up by wind emission. The overall appearance, however, as well as the spectral energy distribution, are nicely reproduced, illustrating that we have a realistic stellar atmosphere model that would require only minor parameter adjustments to allow for a more detailed discussion. Unfortunately, even these minor changes can lead to a significant amount of calculation effort when constructing hydrodynamically consistent models, which is why we refrain from further efforts in this introductory paper.
§ CONCLUSIONS
In this work we constructed the first hydrodynamically consistent PoWR model for an O supergiant. A new method for the consistent solution of the hydrodynamic equation, together with the solution of the statistical equations, the temperature stratification, and the radiative transfer has been developed and successfully applied. This new technique enables us to construct a new generation of PoWR models where the velocity field and the mass-loss rate are calculated consistently. To obtain the velocity field, the hydrodynamic equation is integrated inwards and outwards from the critical point. Since we provide the radiative acceleration calculated in the comoving frame as a function of radius, the critical point in our hydrodynamic equation is identical to the sonic point. The uniqueness of the critical point also provides the necessary condition to obtain the mass-loss rate. As we calculate the velocity field from the hydrodynamic equation, it is mandatory to include all ions that significantly contribute to the radiative acceleration somewhere in the atmosphere. In the case of our demonstration model, especially the inclusion of Ar was crucial as it provides a major contribution to the driving in the outer wind, comparable to N and O, although it does not leave detectable features in the spectral ranges typically observed. In the region around the critical point, the most important line-driving contribution stems from the iron group elements. Although these elements turn out to be important contributors throughout the whole atmosphere for our demonstration model, there are several other elements exceeding their contribution in the outer part, namely N, O, S, Ar, Cl, C, P and partly Ne. Further follow-up calculations for a wider parameter range will be necessary to shed light on details; namely, which ions are responsible, and how this picture will change when transitioning to different mass-loss or temperature regimes. The obtained velocity field cannot be approximated by a single β-law. The resulting mass-loss rate of our hydrodynamic model is in the range of what has been determined by empirical analyses for the O4 supergiant ζ Pup, and the resulting spectrum resembles the observed line spectrum and the spectral energy distribution (cf. Fig. <ref>). Our calculations confirm that the value of the mass-loss rate crucially depends on the location of the critical point, which in turn reacts to several factors, such as the assumed microturbulence, the onset of clumping, and the Fe abundance.
Follow-up research will therefore be required in order to study the precise influence of these and other parameters. We would like to thank the anonymous referee for the fruitful suggestions that helped to improve this paper. We would also like to acknowledge helpful discussions with D. John Hillier. The first author of this work (A.S.) is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant HA 1455/26. T.S. is grateful for financial support from the Leibniz Graduate School for Quantitative Spectroscopy in Astrophysics, a joint project of the Leibniz Institute for Astrophysics Potsdam (AIP) and the Institute of Physics and Astronomy of the University of Potsdam. This research made use of the SIMBAD and VizieR databases, operated at CDS, Strasbourg, France.
Given k∈ℕ, we study the vanishing of the Dirichlet series D_k(s,f) := ∑_n≥1 d_k(n)f(n)n^-s at the point s=1, where f is a periodic function modulo a prime p. We show that if (k,p-1)=1, or if (k,p-1)=2 and p≡3 (mod 4), then there are no odd rational-valued functions f≢0 such that D_k(1,f)=0, whereas in all other cases there are examples of odd functions f such that D_k(1,f)=0. As a consequence, we obtain, for example, that the set of values L(1,χ)^2, where χ ranges over odd characters mod p, are linearly independent over ℚ.
[2010] 11M41, 11L03, 11M20 (primary), 11R18 (secondary)
§ INTRODUCTION
Let p be prime and let K be a number field. For a function f:ℤ→K which is periodic modulo p, let L(s,f) be the Dirichlet series
L(s,f) := ∑_n=1^∞ f(n)/n^s ,
which is absolutely convergent for ℜ(s)>1. Since L(s,f) = p^-s ∑_a=1^p f(a) ζ(s,a/p), where ζ(s,x) is the Hurwitz zeta-function, which is meromorphic in ℂ with a simple pole of residue 1 at s=1 only, one has that L(s,f) admits meromorphic continuation to ℂ with (possibly) a simple pole at s=1 only, of residue m(f), where m(f) := (1/p) ∑_a mod p f(a). In particular, if m(f)=0 then L(s,f) is entire. In the papers <cit.> Chowla asked whether it is possible that L(1,f)=0 for some rational-valued periodic function f satisfying m(f)=0 and with f not identically zero. Following an approach outlined by Siegel, Chowla solved the problem in the case where f is odd by showing that in this case L(1,f) is never zero. Later, Baker, Birch and Wirsing <cit.> used Baker's theorem on linear forms in logarithms to give a complete answer to Chowla's question, showing that L(1,f)≠0 whenever K∩ℚ(ξ_p)=ℚ, where ξ_n := e(1/n) with e(x) := e^2πix. In the following years Chowla's problem was considered and generalized by several other authors; for example, we mention the work of Gun, Murty and Rath <cit.> where other points besides s=1 were considered (and where the condition on K was slightly relaxed) and the works of Okada <cit.> and of Chatterjee and Murty <cit.>, who gave equivalent criteria for the vanishing of L(1,f) when no condition on K is imposed. See also <cit.> for a variation of the proof of the result by Baker, Birch and Wirsing. In this paper we consider the analogue of Chowla's problem for
D_k(s,f) := ∑_n=1^∞ d_k(n)f(n)/n^s = ∑_n_1,…,n_k=1^∞ f(n_1⋯n_k)/(n_1⋯n_k)^s ,
where d_k(n) := ∑_m_1⋯m_k=n 1. As for L(s,f), D_k(s,f) is absolutely convergent for ℜ(s)>1 and, expressing each of the series in the second expression for D_k in terms of Hurwitz zeta-functions, one obtains analytic continuation for D_k(s,f) to ℂ∖{1}. In the case where k>1, the analyticity of D_k at s=1 is equivalent to having m(f)=0 and f(0)=0 (see Lemma <ref>). Notice that if f is odd, then both conditions are automatically met. If f is not odd, then one can easily see that D_k(1,f)≠0 by appealing to Schanuel's conjecture. We remind the reader that Schanuel's conjecture predicts that for any z_1,…,z_n∈ℂ which are linearly independent over ℚ the transcendence degree of ℚ(z_1,…,z_n, e^z_1,…,e^z_n) over ℚ is at least n. Let p≥3 be prime and let k∈ℕ. Let f:ℤ→ℚ̄ be p-periodic with f(0)=m(f)=0.
Then, under Schanuel's conjecture, we have that if D_k(1,f)=0 then f is odd. Proposition <ref> is an easy consequence of the fact that for χ odd L(1,χ)/π is an algebraic number, whereas π and the values L(1,χ), as χ ranges over even non-principal Dirichlet characters mod p, are known to be algebraically independent under Schanuel's conjecture. In fact the full Schanuel's conjecture is not needed here; an analogue of Baker's theorem for linear forms in k-th powers of logarithms would suffice for Proposition <ref>. Thus, at least conditionally, to determine whether D_k(1,f) can be zero we just need to consider the case of f odd. The case (k,p-1)=1 is completely analogous to the case k=1 and one has that D_k(1,f)≠0 if K∩ℚ(ξ_p)=ℚ. If (k,p-1)>1 then the situation changes drastically and already for k=2 and p=5 we can find non-trivial functions f such that D_2(1,f)=0. Indeed, if f is the odd 5-periodic function such that f(1)=1, f(2)=-2, then D_2(1,f)=0. Indeed,
∑_{n∈ℤ, n≡1 (mod 5)} d(|n|)/n = 2 ∑_{n∈ℤ, n≡2 (mod 5)} d(|n|)/n = 4π^2/(25√5)
(cf. (<ref>) below), where the sums have to be interpreted as the limits as X→∞ of their truncations at |n|≤X. Similarly, if f is the odd 13-periodic function such that
f(1)=18a, f(4)=18b, f(3)=18c, f(2)=19a+11b+4c, f(8)=-4a+19b+11c, f(6)=-11a-4b+19c
for any a,b,c∈ℚ, then D_2(1,f)=0. Notice the pattern and that the ordering we chose is not casual: indeed mod 13 we have (2^0,2^2,2^4)≡(1,4,3) and (2^1,2^3,2^5)≡(2,8,6), with 2 a primitive root mod 13. The above examples are far from being unique. Indeed, if (k,p-1)>1, then one has no non-trivial solutions to D_k(1,f)=0, with f:ℤ→ℚ odd and periodic mod p, if and only if (k,p-1)=2 and p≡3 (mod 4). We classify the possible cases in the following Theorem, which generalizes the result of Chowla corresponding to the case k=1.
Let k∈ℕ, let p be an odd prime and let K be a number field with K∩ℚ(ξ_p)=ℚ. Let V be the vector space over K consisting of odd p-periodic functions f:ℤ→K and let V_0 be the subspace V_0 := {f∈V | D_k(1,f)=0}. Then,
dim_K(V_0) ≥ dim_K(V) (r-1)/r if v_2(p-1) > v_2(k), and dim_K(V_0) ≥ dim_K(V) (r-2)/r if v_2(p-1) ≤ v_2(k),
where r=(k,p-1) and v_2(a) denotes the 2-adic valuation of a. Moreover, equality holds if (k,p-1)≤2 or if (k,p-1)=4 and p≡5 (mod 8). In particular, dim_K(V_0)=0 if and only if (k,p-1)=1 or if (k,p-1)=2 and p≡3 (mod 4).
In the cases (k,p-1)=2 and p≡1 (mod 4), or (k,p-1)=4 and p≡5 (mod 8), we shall also show that D_k(1,f)≠0 whenever f≢0 has support entirely contained in the set of square residues mod p (or, analogously, of square non-residues). As a consequence of this and of Theorem <ref> we will deduce the following.
Let k∈ℕ and let p be an odd prime with either (k,p-1)≤2, or p≡5 (mod 8) and (k,p-1)=4. Then the values L(1,χ)^k are linearly independent over ℚ as χ runs through the odd Dirichlet characters mod p. Moreover, under Schanuel's conjecture the same result holds true also when χ varies among all non-principal Dirichlet characters mod p.
It seems likely that the equality in (<ref>) (as well as a suitable modification of Theorem <ref>) holds true with no conditions on (k,p-1); in order to prove this one would need to show that certain explicit linear combinations of k-th powers of Dirichlet L-functions are non-zero.
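Both numerical examples above are easy to check by machine. The short sketch below (plain Python; it uses the finite cotangent-sum representation of x_2(r;p) introduced in Proposition <ref> below, for which D_2(1,f)=0 is equivalent to ∑_{r≤(p-1)/2} f(r)x_2(r;p)=0 for odd f) should print values at the level of rounding error:

```python
from math import pi, tan

def x2(r, p):
    """x_2(r;p) = p^-2 * sum over m1*m2 = r (mod p), (m_i,p)=1, of cot(pi m1/p)cot(pi m2/p)."""
    cot = lambda t: 1.0 / tan(t)
    return sum(cot(pi * m / p) * cot(pi * ((r * pow(m, -1, p)) % p) / p)
               for m in range(1, p)) / p**2   # pow(m, -1, p) needs Python >= 3.8

# mod 5 example: f(1)=1, f(2)=-2, so D_2(1,f)=0 iff x2(1;5) - 2*x2(2;5) = 0
print(x2(1, 5) - 2 * x2(2, 5))

# mod 13 example, checked on the basis (a,b,c) = e_1, e_2, e_3:
vals = {1: (18, 0, 0), 4: (0, 18, 0), 3: (0, 0, 18),
        2: (19, 11, 4), 8: (-4, 19, 11), 6: (-11, -4, 19)}
for i in range(3):
    print(sum(coef[i] * x2(r, 13) for r, coef in vals.items()))
```

(Using f(8) and f(6) in place of f(-5)=-f(5) and f(-7)=-f(7) is harmless since both f and x_2 are odd.)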
At first sight Theorem <ref> doesn't seem to say anything about the interesting case of the odd part of the Estermann function at s=1, D_sin(1,a/p) := ∑_n≥1 d(n)sin(2πna/p)/n, where (a,p)=1. Indeed, the number field generated by sin(2π/p),…,sin(2π((p-1)/2)/p) has a non-trivial intersection with ℚ(ξ_p). However, in fact one has (see <cit.> Hilfssatz 14 or <cit.> Theorem 4.4)
D_sin(1,a/p) = -π ∑_n≥1 B(an/p)/n , with B(x) = {x} - 1/2 for x∉ℤ and B(x)=0 otherwise,
where {x} denotes the fractional part of x. Thus, the non-vanishing of D_sin(1,a/p) follows directly from Chowla's result (i.e. Theorem <ref> with k=1). The proof of Theorem <ref> is in fact a variation of Chowla's proof in <cit.>. In this proof he showed that the values cot(π/p),…,cot(π(p-1)/(2p)) are linearly independent over ℚ by proving that if g is a generator of (ℤ/pℤ)^* then the determinant of the matrix (cot(πg^{2(i+j)}/p))_{1≤i,j≤(p-1)/2} is a non-zero multiple of the relative class number h_p^-. One then obtains the result on the non-vanishing of L(1,f) from the fact that L(1,f) can be written as a linear combination of cot(π/p),…,cot(π(p-1)/(2p)). In our case, the analogue of the cotangent function is given by the sums
x_k(r;p) := (1/p^k) ∑^*_{m_1,…,m_k mod p, m_1⋯m_k≡r mod p} cot(πm_1/p)⋯cot(πm_k/p) ,
for p,k∈ℕ and r∈ℤ, and where ^* indicates that the sum is restricted to moduli coprime to p. Note that r↦x_k(r;p) is odd. We notice that x_k(r;p) is reminiscent of several other arithmetic objects. For example, in the case k=2 (and ignoring the difference in the normalizations) if we replace m_1m_2≡r by m_1≡rm_2, we obtain the Dedekind sum, whereas if we replace one of the cotangents cot(πx/p) by its discrete Fourier transform (i.e. essentially the fractional part {x/p}), then we obtain the Vasyunin sum (for which see e.g. <cit.>). The closest analogy, however, is with the hyper-Kloosterman sum K_k(r;p), which is obtained by replacing cot(πx/p) by e(x/p). Indeed, for k even both x_k(·;p) and K_k(·;p) take values in the real cyclotomic field ℚ(ξ_p)^+, where ℚ(ξ_n)^+ := ℚ(ξ_n+ξ_n^-1), and behave in the same way with respect to the action of the Galois group Gal(ℚ(ξ_p)/ℚ) (for k odd x_k(r;p)∈ℚ(ξ_4p)^+). More precisely, and analogously to what happens for the Kloosterman sums, it is easy to see (c.f. Corollary <ref>) that if H is the subgroup of order (k,p-1) of Gal(ℚ(ξ_p)/ℚ) ≃ (ℤ/pℤ)^*, then i^k x_k(r;p) is in ℚ(ξ_p)^H, the subfield fixed by H. If one could show that x_k(r;p)≠±x_k(ℓ;p) for all r≢±ℓ (mod p), then one would obtain that each of the values x_k(r;p) for (r,p)=1 generates the aforementioned fixed field. We refer to <cit.> for some results on the algebraic properties of K_k(·;p) related to this and to Theorem <ref> below.
Let p be a prime, let k∈ℕ and let K be a number field such that K∩ℚ(ξ_p)=ℚ. Then the values x_k(1;p),…,x_k((p-1)/2;p) are linearly independent over K if and only if (k,p-1)=1 or if (k,p-1)=2 and p≡3 (mod 4). If p≡1 (mod 4) and (k,p-1)=2, or if p≡5 (mod 8) and (k,p-1)=4, then the values of each of the sets S_± := {x_k(r;p) | r≤(p-1)/2, (r/p)=±1} are linearly independent over K, where (r/p) is the Legendre symbol.
We also mention that, as all the other aforementioned sums, x_k(r;p) has some nice arithmetic features. For example, for p≡3 (mod 4), k=2 and (r,p)=1 one has
Tr_{ℚ(ξ_p)^+/ℚ}(x_k(r;p)) = 2 (r/p) h(-p)^2 p^-1 ,
where h(-p) is the class number of ℚ(√(-p)) (c.f. Corollary <ref>). We conclude this introduction by giving an alternative “analytic” expression of x_k(r;p), which is what will allow us to prove the above Theorems. Also, from this formula one can easily deduce the asymptotic for the moments of x_k(r;p).
Let k∈ℕ, let p be a prime and let r∈ℤ with (r,p)=1. Then
x_k(r;p) = (1/2)(2/π)^k ∑_{n∈ℤ, n≡r mod p} d_k(|n|)/n .
In particular, if f:ℤ→ℂ is odd and periodic modulo p, then
D_k(1,f) = 2(π/2)^k ∑_r=1^{(p-1)/2} f(r) x_k(r;p) .
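As an illustration, the following sketch compares the finite cotangent definition of x_2(r;p) with a symmetric truncation of the series in Proposition <ref>. The series converges slowly, so only agreement to a couple of decimal places should be expected at this truncation length:

```python
from math import pi, tan

def xk2(r, p):
    """x_2(r;p) from the finite cotangent definition."""
    cot = lambda t: 1.0 / tan(t)
    return sum(cot(pi * m / p) * cot(pi * ((r * pow(m, -1, p)) % p) / p)
               for m in range(1, p)) / p**2

def divisor_counts(X):
    d = [0] * (X + 1)
    for m in range(1, X + 1):
        for n in range(m, X + 1, m):
            d[n] += 1
    return d

def truncated_series(r, p, X=100000):
    """Symmetric truncation over n = r (mod p), 0 < |n| <= X, of d(|n|)/n."""
    d = divisor_counts(X)
    s = 0.0
    for n in range(1, X + 1):
        if n % p == r % p:
            s += d[n] / n
        if (-n) % p == r % p:
            s -= d[n] / n
    return s

p, r = 7, 2
print(xk2(r, p))                                     # exact, finite sum
print(0.5 * (2 / pi) ** 2 * truncated_series(r, p))  # slowly convergent approximation
```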
Let p,k,m∈ℕ with p prime. Then for m≥2 even we have, for any ε>0,
∑^*_{r mod p} x_k(r;p)^m = 2 (2^{k-1}/π^k)^m ∑_{n≥1} d_k(n)^m/n^m + O_{m,k,ε}(p^{-1+ε}) ,
whereas for m odd the left-hand side is trivially 0.
§.§.§ Acknowledgement
This work was started when both authors were visiting the Centre de Recherches Mathématiques in Montréal. Both authors want to thank this institution for providing excellent working conditions. Also, we thank Stéphane Louboutin for useful suggestions. The work of the first author is partially supported by PRIN “Number Theory and Arithmetic Geometry”. The second author was a member of UMI 3457 supported by CNRS funding and is also supported by the ANR (grant ANR-14CE34-0009 MUDERA).
§ THE SUM X_K(R;P)
Let p≥3 be prime and let k∈ℕ. Then, i^k x_k(r;p)∈ℚ(ξ_p). More precisely, if k is even then x_k(r;p)∈ℚ(ξ_p)^+, whereas if k is odd then x_k(r;p)∈ℚ(ξ_4p)^+. Moreover, for all (c,p)=1, let σ_c be the automorphism of ℚ(ξ_p) sending ξ_p↦ξ_p^c. Then, σ_c(i^k x_k(r;p)) = i^k x_k(c^k r;p) for all r∈ℤ.
By definition we have
x_k(r;p) = (i^k/p^k) ∑^*_{m_1⋯m_k≡r mod p} (e(m_1/p)+1)/(e(m_1/p)-1) ⋯ (e(m_k/p)+1)/(e(m_k/p)-1) ,
and so x_k(r;p)∈i^k ℚ(ξ_p). Now we have ℚ(ξ_p,i)=ℚ(ξ_4p) and so, since x_k(r;p)∈ℝ, we have x_k(r;p)∈ℚ(ξ_p)∩ℝ=ℚ(ξ_p)^+ for k even and x_k(r;p)∈ℚ(ξ_4p)^+ for k odd. Also,
σ_c(i^k x_k(r;p)) = ((-1)^k/p^k) ∑^*_{m_1⋯m_k≡r mod p} (e(cm_1/p)+1)/(e(cm_1/p)-1) ⋯ (e(cm_k/p)+1)/(e(cm_k/p)-1) = i^k x_k(c^k r;p) ,
by making the change of variables m_i→cm_i for each i=1,…,k.
Let k,p∈ℕ with p prime and let r∈ℤ. Then, i^k x_k(r;p)∈ℚ(ξ_p)^H, where H is the subgroup of order (k,p-1) of Gal(ℚ(ξ_p)/ℚ).
It's well known that Gal(ℚ(ξ_p)/ℚ) ≃ (ℤ/pℤ)^* is cyclic, so H is well defined; the Corollary then follows immediately from Lemma <ref>.
We now give a proof of Proposition <ref> and then compute the trace of x_k(r;p). For the sake of simplicity we shall ignore convergence issues when manipulating the order of summation of conditionally convergent series. One could make every step rigorous in several ways, for example by some analytic continuation arguments, or by using the “approximate” functional equations for the various series (in the form of exact formulae).
We have
∑_{n∈ℤ, n≡r mod p} d_k(|n|)/n = (1/φ(p)) ∑_{χ mod p} χ̄(r) ∑_{n∈ℤ∖{0}} d_k(|n|)χ(n)/n = (1/φ(p)) ∑_{χ mod p} χ̄(r) (1-χ(-1)) L(1,χ)^k .
Now, if χ is a primitive odd character, then we have
L(1,χ) = (πiτ(χ)/p) ∑^*_{a mod p} χ̄(a) {a/p} = (πiτ(χ)/p) ∑^*_{a mod p} χ̄(a) B(a/p) = (πi/p) ∑^*_{a,m mod p} χ̄(a) B(a/p) χ(m) e(m/p) ,
where τ(χ) = ∑_{m mod p} χ(m) e(m/p) is the Gauss sum (see <cit.>, p. 36). Now, for (m,p)=1 we have
∑^*_{a mod p} B(a/p) e(ma/p) = -(i/2) cot(πm/p)
and so, after making the change of variables m→ma, we have
L(1,χ) = (π/2p) ∑^*_{m mod p} χ(m) cot(πm/p) .
Noticing that the right-hand side is zero if χ is even, we then have
∑_{n∈ℤ, n≡r mod p} d_k(|n|)/n = (2/φ(p)) (π/2p)^k ∑_{χ mod p} χ̄(r) ( ∑^*_{m mod p} χ(m) cot(πm/p) )^k .
The result then follows by expanding the power and exploiting the orthogonality relation for Dirichlet characters.
Let k∈ℕ and p≥3 be prime; let r∈ℤ with (r,p)=1. Then if v_2(p-1)>v_2(k), we have Tr_{ℚ(ξ_p)/ℚ}(i^k x_k(r;p))=0. Otherwise, we have
Tr_{ℚ(ξ_p)^+/ℚ}(x_k(r;p)) = (1/4)(2/π)^k ∑^*_{χ mod p, χ^k=χ_0} (1-χ(-1)) χ̄(r) L(1,χ)^k .
In particular, if (k,p-1)=2 and p≡3 (mod 4), then Tr_{ℚ(ξ_p)^+/ℚ}(x_k(r;p)) = 2^{k-1} (r/p) h(-p)^k p^{-k/2}.
By Lemma <ref> and Proposition <ref>, given a generator g of (ℤ/pℤ)^* we have
Tr_{ℚ(ξ_p)/ℚ}(i^k x_k(r;p)) = ∑_{j=1}^{p-1} σ_{g^j}(i^k x_k(r;p)) = (i^k/2)(2/π)^k ∑_{j mod p-1} ∑_{n∈ℤ, n≡rg^{kj} mod p} d_k(|n|)/n .
Now, if v_2(p-1)>v_2(k), writing k=2^{v_2(k)} k' and making the change of variables j→j+(p-1)/2^{v_2(k)+1} we have that the condition on the sum over n becomes n ≡ rg^{kj+k'(p-1)/2} ≡ -rg^{kj} (mod p). Thus, making the change n→-n we obtain the opposite of the original sum and so Tr_{ℚ(ξ_p)/ℚ}(i^k x_k(r;p))=0. Now assume v_2(p-1)≤v_2(k); notice that in particular k is even.
By (<ref>) we have
∑_{j mod p-1} ∑_{n∈ℤ, n≡rg^{kj} mod p} d_k(|n|)/n = (1/φ(p)) ∑_{j mod p-1} ∑^*_{χ mod p} χ̄(r) χ̄(g)^{kj} (1-χ(-1)) L(1,χ)^k .
Taking the sum over j inside, we obtain 0 unless χ̄(g)^k=1, that is, unless χ is a character of order dividing (k,p-1). Thus,
Tr_{ℚ(ξ_p)^+/ℚ}(x_k(r;p)) = (1/2) Tr_{ℚ(ξ_p)/ℚ}(x_k(r;p)) = (1/4)(2/π)^k ∑^*_{χ mod p, χ^k=χ_0} (1-χ(-1)) χ̄(r) L(1,χ)^k .
If (k,p-1)=2 and p≡3 (mod 4), then the only character contributing to the sum is the quadratic character (·/p) and so
Tr_{ℚ(ξ_p)^+/ℚ}(x_k(r;p)) = (1/2)(2/π)^k (r/p) L(1,(·/p))^k = 2^{k-1} (r/p) h(-p)^k p^{-k/2} ,
by the class number formula.
We now give a proof of Corollary <ref>. We don't give many details as the proof is very similar to (and actually a bit simpler than) the proof of Theorem 1 of <cit.>.
Let χ mod p be an odd character. Using the functional equation
Λ(s,χ) := (p/π)^{(s+1)/2} Γ((1+s)/2) L(s,χ) = (τ(χ)/(i p^{1/2})) Λ(1-s,χ̄) ,
and proceeding as in <cit.> we obtain that for every ε>0
L(1,χ)^k = ∑_{n=1}^∞ (d_k(n)χ(n)/n) g(n/p^{2k}) + O_{ε,k}(p^{-5k/2}) ,
where g(x) is a smooth function such that g(x)=1 for 0≤x≤1, 0≤g(x)≤1 for 1≤x≤2, and g(x)=0 for x>2. Now, applying (<ref>), (<ref>) and (<ref>) and then going back to the congruence condition we have
2(π/2)^k x_k(r;p) = ∑_{n∈ℤ∖{0}, n≡r mod p} (d_k(|n|)/n) g(|n|/p^{2k}) + O_{ε,k}(p^{-5k/2}) .
Thus, using the bound d_k(n) ≪_{ε,k} n^ε, we have
∑^*_{r mod p} ( π^k x_k(r;p)/2^{k-1} )^{2m} = ∑_{(n_1,…,n_{2m})∈(ℤ∖{0})^{2m}, n_i≡n_j mod p ∀ 1≤i<j≤2m} d̃_k(n_1,…,n_{2m})/(n_1⋯n_{2m}) + O_{k,m,ε}(p^{-5k/2+1+ε})
where d̃_k(n_1,…,n_{2m}) := d_k(|n_1|)⋯d_k(|n_{2m}|) g(|n_1|/p^{2k})⋯g(|n_{2m}|/p^{2k}). The contribution of the terms with n_a≠n_b for some a≠b is trivially
≪_{k,m,ε} ∑_{0<|n_a|,|n_b|<2p^{2k}, n_a≡n_b mod p, n_a≠n_b} p^{4mkε}/|n_a n_b| ≪_{k,m,ε} p^{ε(4mk+2)-1} .
It follows that
∑^*_{r mod p} ( π^k x_k(r;p)/2^{k-1} )^{2m} = ∑_{n∈ℤ∖{0}} d̃_k(n,…,n)/n^{2m} + O_{k,m,ε}(p^{-1+ε}) .
Since ∑_{n≥p^{2k}} (d_k(n)/n)^{2m} ≪_{k,ε} p^{(1-(1-ε)2m)2k}, we can remove the contribution of the functions g in d̃_k at a negligible cost and we obtain the claimed result.
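The trace evaluation just obtained is easy to verify numerically; the constant 2^{k-1} in the “in particular” part follows from the step above, and the sketch below checks it for p=7, k=2 (where h(-7)=1) by summing x_2 over the Galois orbit r↦c^2 r:

```python
from math import pi, tan

def x2(r, p):
    cot = lambda t: 1.0 / tan(t)
    return sum(cot(pi * m / p) * cot(pi * ((r * pow(m, -1, p)) % p) / p)
               for m in range(1, p)) / p**2

def trace_plus(r, p):
    """Trace over Q(xi_p)^+: sigma_c maps x_2(r;p) to x_2(c^2 r;p), c = 1..(p-1)/2."""
    return sum(x2((c * c * r) % p, p) for c in range(1, (p - 1) // 2 + 1))

p, k, h = 7, 2, 1                                  # h(-7) = 1
for r in (1, 3):                                   # (1/7) = +1, (3/7) = -1
    legendre = 1 if pow(r, (p - 1) // 2, p) == 1 else -1
    print(trace_plus(r, p), 2 ** (k - 1) * legendre * h ** k * p ** (-k / 2))
```

Both lines should print matching pairs, ±2/7 ≈ ±0.2857.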
Now, by Schanuel's conjecture we have that π and the values of L(1,χ) for χ p even are algebraically independent over(this is stated in the paragraph after Corollary 2 in <cit.>, and essentially proved in Section 4 therein, without including π; however the same proof allows one to include π since log(-1)=π i when choosing the branch for the logarithm suitably). Thus we could have D_k(1,f_odd)=-D_k(1,f_even) only if c_χ(f_even)=0 for all χ even, i.e. if f_even=0 and so if f is odd. By Proposition <ref>, at least conditionally, in order to find functions f:→ such that D_k(1,f)=0 we need to take f odd. Then, for f odd with D_k(1,f)=0 by (<ref>) we have∑_r=1^(p-1)/2f(r)x_k(r;p)=1/2∑_j=0^p-2f(g^j)x_k(g^j;p)=0,where g is any generator of (/p)^* and where we used thatg^j+p-1/2=-g^j.If f(1),… f(p-1)∈ K with K∩(ξ_p)= (if k is even K∩(ξ_p)^+= would suffice), then we can extend the automorphism σ_c defined in Lemma <ref> to an automorphism of K(ξ_p) such that σ_c acts trivially on K (see <cit.> Corollary 6.5.2 p. 161).By a slight abuse of notation we still indicate the automorphism by σ_c. Then, multiplying (<ref>) by i^k and applying σ_g^ℓ we obtain new conditions for f:∑_j=0^p-2f(g^j)x_k(g^kℓ+j;p)=0for all ℓ∈. It is clearly sufficient to take 0≤ℓ<p-1/u where u=(k,p-1) and in fact, if v_2(p-1)>v_2(k) then we can take 0≤ℓ<p-1/2u since the following p-1/2u equations are just the negative of the first ones. Thus we have a system of linear equations in the values of f. We now study the determinants of the relevant matrices for such system.Let m≥1. For v=(v_0,…, v_m-1), let A_+(v):=[ v_0 v_1 … v_m-2 v_m-1; v_1 v_2 … v_m-1 v_0; ⋮ ⋮ ⋮ ⋮; v_m-1 v_0 … v_m-3 v_m-2; ]A_-(v):=[v_0v_1…v_m-2v_m-1;v_1v_2…v_m-1 -v_0;⋮⋮⋮⋮;v_m-1 -v_0… -v_m-3 -v_m-2;].Then, (A_+(v)) =(sin(πm2)- cos(πm2)) ∏_ℓ=0^m-1 ∑_j=0^m-1v_jξ_m^jℓ, (A_-(v)) =(sin(πm2)+ cos(πm2)) ∏_ℓ=0ℓ odd^2m∑_j=0^m-1v_jξ_2m^jℓ.Also, for j∈{0,…, m-1} and v = (v_0,v_1,…,v_m-1)∈^m, letv_j^±=v_j^±(v):=(v_j,…, v_m-1,±v_0,…, ±v_j-1) , and let v_j^-=-v_j-m^- for j∈{m,…, 2m-1}. Also, let u_j^± =v_j^±( u) with u=(1,0,…,0)∈^m. Then A_±( v_j^±)=C_±( u_j^±)A_±( v), where C_± is a matrix defined in the proof. One can easily check that A_±( v)A_±( u)=C_±( v), whereC_+(v):=[ v_0 v_m-1 … v_2 v_1; v_1 v_0 … v_3 v_2; ⋮ ⋮ ⋮ ⋮; v_m-1 v_m-2 … v_1 v_0; ]C_-(v):=[v_0 -v_m-1… -v_2 -v_1;v_1v_0… -v_3 -v_2;⋮⋮⋮⋮;v_m-1v_m-2…v_m-1v_0;].Similarly, one shows that the identity A_±( v_j^±)=C_±( u_j^±)A_±( v) holds. The eigenvectors of C_+( v) are c_ℓ^+=(1,ξ_m^(m-1)ℓ,ξ_m^(m-2)ℓ,…,ξ_m^ℓ)^T for 0≤ℓ<m where T indicates the transpose, whereas the eigenvectors of C_-( v) are c_r^-=(ξ_2m^mr,ξ_2m^(m-1)r,…,ξ_2m^r)^T with1≤ r <2m and r odd. The eigenvalues are given byC_+(v) c_ℓ^+=∑_j=0^m-1v_jξ_m^jℓc_ℓ^+,C_-(v) c_r^-=∑_j=0^m-1v_jξ_2m^jrc_r^-. The statement on the determinants then follows since (A_±( u))=sin(π m/2)∓cos(π m/2).Let p≥3 be prime. Let k∈ and assume v_2(p-1)> v_2(k). Write k=k'u with (k,p-1)=u and let p-1=uv (so that v is even). For (r,p)=1, let M_g,r be the matrix M_g,r:=(σ_g^i+j(x_k(r;p)))_0≤ i,j<v/2 where g is a generator of (/p)^*. Then(M_g,r)=(sin(πv4)+ cos(πv4))χ_*(r)^v^2/42^(k-1)v/2/π^kv/2 ∏_ℓ=0ℓ odd^v1/u∑_auχ_*(r)^vaL(1,χ_*^ℓ+va)^k,for a generator χ_*of the group of characters mod p. Moreover, for all j∈ we have σ_g^j(M_g,r)=C_-( u_j^-)M_g,r with C_- and u_j^- as in Lemma <ref>. Writing t=g^k' we have that t is also a primitive root mod p. 
Thus, by Lemma <ref>, we have M_g,r=(x_k(rt^u(i+j);p))_0≤ i,j<v/2 and sincex_k(rt^u(ℓ+v/2);p)=x_k(rt^uℓ+p-1/2;p)=x_k(-rt^uℓ;p)=-x_k(rt^uℓ;p) we have M_g,r=A_-( x), with x=(x_k(r;p),x_k(rt^u;p),…, x_k(rt^p-1-u;p)).Thus, by Lemma <ref> we have σ_g^j(M_g,r)=A_-( x_j)=C_-( u_j^-)A_-( x)=C_-( u_j^-)M_g,r. Also,(M_g,r)=(sin(πv4)+ cos(πv4)) ∏_ℓ=0ℓ odd^v ∑_j=0^v/2-1x_k(rt^ju;p)ξ_v^jℓ.Now,∑_j=0^v/2-1x_k(rt^ju;p)ξ_v^jℓ =∑_j=0^v/2-1x_k(rt^ju;p)ℓν_t(t^ju)/p-1=1/2∑_j=0^v-1x_k(rt^ju;p)ℓν_t(t^ju)/p-1, where ν_t(c) is the minimum non-negative integer such that t^ν_t(c)≡ c p. Then, writing η(c):=ν_t(c) /p-1 if (c,p)=1 and η(c):=0 otherwise, we have that η is a primitive odd character modulo p. Also, η generates the group of characters mod p. Then, we re-write the above as∑_j=0^v/2-1x_k(rt^ju;p)ξ_v^jℓ =1/2_cpu|ν(c)x_k(rc;p)η(c)^ℓ=1/2u∑_au_cpx_k(rc;p)η(c)^ℓaν(c)/u =1/2u∑_au_cpx_k(rc;p)η(c)^ℓ+va =η(r)^ℓ/2u∑_auη(r)^va_cpx_k(c;p)η(c)^ℓ+va. By Proposition <ref> with f = η^ℓ+va for ℓ odd this is =2/π^k η(r)^ℓ/2u∑_auη(r)^vaL(1,η^ℓ+va)^k.Finally, we have∏_ℓ=0ℓ odd^vη(r)^ℓ=η(r)^∑_ℓ=0^v(1-(-1)^ℓ)ℓ/2=η(r)^v^2/4and the result follows. With the same notation and conditions of Lemma <ref>, if (k,p-1)=1 we have(M_g,r)=(sin(π(p-1)4)+ cos(π(p-1)4))rp^(p-1)/22^k(p-2)-p-1/2(h_p^-)^k/p^k(p+3)/4,where h_p^- is the relative class number of thefield (ξ_p), that ish_p^-:= h_p/h_p^+ where h_p and h_p^+ are the class numbers of (ξ_p) and(ξ_p)^+ respectively. First we observe that for a generator χ_* of the group of characters we have χ^(p-1)/2=(· p). Then, by the proposition we have(M_g,r) =(sin(π(p-1)4)+ cos(π(p-1)4)) rp^(p-1)/2 2^(p-1)(k-1)/2/π^k/2(p-1)∏_ℓ=1ℓ odd^p-1L(1,χ_*^ℓ)^k. Now, referring to <cit.> (Chapters 3 and 4)for the basic results on cyclotomic fields, we have for s>1,∏_ℓ=1ℓ odd^p-1L(s,χ_*^ℓ)= ∏_χpoddL(s,χ) = ∏_χpL(s,χ)/∏_χpevenL(s,χ) = ζ_(ξ_p)(s)/ζ_(ξ_p)^+(s),where ζ_K(s) denotes the Dedekind the zeta-function corresponding to the field K.Then by the class number formula we have (c.f. <cit.> p. 41-42)∏_ℓ=1ℓ odd^p-1L(1,χ_*^ℓ)=π^p-1(h_p/h_p^+)^22^p-1/4p^2p^p-2/p^(p-3)/2^1/2.The Corollary then follows.With the same notation and conditions of Lemma <ref>, if (k,p-1)=2 and p≡ 1 4 we have(M_g,r)=(sin(π(p-1)8)+ cos(π(p-1)8))2^(k-2)(p-1)/4/π^k(p-1)/4 ∏_ℓ=0ℓ odd^(p-1)/2L(1,χ_*^ℓ)^k+(r p)L(1,(·p)χ_*^ℓ)^k. Let p≥3 be prime. Let k∈ and assume v_2(p-1)≤v_2(k). Write k=k'u with (k,p-1)=u and let p-1=uv. For (r,p)=1, let M_g,r' be the matrix M_g,r':=(σ_g^i+j(x_k(r;p)))_0≤ i,j<v where g is a generator of (/p)^*. Then(M_g,r')=(-1)^(v-1)/2χ_*(r)^v^22/π^kv∏_ℓ=0ℓ odd^2v1/u∑_au/2χ_*(r)^2vaL(1,χ_*^ℓ+2va)^k,for a generator χ_* of the group of characters mod p. Moreover, for all j∈ we have σ_g^j(M_g,r')=C_+( u_j^+)M_g,r' with C_+ and u_j^+ as in Lemma <ref>.We proceed as in the proof of Lemma <ref> setting t=g^k' and M_g,r'=(x_k(rt^u(i+j);p))_0≤ i,j<v. Then, since x_k(rt^u(ℓ+v);p)=x_k(rt^uℓ;p) we have M_g,r'=A_+( x), with x=(x_k(r;p),x_k(rt^u;p),…, x_k(rt^p-1-u;p)).Thus, by Lemma <ref> we have σ_g^j(M_g,r')=C_+( u_j^+)M_g,r' and(M_g,r)=(sin(πv2)- cos(πv2)) ∏_ℓ=0^v-1 ∑_j=0^v-1x_k(rt^ju;p)ξ_v^jℓ.Now, since v is odd then sin(π v2)- cos(π v2)=(-1)^(v-1)/2. Also, as in the proof of Lemma <ref> we have∑_j=0^v-1x_k(rt^ju;p)ξ_v^jℓ=η(r)^ℓ/u∑_auη(r)^va _ cp x_k(c;p)η(c)^ℓ+va.By symmetry the innermost sum is zero if ℓ+va is even and otherwise it is 2π^kL(1,η^ℓ+va)^kby Proposition <ref>.Thus, ∑_j=0^v-1x_k(rt^ju;p)ξ_v^jℓ=2/π^kη(r)^ℓ+δv/u∑_au/2η(r)^2vaL(1,η^ℓ+δv+2va)^k,where δ=1 if 2|ℓ and δ=0 otherwise, and the lemma follows. 
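Since the determinant identity for the "left circulant" matrices A_±(v) is only sketched above, a quick numerical check may be reassuring. The snippet below (ours, not part of the paper) verifies the A_+ identity, namely det A_+(v) = (sin(πm/2) - cos(πm/2)) ∏_ℓ ∑_j v_j ξ_m^{jℓ}, for small m with numpy:

```python
# Numerical check (ours) of det A_+(v) = sign * product of DFT values,
# where A_+(v) has rows that are successive cyclic left-shifts of v.
import numpy as np

def A_plus(v):
    m = len(v)
    return np.array([[v[(i + j) % m] for j in range(m)] for i in range(m)])

rng = np.random.default_rng(0)
for m in range(1, 8):
    v = rng.normal(size=m)
    lhs = np.linalg.det(A_plus(v))
    sign = np.sin(np.pi * m / 2) - np.cos(np.pi * m / 2)  # equals +-1
    xi = np.exp(2j * np.pi / m)
    rhs = sign * np.prod([np.sum(v * xi ** (np.arange(m) * l))
                          for l in range(m)])
    assert abs(lhs - rhs.real) < 1e-8 * max(1.0, abs(lhs)), (m, lhs, rhs)
print("det A_+(v) identity verified for m = 1..7")
```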
With the same notation and conditions of Proposition <ref>, if (k,p-1)=2 and p≡ 3 4 we have(M_g,r')= (-1)^(p-3)/4rp 2^k(p-2)-p-1/2(h_p^-)^k/p^k(p+3)/4.One proceeds as for Corollary <ref>. With the same notation and conditions of Proposition <ref>, if (k,p-1)=4 and p≡ 5 8 we have(M_g,r')=(-1)^(p-5)/8χ_*(r)^(p-1/4)^22^(k-1)(p-1)/4/π^k(p-1)/4 ∏_ℓ=0ℓ odd^(p-1)/2L(1,χ_*^ℓ)^k+(rp)L(1,(·p)χ_*^ℓ)^k. First we show that the equality in (<ref>) and Theorem <ref> hold if (k,p-1)≤ 2 or if (k,p-1)=4 and p≡ 5 8.Let us begin with the case (k,p-1)=1 and assume D_k(1,f)=0 with f odd.As explained above, if K∩(ξ_p)=, then we have the system of equations (<ref>). Also, if (k,p-1)=1 then t:=g^k is also a primitive root and so after a change of variable we can rewrite (<ref>) as 1/2∑_j=0^p-2f(t^j)x_k(t^ℓ+j;p)=∑_j=0^(p-3)/2f(t^j)x_k(t^ℓ+j;p)=0.for 0≤ℓ≤p-3/2. Equivalently, M_g,1 f=0 where f=(f(t^0),…, f(t^p-3/2))^T. Thus, since by Corollary <ref> we have (M_g,1)≠0, then the only solution to this system is f= 0, i.e. f is identically zero. Thus we have proven that the equality holds in (<ref>) in this case. Now, we prove Theorem <ref> for the case (k,p-1)=1. Actually in this case we prove more generally that given a number field K such that K(ξ_p-1)∩(ξ_p)= then the values of L(1,χ)^k when χ runs over odd Dirichlet characters mod p are linearly independent overK. Assume ∑^*_χa_χL(1,χ)^k=0, with a_χ∈ K and a_χ=0 if χ even. Then, writing f:=∑^*_χa_χχ we have D_k(1,f)=0. Notice that f is odd and takes values in the field K(ξ_p-1). Thus, by Theorem <ref> in the case (k,p-1)=1 we have f≡ 0 and so a_χ for all χ, as desired. By a similar argument as above, and by Proposition 1, wecan see that the statement in Theorem <ref>under Schanuel's conjecture follows from the unconditional case.Next we prove the equality in (<ref>) and Theorem <ref> when (k,p-1)=2, p≡ 3 4. Let k=2k' so that t= g^k' is a generator of (/p)^*. Since -1 is a quadratic non-residue mod p, (± t^2j)_0≤ j < p-1/2spans all residues of (/p)^*. Thus we can rewrite the system (<ref>) as ∑_j=0^p-3/2 f(t^2j) x_k( t^2(j+ℓ);p) =0 for 0≤ℓ≤p-3/2.Then we conclude as above, the only difference is that in this case the system is M_g,1' f'=0with f'=(f(t^0), f(t^2),…, f(t^p-3))^T so that we apply Corollary <ref>.Now, assume (k,p-1)=2 and p≡ 1 4 and write k=2k' and t=g^k'. Then, after a change of variables (<ref>) gives∑_j=0^(p-3)/2f(t^j)x_k(t^2ℓ+j;p)=0 0≤ℓ <p-1/4. Equivalently, M_g,1 f_1+M_g,t f_t= 0, where f_r =(f(rt^0),f(rt^2),…, f(rt^(p-3)/2)) for r∈{1,t}. By Corollary <ref>, we have (M_g,1) =κ∏_ℓ=0ℓ odd^(p-1)/2L(1,χ_*^ℓ)^k+(1 p)L(1,(·p)χ_*^ℓ)^k =κ∏_ℓ=0ℓ odd^(p-1)/2L(1,χ_*^ℓ)^k'+iL(1,(·p)χ_*^ℓ)^k'L(1,χ_*^ℓ)^k'-i L(1,(·p)χ_*^ℓ)^k'for some κ≠0, χ_* a generator of the group of characters mod p. Now, for ℓ oddand p≡ 1 4 we have that both χ_*^ℓ and (· p)χ_*^ℓ are odd characters. Also, we have (ξ_p-1)∩(ξ_p)=and (k',p-1)=1. Thus, by the case (k,p-1)=1 of Theorem <ref> proven above we have that L(1,χ_*^ℓ)^k' and L(1,(· p)χ_*^ℓ)^k' are linearly independent over (ξ_p-1) for all ℓ odd. Thus, since i∈(ξ_p-1), we have (M_g,1)≠0 and so the above system can be written as f_1+M_g,1^-1M_g,t f_t= 0. We also observe that by Corollary <ref> for all j, σ_g^j(M_g,1^-1M_g,t)=M_g,1^-1C_-( u_j^-)^-1C_-( u_j^-)M_g,t=M_g,1^-1M_g,t. In particular, M_g,1^-1M_g,t has entries in K. Thus, the system f_1+M_g,1^-1M_g,t f_t=0 is defined over K, has rank p-1/4 and has p-1/2 free variables, whence the equality in (<ref>) holds in this case.We also notice that proceeding as above we also obtain (M_g,t)≠0. 
In particular, we have that if f_1=0 or if f_t=0, then M_g,1 f_1+M_g,t f_t= 0 has no non-trivial solutions. Equivalently, there is no solution to D(1,f)=0 with f odd and supported only on either square residues or on square non-residues.Now, we prove Theorem <ref> in the case p≡ 1 4, (k,p-1)=2. As above, it suffices to consider the odd characters case. We assume _χa_χL(1,χ)^k=0 with a_χ=0 if χ iseven anda_χ∈ if χ odd. By (<ref>) we have _χa_χ_mpχ(m)π mp^k=0and since wehave (ξ_p)∩(ξ_p-1)=, then there exists an automorphism σ of (ξ_p,ξ_p-1) which leaves (ξ_p) invariant and send ξ_p-1↦ξ_p-1^1+p-1/2. Notice that (1+p-1/2,p-1)=1 and so this automorphism is well defined. Also since every odd character χ can be written as χ(m)=χ_0(m)jν_g(m)/p-1 for some j odd and where g is a generator of (/p)^*, then σ(χ(m))=χ(m)jν_g(m)/2=χ(m)(m/p). Thus, applying σ to (<ref>) we obtain_χa_χ_mp(mp)χ(m)π mp^k=0or, equivalently, _χa_χL(1,(·/p)χ)^k=0. Summing this equation with the one we originally had, we obtain_χa_χL(1,χ)^k±L(1,(·p)χ)^k=0or, equivalently D_k(1,f_±)=0, where f_±=∑^*_χa_χ(1±(·/p))χ. Notice that f_+ and f_- are odd and take values in (ξ_p-1). Also, f_+ is supported only on square residues and f_- only on square non-residues. Thus, by what proven above we have that f_+ and f_-≡ 0 are identically zero and thus so is f≡ 0, as desired.The proof of the equality in (<ref>) and Theorem <ref> in the case p≡ 5 8, (k,p-1)=4 is analogous to the case p≡ 34, (k,p-1)=2. Let k =4k' and t=g^k'. The system (<ref>)is equivalent to ∑_0≤j < (p-1)/2 f(t^2j)x_k(t^2j+4ℓ;p) + ∑_0≤j < (p-1)/2 f(t^2j+1)x_k(t^2j+1+4ℓ;p)=0 for 0≤ℓ < p-1/4. Since -1≡t^(p-1)/2 p and p-1/2≡ 24, the span of t^2j when j runs over {0,…, p-5/4} is the same as ± t^4j. Thus this system is equivalent toM_g,1' f_1+M_g,t' f_t= 0with f_r = (f(rt^0),f(rt^4),…,f(rt^p-3))^T and we use Corollary <ref> to compute the determinant of the two matrices. For r∈{1,t} we obtain (M_g,r')=κ∏_ℓ=0ℓ odd^(p-1)/2L(1,χ_*^ℓ)^k”+i^(r)L(1,(· p)χ_*^ℓ)^k”L(1,χ_*^ℓ)^k”-i^(r) L(1,(· p)χ_*^ℓ)^k”where (t)=0, (1)=1,κ∈_≠0 and k”=k/2 (so that (k”,p-1)=2). We proved above the linear independence over of the L(1,χ)^k” with χ odd which implies(M_g,t')≠0. Moreover we have L(1,χ_*^ℓ)^k”±iL(1,(·p)χ_*^ℓ)^k” =L(1,χ_*^ℓ)^k'- ξ_8^4∓1L(1,(·p)χ_*^ℓ)^k'L(1,χ_*^ℓ)^k'+ ξ_8^4∓1L(1,(·p)χ_*^ℓ)^k'.We may use Theorem <ref> with (k,p-1)=1 in the more general case proven above with K = (ξ_8) since it satisfies K(ξ_p-1)∩(ξ_p)=.It follows that we also have (M_g,1')≠0 andproceeding as above we obtain the equality in (<ref>) and Theorem <ref>. Finally, it remains to prove that the inequality (<ref>) always holds. We consider the case v_2(p-1)> v_2(k) only, the other case being analogous. We write k=k'u with (k,p-1)=u and t=g^k', obtaining the systemM_g,1 f_1+M_g,t f_t+⋯ +M_g,t^u-1 f_t^u-1=0with f_r=(f(r t^0),f(r t^u),…, f(r t^p-1-u))^T. This is a system of p-1/2u equations in p-1/2 variables. Also, the automorphisms σ_t^j do not change the system since their effect is just that of multiplying all the above matrices by C_-( u_j^-) on the left. Thus, the system admits a base of solutions in Kand (<ref>) follows.Corollaries <ref> and <ref> give the non-vanishing of the determinant there computed and in the proof of Theorem <ref> and Theorem <ref> we also showed that also the determinants computed in Corollaries <ref> and <ref> are non-zero. Proposition <ref> then follows with the same argument as above.alpha
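As a numerical companion to the definition of x_k(r;p) used throughout (and under our reading of that definition), note that the standard identity i(ξ_p^m+1)/(ξ_p^m-1) = cot(πm/p) turns x_k(r;p) into a real cotangent sum, x_k(r;p) = p^{-k} ∑_{m_1⋯m_k≡r mod p} cot(πm_1/p)⋯cot(πm_k/p). The following sketch (ours, not the authors' code) checks this, and in particular the reality of x_k, by brute force for a small prime:

```python
# Check (ours): the complex definition of x_k(r;p) agrees with the real
# cotangent sum obtained from i(xi^m+1)/(xi^m-1) = cot(pi*m/p).
import itertools, cmath, math

def x_k_complex(k, r, p):
    xi = cmath.exp(2j * cmath.pi / p)
    s = sum(math.prod((xi**m + 1) / (xi**m - 1) for m in ms)
            for ms in itertools.product(range(1, p), repeat=k)
            if math.prod(ms) % p == r % p)
    return (1j**k / p**k) * s

def x_k_cot(k, r, p):
    return sum(math.prod(1 / math.tan(math.pi * m / p) for m in ms)
               for ms in itertools.product(range(1, p), repeat=k)
               if math.prod(ms) % p == r % p) / p**k

p, k = 7, 2
for r in range(1, p):
    z, c = x_k_complex(k, r, p), x_k_cot(k, r, p)
    assert abs(z - c) < 1e-12 and abs(z.imag) < 1e-12
print("x_k(r;p) is real and matches the cotangent form for p=7, k=2")
```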
http://arxiv.org/abs/1704.08358v1
{ "authors": [ "Sandro Bettin", "Bruno Martin" ], "categories": [ "math.NT", "11M41, 11L03, 11M20 (primary), 11R18 (secondary)" ], "primary_category": "math.NT", "published": "20170426213558", "title": "On the non-vanishing of certain Dirichlet series" }
Department of Management Science, Lancaster University, Lancaster, United Kingdom

An Experimental Comparison of Uncertainty Sets for Robust Shortest Path Problems Marc Goerigk
================================================================================

Through the development of efficient algorithms, data structures and preprocessing techniques, real-world shortest path problems in street networks are now very fast to solve. But in reality, the exact travel times along each arc in the network may not be known. This led to the development of robust shortest path problems, where all possible arc travel times are contained in a so-called uncertainty set of possible outcomes. Research in robust shortest path problems typically assumes this set to be given, and provides complexity results as well as algorithms depending on its shape. However, what can actually be observed in real-world problems are only discrete raw data points. The shape of the uncertainty set is already a modelling assumption. In this paper we test several of the most widely used assumptions on the uncertainty set using real-world traffic measurements provided by the City of Chicago. We calculate the resulting different robust solutions, and evaluate which uncertainty approach is actually reasonable for our data. This anchors theoretical research in a real-world application and allows us to point out which robust models should be the future focus of algorithmic development.

Keywords: robust shortest paths, uncertainty sets, real-world data, experimental study

§ INTRODUCTION
The problem of finding shortest paths in real-world networks has seen considerable algorithmic improvements over the last decade <cit.>. In the typical problem setup, one assumes that all data is given exactly. But robust shortest path problems have also been considered, where travel times are assumed to be given by a set of possible scenarios. In <cit.>, it was shown that the problem of finding a path that minimizes the worst-case length over two scenarios is already weakly NP-hard. For general surveys on results in robust discrete optimization, we refer to <cit.>.

There are many possibilities for how to model the scenario set that is used for the robust optimization process (see, e.g., <cit.>), and it is not obvious which is “the right” one. Part of the current research ignores the problem by simply assuming that the uncertainty was “given” in some specific form, while this does not happen in reality. In fact, the starting point for all uncertainty sets is raw data, given as a set of observations of travel times. This data is then processed to fit different assumptions on the shape and size of the uncertainty set, and preferences of the decision maker. So far, the discussion of these uncertainty sets has been led by theoretical properties, such as the computational tractability of the resulting robust model. We believe that this leads to a gap in the literature, where models are not sufficiently underpinned by actual real-world data to verify results.

The purpose of this paper is to close this gap. We use real-world traffic observations by the City of Chicago to create a selection of the best-known and most-used uncertainty sets from the research literature. Using these uncertainty sets, we calculate different robust solutions and compare their performance. This allows us to determine which uncertainty sets are actually valuable for real-world robust shortest path problems.
Our results give strong impetus to future research in the field by pointing out which problems are the most worthwhile to solve more efficiently.

In Section <ref> we briefly introduce all six uncertainty sets used in this study, and discuss the complexity of the resulting robust problems. The experimental setup and results are then presented in Section <ref>, before concluding this paper in Section <ref>.

§ UNCERTAINTY SETS FOR THE SHORTEST PATH PROBLEM
Let a directed graph G=(V,A) with nodes V and arcs A be given. In the classic shortest path problem, each arc e has some specific travel time c_e ≥ 0. Given a start node s and a target node t, the aim is to find a path minimizing the total travel time, i.e., to solve min{cx : x∈𝒳}, where 𝒳⊆{0,1}^n denotes the set of s-t-paths, and n = |A|. For our setting we assume instead that a set 𝒞 of travel time observations is given, 𝒞 = {c^1,…,c^N} with c^i∈ℝ^n. This is the available raw data. In the well-known robust shortest path problem we assume that an uncertainty set 𝒰 is produced based on this raw data, and solve the robustified problem min_{x∈𝒳} max_{c∈𝒰} cx, that is, we search for a path that minimizes the worst-case costs over all scenarios. In the following sections we detail different possibilities from the current literature to generate 𝒰 based on 𝒞. Each set is equipped with a scaling parameter to control its size.

§.§ Convex Hull
In this approach, also known as discrete uncertainty (see <cit.>), we set 𝒰^CH = 𝒞. The resulting robust problem can then be written as

min z s.t. z ≥ c^i x ∀ i∈[N], x∈𝒳

Note that this problem is equivalent to using 𝒰^CH = conv({c^1,…,c^N}). The problem is known to be NP-hard already for two scenarios.

Scaling: Let ĉ be the average of {c^1,…,c^N}, i.e., ĉ = (1/N)∑_{i∈[N]} c^i. For a given λ≥ 0, we substitute each point c^i with ĉ + λ(c^i-ĉ), and take the convex hull of the scaled data points.

§.§ Intervals
We set 𝒰^I as the smallest hypercube containing all data, i.e., 𝒰^I = ∏_{i∈[n]} [min_{j∈[N]} c^j_i, max_{j∈[N]} c^j_i]. For ease of notation, we write c̄_i := max_{j∈[N]} c^j_i and c̲_i := min_{j∈[N]} c^j_i. The resulting robust problem is then min c̄x s.t. x∈𝒳, which is a classic shortest path problem. Robust shortest path problems with interval uncertainty are therefore easy to solve, but frequently used, especially in the so-called min-max regret setting (see <cit.>).

Scaling: We use 𝒰^I = ∏_{i∈[n]} [(c̄_i + c̲_i)/2 - λ(c̄_i - c̲_i)/2, (c̄_i + c̲_i)/2 + λ(c̄_i - c̲_i)/2] for some λ≥0.

§.§ Ellipsoid
Ellipsoidal uncertainty sets were first proposed in <cit.> and stem from the observation that the iso-density locus of a multivariate normal distribution is an ellipse. We use an ellipsoid of the form 𝒰^E = {c : (c - μ)^tΣ^{-1}(c - μ) ≤λ} with size parameter λ≥ 0 that is centered on ĉ. We create it using a normal distribution found as a maximum-likelihood fit. Recall that the best fit of a multivariate normal distribution 𝒩(μ,Σ) with respect to data points c^1,…,c^N is given by μ = ĉ = (1/N)(c^1 + … + c^N) and Σ = (1/N)∑_{i∈[N]} (c^i - μ)(c^i - μ)^t. The resulting problem can then be formulated as

min ĉx + z s.t. z^2 ≥ λ(x^t Σ x), x∈𝒳

which is an integer second-order cone program (ISOCP), see <cit.> for details. Due to the convexity of the constraints, the problem can still be solved with little computational effort by standard solvers.

§.§ Budgeted Uncertainty
This approach was introduced in <cit.>, and is based on interval uncertainty 𝒰 = ∏_{i∈[n]} [ĉ_i, c̄_i]. To reduce the conservatism of this approach one assumes that only at most Γ∈{0,…,n} many values can be simultaneously higher than the midpoint ĉ.
Formally, 𝒰^B = {c : c_i = ĉ_i + (c̄_i - ĉ_i)δ_i for all i∈[n], 0≤δ≤1, ∑_{i∈[n]}δ_i ≤Γ}. Using the dual of the inner worst-case problem, the following compact mixed-integer program can be found:

min ĉx + Γπ + ∑_{i∈[n]}ρ_i s.t. π + ρ_i ≥ (c̄_i - ĉ_i)x_i ∀ i∈[n], π, ρ≥ 0, x∈𝒳

This approach has the advantage that probability bounds can be found that compare favorably with those for ellipsoidal uncertainty <cit.>, while this problem also remains polynomially solvable by enumerating possible values for the π variable. This means that 𝒪(n) many problems of the original type need to be solved (see the sketch below). For these reasons, the budgeted uncertainty approach has been very popular in the literature.

§.§ Permutohull
The final two uncertainty sets we consider were proposed in <cit.>. The original inspiration comes from risk measures; the authors show that any so-called distortion risk measure leads to a polyhedral uncertainty set. A risk measure μ is a distortion risk measure if and only if there exists q∈{q'∈Δ^N : q_1 ≥ … ≥ q_N}, where Δ^N denotes the N-dimensional simplex, such that μ(x) = - ∑_{i∈[N]} q_i (c^{(i)}x), where the sorting (i) is chosen such that c^{(1)}x ≥ … ≥ c^{(N)}x.

The conditional value at risk CVaR_α with α∈(0,1] is a well-known distortion risk measure. Intuitively, it is the expected value amongst the α worst outcomes. Using the matrix Q_N := [ 1 … 1/N-2 1/N-1 1/N; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 1/N-2 1/N-1 1/N; 0 … 0 1/N-1 1/N; 0 … 0 0 1/N ], the jth column of Q_N induces the risk measure CVaR_{j/N}. The corresponding polyhedra are called the q-permutohull and are defined as Π_q(c^1,…,c^N) := conv({∑_{i∈[N]} q_{σ(i)}c^i : σ∈ S_N}).

To find the resulting robust problem, we first consider the worst-case problem for fixed x∈𝒳:

max ∑_{i,j∈[N]} q_i (c^j x) p_ij s.t. ∑_{i∈[N]} p_ij = 1 ∀ j∈[N], ∑_{j∈[N]} p_ij = 1 ∀ i∈[N], p_ij ≥ 0 ∀ i,j∈[N]

Dualising this problem then gives the robust counterpart

min ∑_{i∈[N]} (v_i + w_i) s.t. v_i + w_j ≥ q_i (c^j x) ∀ i,j∈[N], v,w unrestricted, x∈𝒳

which is a mixed-integer program (note that this approach is actually the same as the ordered weighted averaging method, see <cit.>). The problem is NP-hard, as it contains the convex hull of {c^1,…,c^N} as a special case. Through the choice of q, there are N possible sizes of this uncertainty set.

§.§ Symmetric Permutohull
In the same setting as before, the symmetric permutohull was also introduced in <cit.>. By using the ⌊ N/2 ⌋ +1 columns of the matrix Q̃ := (1/N)[ 1 2 2 … 2; 1 1 2 … 2; 1 1 1 … 2; ⋮ ⋮ ⋮ ⋮ ⋮; 1 1 1 … 0; 1 1 0 … 0; 1 0 0 … 0 ], it was shown that the resulting polyhedra are symmetric with respect to ĉ. Note that these problems are also NP-hard, as Q̃ contains the min-max approach for N=2 as a special case.

§.§ Summary of Uncertainty Sets
In total we described six methods to generate an uncertainty set 𝒰 based on the raw data 𝒞. Figure <ref> illustrates these sets using a raw dataset with four observations (shown as red points). The complexity to solve the resulting robust models, as well as the type of program with the numbers of additional variables and constraints compared to the classic shortest path problem, are shown in Table <ref>. While the robust model with budgeted uncertainty sets can be solved in polynomial time using combinatorial algorithms, we still used the MIP formulation for our experiments, as it was sufficiently fast.
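To make the enumeration argument for budgeted uncertainty concrete, the following minimal sketch (ours; the paper itself uses the MIP with CPLEX) solves one nominal shortest path per candidate value θ of the dual variable π. Since each d_e x_e with binary x_e lies in {0, d_e}, the piecewise-linear dual objective attains its minimum at θ ∈ {0} ∪ {d_e}, so n+1 Dijkstra runs suffice:

```python
# Sketch (ours) of the Bertsimas-Sim enumeration for the robust shortest
# path under budgeted uncertainty: best over theta of
#   Gamma*theta + shortest path w.r.t. costs c_hat_e + max(d_e - theta, 0).
import heapq

def dijkstra(graph, costs, s, t):
    # graph: dict node -> iterable of (arc_id, head); costs: dict arc_id -> float
    dist, heap = {s: 0.0}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for arc, v in graph.get(u, ()):
            nd = d + costs[arc]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def robust_budgeted_sp(graph, c_hat, dev, Gamma, s, t):
    # dev[a] = upper deviation (c_bar_a - c_hat_a) of arc a
    best = float("inf")
    for theta in {0.0, *dev.values()}:  # n+1 candidate dual values
        costs = {a: c_hat[a] + max(dev[a] - theta, 0.0) for a in c_hat}
        best = min(best, Gamma * theta + dijkstra(graph, costs, s, t))
    return best
```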
We recorded traffic updates in 15-minute intervals over a time horizon of 24 hours spanning Monday March 27th 2017 morning to Tuesday March 28th 2017 morning. A total of 98 data observations were thus used. Every observation contains the traffic speed for a subset of a total of 1,257 segments. For each segment the geographical position is available; see the resulting plot in Figure <ref> with a zoom-in for the city center.

The complete travel speed data set contains a total of 54,295 observations. There were 1,027 segments where the data was recorded at least once at one of the 96 time points. For nearly 88% of the segments, speeds were recorded in at least 50 records, with only 1% (10 segments) where only one observation was recorded. We used linear interpolation to fill the missing records, keeping in mind that data was collected over time. The data after removing missing records and filling missing values can be found at <www.lancaster.ac.uk/~goerigk/robust-sp-data.zip>.

As segments are purely geographical objects without structure, we needed to create a graph for our experiments. To this end, segments were split when they crossed or nearly crossed, and start- and end-points that were sufficiently close to each other were identified as the same node. The resulting graph is shown in Figure <ref>; note that this process slightly simplified the network, but kept its structure intact. The final graph contains 538 nodes and 1308 arcs.

§.§ Setup
Each uncertainty set is equipped with a size parameter. For each parameter we generated 20 possible values:
* For 𝒰^CH and 𝒰^I, λ∈{ 0.1, 0.2, …, 2}.
* For 𝒰^E, λ∈{ 0.2, 0.4, …, 4}.
* For 𝒰^B, Γ∈{ 5, 10, …, 100}.
* For 𝒰^PH, we used columns q_1, q_3, …, q_39.
* For 𝒰^SPH, we used columns q_1, q_2, …, q_20.
Each uncertainty set is generated using only every second scenario (i.e., 48 out of 96), but all 96 scenarios are then used to evaluate the solutions. Furthermore, we generated 200 random s-t pairs uniformly, and used each of the 6·20 methods on the same 200 pairs. Each of our 120 methods hence generates 200·96 = 19,200 objective values.

It is highly non-trivial to assess the quality of these solutions, see <cit.>. If one just uses the average objective value, as an example, then one could as well calculate the solution optimizing the average scenario case to find the best performance with respect to this measure. To find a balanced evaluation of all methods, we used four performance criteria:
* the average objective value,
* the average of the worst-case objective value for each s-t pair, and
* the average value of the worst 5% of objective values for each s-t pair (as in the CVaR measure).
We also considered the average rank. To this end, we rank all 120 methods for each specific combination of s-t pair and scenario. The best performing methods are ranked at 1, the second-best at 2, etc. We then take the average rank over all 19,200 observations. However, this measure was strongly correlated with the average objective value and is therefore not presented.

For all experiments we used a computer with a 16-core Intel Xeon E5-2670 processor, running at 2.60 GHz with 20MB cache, and Ubuntu 12.04. Processes were pinned to one core. We used CPLEX v.12.6 to solve all problem formulations.

§.§ Results
We present our findings in the two plots of Figure <ref>. In each plot, the 20 parameter settings that belong to the same uncertainty set are connected by a line.
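For concreteness, the three reported criteria can be computed from the matrix of evaluated objective values as in the following sketch (ours; the array shape and names are assumptions, not the authors' code):

```python
# Illustrative computation (ours) of the three criteria from a matrix F of
# objective values with shape (200 s-t pairs, 96 scenarios).
import numpy as np

def criteria(F):
    avg = F.mean()                               # average objective value
    avg_max = F.max(axis=1).mean()               # mean per-pair worst case
    k = max(1, int(np.ceil(0.05 * F.shape[1])))  # worst 5% per pair
    worst5 = np.sort(F, axis=1)[:, -k:].mean()   # CVaR-style average
    return avg, avg_max, worst5
```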
These plots are complemented by Figure <ref>, showing the total computation times for the methods over all 200 shortest path calculations. The first plot in Figure <ref> shows the trade-off between the average and the maximum objective value; the second plot in Figure <ref> shows the trade-off between the average and the average of the 5% worst objective values. Note that for all performance measures, smaller values indicate a better performance; hence, good trade-off solutions should move from the top left to the bottom right of the plots. In general, the points corresponding to the parameter settings that give weight to the average performance lie on the left sides of the curves, while the more robust parameter settings are on the right sides, as would be expected.

We first discuss Figure <ref>. In general, we find that most concepts do indeed present a trade-off between average performance and robustness through their scaling parameter. The symmetric permutohull solutions have the best average performance, while interval solutions are the most robust. Interestingly, this even holds for interval solutions where the scaling parameter is very small. The budgeted uncertainty does not give a good trade-off between worst-case and average-case performance, which confirms previous results on artificial data <cit.>. Scaling interval uncertainty sets achieves better results than using budgeted uncertainty. Solutions generated with ellipsoidal uncertainty sets slightly outperform (dominate in the Pareto sense) solutions generated with permutohull uncertainty. We also note that methods that are computationally more expensive tend to achieve better average performance at the cost of decreased robustness. The simplest and cheapest method, interval uncertainty, gives the most robust solutions. Solutions using the convex hull of raw data tend to be outperformed by the approaches that process the data.

We now consider the results presented in Figure <ref>. Here the average is plotted against the average performance over the 5% worst performing scenarios, averaged over all s-t pairs. We note that for interval uncertainty, these two criteria are connected, with the best solutions for small parameter size dominating all solutions for larger parameter size. For the permutohull and the ellipsoidal uncertainty solutions, the order slightly changed, with the former often dominating the latter. Permutohull solutions are designed to be efficient for the CVaR criterion, and the best-performing solution with respect to this aspect is indeed generated by this approach. However, solutions with ellipsoidal, interval, and convex hull uncertainty also perform well.

Regarding computation times (see Figure <ref>), note that the two polynomially solvable approaches are also the fastest when using Cplex; these computation times can be further improved using specialized algorithms. Using the convex hull is faster than using ellipsoids, which are in turn faster than using the symmetric permutohull. For the standard permutohull, the computation times are sensitive to the uncertainty size; if the q vector that is used in the model has only a few entries, then computation times are smaller.
This is in line with the intuition that the problem becomes easier if fewer scenarios need to be considered. To summarize the findings of our experiment on the robust shortest path problem with real-world data:
* Convex hull solutions are amongst the more robust solutions, but tend to be outperformed by the other approaches.
* Interval solutions perform badly on average, but are the most robust. Especially when the scaling is small they can give a decent trade-off, and are easy and fast to compute.
* Ellipsoidal uncertainty solutions have very good overall performance and represent a large part of the non-dominated points in our results.
* We do not encourage the use of budgeted uncertainty for robust shortest path problems. Scaling interval uncertainty sets gives better results and is easier to use and to solve.
* Permutohull solutions offer good trade-off solutions, whereas symmetric permutohull solutions tend to be less robust, but provide an excellent average performance. These methods also require the most computational effort to find.
In the light of these findings, the interval and discrete (=convex hull) uncertainty sets that are widely used in robust combinatorial optimization do warrant research attention, as they may not produce the best solutions, but are relatively fast to solve. However, permutohull and ellipsoidal uncertainty tend to produce solutions with a better trade-off, while being computationally more challenging. Algorithmic research for robust shortest path problems with such structure should therefore become a future focus.

§ CONCLUSION
In this paper we constructed uncertainty sets for the robust shortest path problem using real-world traffic observations for the City of Chicago. We evaluated the model suitability of these sets by finding the resulting robust paths, and comparing their performance using different performance indicators.

Naturally, conclusions can only be drawn within the reach of the available data. In our setting we considered solutions that are robust with respect to all possible travel times within a day. A use-case would be that a path needs to be computed for a specific day, but the precise hour is not known. Using different sets of observations will result in solutions that are different in another sense, e.g., one could use observations over different days during the morning rush hours, or observations that span work days and a weekend. It is possible that these sets will provide different structure.

Finally, we have observed that using ellipsoidal uncertainty sets provides high-quality solutions with less computational effort than for the permutohull. If one uses only the diagonal entries of the matrix Σ, then one ignores the data correlation in the network. For the resulting problem specialized algorithms exist, see, e.g., <cit.>. In additional experiments we found that, even when using Cplex, computation times were considerably reduced when only using the diagonal entries of Σ, but the solution quality remained roughly the same.

[ABV09] Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten. Min-max and min-max regret versions of combinatorial optimization problems: A survey. European Journal of Operational Research, 197(2):427-438, 2009.
[BB09] Dimitris Bertsimas and David B. Brown. Constructing uncertainty sets for robust linear optimization.
Operations Research, 57(6):1483-1495, 2009.
[BDG+16] Hannah Bast, Daniel Delling, Andrew Goldberg, Matthias Müller-Hannemann, Thomas Pajor, Peter Sanders, Dorothea Wagner, and Renato F. Werneck. Route planning in transportation networks. In Algorithm Engineering, pages 19-80. Springer, 2016.
[BS03] Dimitris Bertsimas and Melvyn Sim. Robust discrete optimization and network flows. Mathematical Programming, 98(1):49-71, 2003.
[BS04] Dimitris Bertsimas and Melvyn Sim. The price of robustness. Operations Research, 52(1):35-53, 2004.
[BTN98] Aharon Ben-Tal and Arkadi Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769-805, 1998.
[BTN99] Aharon Ben-Tal and Arkadi Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1-13, 1999.
[Büs12] Christina Büsing. Recoverable robust shortest path problems. Networks, 59(1):181-189, 2012.
[CG15a] André Chassein and Marc Goerigk. Alternative formulations for the ordered weighted averaging objective. Information Processing Letters, 115(6):604-608, 2015.
[CG15b] André Chassein and Marc Goerigk. A new bound for the midpoint solution in minmax regret optimization with an application to the robust shortest path problem. European Journal of Operational Research, 244(3):739-747, 2015.
[CG16a] André Chassein and Marc Goerigk. A bicriteria approach to robust optimization. Computers & Operations Research, 66:181-189, 2016.
[CG16b] André Chassein and Marc Goerigk. Performance analysis in robust optimization. In Robustness Analysis in Decision Aiding, Optimization, and Analytics, pages 145-170. Springer, 2016.
[KZ16] Adam Kasperski and Paweł Zieliński. Robust discrete optimization under discrete and interval uncertainty: A survey. In Robustness Analysis in Decision Aiding, Optimization, and Analytics, pages 113-143. Springer, 2016.
[MG04] Roberto Montemanni and Luca Maria Gambardella. An exact algorithm for the robust shortest path problem with interval data. Computers & Operations Research, 31(10):1667-1680, 2004.
[Nik09] Evdokia Nikolova. High-performance heuristics for optimization in stochastic traffic engineering problems. In International Conference on Large-Scale Scientific Computing, pages 352-360. Springer, 2009.
[YY98] Gang Yu and Jian Yang. On the robust shortest path problem. Computers & Operations Research, 25(6):457-468, 1998.
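As a supplementary illustration of Section 2, the worst case max_{c∈𝒰} cx of a fixed path x admits a closed form for each of the simpler sets; the sketch below (ours, with numpy arrays assumed, not part of the paper) collects them in one place:

```python
# Closed-form worst-case cost of a fixed 0/1 path vector x (ours).
import numpy as np

def worst_case_cost(x, scenarios=None, c_hi=None, mu=None,
                    Sigma=None, lam=None, Gamma=None):
    if scenarios is not None:              # convex hull / discrete set
        return max(c @ x for c in scenarios)
    if lam is not None:                    # ellipsoid: c_hat*x + sqrt(lam x'Sx)
        return mu @ x + np.sqrt(lam * (x @ Sigma @ x))
    if Gamma is not None:                  # budgeted: Gamma largest deviations
        d = np.sort((c_hi - mu) * x)[::-1]
        k = int(np.floor(Gamma))
        frac = d[k] if k < len(d) else 0.0
        return mu @ x + d[:k].sum() + (Gamma - k) * frac
    return c_hi @ x                        # interval: upper bounds
```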
http://arxiv.org/abs/1704.08470v1
{ "authors": [ "Trivikram Dokka", "Marc Goerigk" ], "categories": [ "math.OC", "cs.DM", "cs.DS" ], "primary_category": "math.OC", "published": "20170427081511", "title": "An Experimental Comparison of Uncertainty Sets for Robust Shortest Path Problems" }
Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l'Université, F-67000 Strasbourg, France
Groupe d'Astrophysique des Hautes Energies, Institut d'Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août, 19c, Bât B5c, 4000 Liège, Belgium

X-ray flaring activity from Sgr A*, the closest supermassive black hole, located at the center of our Galaxy, has been observed since 2000 October 26 thanks to the current generation of X-ray facilities. Recently, in a study of X-ray flaring activity from Sgr A* using Chandra and XMM-Newton public observations from 1999 to 2014 and Swift monitoring in 2014, researchers have argued that the "bright and very bright" flaring rate has increased from 2014 August 31. As a result of additional observations performed in 2015 with Chandra, XMM-Newton, and Swift (total exposure of 482 ks), we seek to test the significance and persistence of this increase of flaring rate and to determine the threshold of unabsorbed flare flux or fluence leading to any change of flaring rate. We reprocessed the Chandra, XMM-Newton, and Swift data from 1999 to 2015 November 2. From these data, we detected the X-ray flares via the two-step Bayesian blocks algorithm with a prior on the number of change points properly calibrated for each observation. We improved the Swift data analysis by correcting for the effects of the variable position of the target on the detector, and we detected the X-ray flares with a 3σ threshold on the binned light curves. The mean unabsorbed fluxes of the 107 detected flares were consistently computed from the extracted spectra and the corresponding calibration files, assuming the same spectral parameters. We constructed the observed distribution of flare fluxes and durations from the XMM-Newton and Chandra detections. We corrected this observed distribution for the detection biases to estimate the intrinsic distribution of flare fluxes and durations. From this intrinsic distribution, we determined the average flare detection efficiency for each XMM-Newton, Chandra, and Swift observation. We finally applied the Bayesian blocks algorithm to the arrival times of the flares corrected for the corresponding efficiency. We confirm a constant overall flaring rate from 1999 to 2015 and a rise in the flaring rate by a factor of three for the most luminous and most energetic flares from 2014 August 31, i.e., about four months after the pericenter passage of the Dusty S-cluster Object (DSO)/G2 close to Sgr A*. In addition, we identify a decay of the flaring rate for the less luminous and less energetic flares from 2013 August and November, respectively, i.e., about 10 and 7 months before the pericenter passage of the DSO/G2 and 13 and 10 months before the rise in the bright flaring rate. The decay of the faint flaring rate is difficult to explain in terms of the tidal disruption of a dusty cloud since it occurred well before the pericenter passage of the DSO/G2, whose stellar nature is now well established. Moreover, a mass transfer from the DSO/G2 to Sgr A* is not required to produce the rise in the bright flaring rate since the energy saved by the decay of the number of faint flares during a long period of time may be later released by several bright flares during a shorter period of time.
Sixteen years of X-ray monitoring of Sagittarius A*: Evidence for a decay of the faint flaring rate from 2013 August, 13 months before a rise in the bright flaring rate Enmanuelle Mossoux <ref>,<ref> Nicolas Grosso <ref> Received / Accepted
===========================================================================================================================================================================

§ INTRODUCTION
The center of the Milky Way hosts the closest supermassive black hole (SMBH), named Sgr A*, at a distance of 8 kpc <cit.>. The bolometric luminosity of Sgr A* is about 10^-9 times the Eddington luminosity L_Edd=3× 10^44 erg s^-1 <cit.> for a SMBH mass of M=4 × 10^6 M_⊙ <cit.>. Above this steady emission, Sgr A* experiences some temporal increases of flux in X-rays <cit.>, near-infrared <cit.>, and sub-millimeter/radio <cit.>. The near-infrared flare spectra are well reproduced by the synchrotron process <cit.>, and the sub-millimeter/radio flares may be explained by the adiabatically expanding plasmon model <cit.>, but the radiative processes creating the X-ray activity are still debated. Moreover, several mechanisms can explain the origin of eruptions in X-rays and infrared: a shock produced by the interaction between orbiting stars and the hot accretion flow <cit.>, a hotspot model <cit.>, a Rossby instability producing magnetized plasma bubbles in the hot accretion flow <cit.>, an additional heating of electrons near the black hole due to processes such as accretion instability or magnetic reconnection <cit.>, an increase of accretion rate when some fresh material reaches the close environment of the black hole <cit.>, and tidal disruption of asteroids <cit.>.

The study of a large number of flares is valuable to constrain the radiative processes and emission mechanisms at the origin of the flaring activity from Sgr A*. Moreover, the survey of the Dusty S-cluster Object (DSO)/G2 on its way toward Sgr A* has increased the number of observations of the SMBH. <cit.> showed that DSO/G2 is a 1-2 M_⊙ pre-main sequence star with an accretion disk producing the Brγ emission line by magnetospheric accretion onto the stellar photosphere, and that it has survived its pericenter passage at about 2000 R_s from Sgr A* on 2014 April 20 (2014 March 1-2014 June 10).

The first statistical study on the X-ray flares of Sgr A* was made by <cit.> thanks to the 2012 Chandra X-ray Visionary Project (XVP). During this campaign 39 flares with a 2-10 keV observed luminosity larger than 10^34 erg s^-1 were detected using a Gaussian flare fitting on the light curves binned at 300 s, resulting in an observed X-ray flaring rate of 1.1^+0.2_-0.1 flares per day. The flares detected with the Gaussian fitting method are limited to a minimum duration of 400 s and a minimum peak count rate of 0.015 ACIS-S3 count s^-1 (corresponding to a mean flux in 2-8 keV of 0.6× 10^-12 erg s^-1 cm^-2 with their spectral parameters) due to the Poissonian noise of the non-flaring light curve and the limitations that the authors put on their Gaussian shape to avoid any spurious detection. <cit.> also tested the Bayesian blocks algorithm used by <cit.> to analyze the individual photon arrival times (Scargle 2002, priv. comm.)
and detected 45 flares, 34 of which were also found with their Gaussian fitting method.

<cit.> studied the flaring rate with the Python implementation of the Bayesian blocks algorithm[This Python program can be found at https://jakevdp.github.io/blog/2012/09/12/dynamic-programming-in-python/] <cit.> by merging the XMM-Newton and Chandra observations where Sgr A* was observed with an off-axis angle lower than 2′ from September 1999 to October 2014 and the 2014 Swift observations. These authors reported an increase of the bright and very bright flaring rate (corresponding arbitrarily to flares with an absorbed fluence larger than 50× 10^-10 erg cm^-2) from 2014 August 31 until the end of the 2014 X-ray observations on November 2, with a level of 2.52 flares per day, i.e., 9.3 times larger than the bright flaring rate observed from 1999 to 2014 August 31. However, they only used the 2014 Swift monitoring to determine the change of flaring rate; but from 2006 to 2013, six X-ray flares were detected during the Swift monitoring of 985 ks <cit.>, which should be included to investigate the significance of the detection of the flaring rate change.

Finally, <cit.> also carried out statistical studies on the X-ray flares observed by Chandra from 1999 to 2012. They detected the X-ray flares using a Gaussian fitting on the individual photon arrival times. The detection efficiency of this method was presented in their Fig. 3 as a function of the flare duration and fluence. This method becomes more efficient as the flare fluence increases but is less efficient for the detection of flares longer than 10 ks.

However, these different studies present several issues in their data analyses, especially for Sgr A* flare detection. Firstly, the authors never correct for the bias of their detection methods[In <cit.>, the efficiency of their detection method is computed but never used.], which would lead to an intrinsic flaring rate that is higher than the observed one. Indeed, all of the detection methods presented above are less efficient in the detection of the faintest and shortest flares. This issue is very important for the simultaneous study of data from different instruments (for example, XMM-Newton and Chandra) that have different sensitivities and angular resolutions, leading to different efficiencies for Sgr A* flare detection and to an inconsistency in the overall flaring rate.

Secondly, the Bayesian blocks method uses a prior on the number of changes of the flaring rate (named change points by <cit.>) to control the rate of false positive detections. As stated by <cit.>, this prior "depends on only the number of data points and the adopted value of [false positive rate]". It thus needs to be calibrated using simulations of event lists containing the same number of counts as those studied. The Python implementation of the Bayesian blocks algorithm used by <cit.> works with the geometric prior given in Eq. 21 of <cit.>. However, as stated by <cit.>, this geometric prior was obtained for a given range of numbers of events (which was unfortunately unspecified). This may explain the inconsistency between the false positive rate adopted by <cit.> and their resulting false detection probability appearing in their Sect. 5.5.
Indeed, they tested how many times a spurious change of flaring rate is detected by simulating event lists containing the same number of flare arrival times drawn from a uniform distribution and applying the Bayesian blocks algorithm to determine how many times a change of flaring rate is detected. Using a false positive rate of 0.3% and the geometric prior, they reported a probability of false detection of 0.1%, which points out an unreliable calibration between the false positive rate and the prior.

Thirdly, <cit.> used WebPIMMS for the computation of the flare flux. But WebPIMMS considers the effective area and the redistribution matrix computed for an on-axis source and for the full detector field of view. However, the flare spectra were extracted from circular regions of 1.25″ or 10″ radius centered on Sgr A*. Since the point spread function (PSF) extraction fraction is not corrected by WebPIMMS, the inferred unabsorbed flux is systematically underestimated by these authors. Finally, none of these previous works studied the impact on the flare detection efficiency of the overlap between the flare duration and observing time, i.e., the edge effects when a flare begins before the observation start or ends after the observation stop.

In this work, we use the two-step Bayesian blocks method <cit.> with a proper prior calibration since we believe this method to be the most efficient for flare detection. Indeed, contrary to the Gaussian fitting method used by <cit.>, the Bayesian blocks method is applied directly on the event lists and is able to detect flares that are shorter than 400 s (see Fig. A.2 of <cit.>). Moreover, comparing the efficiency of the method of <cit.> with that of the Bayesian blocks method, we stress that the Bayesian blocks method is more efficient for the detection of long flares. For the shortest and faintest flares, the method of <cit.> detects more features than the Bayesian blocks method, but these authors did not control their false positive rate. We also determined the flare detection efficiency by taking the edge effects into account in our simulations. We also use the spectral fitting program ISIS <cit.> and the effective area and redistribution matrix files associated with the spectrum extraction region to consistently compute the mean unabsorbed flux of the X-ray flares.

Owing to the 2015 Swift monitoring and 2015 Chandra and XMM-Newton observations, there are about 459 ks of additional observations of Sgr A*, allowing us to investigate the persistence and significance of the bright flaring rate increase argued by <cit.> based on only 200 ks of observations from 2014 August 31. After reducing the 1999-2015 data of XMM-Newton, Chandra, and Swift (Sect. <ref>), we search for flares using the two-step Bayesian blocks algorithm <cit.> for XMM-Newton and Chandra and the method proposed by <cit.> that is optimized for the Swift observations (Sect. <ref>). We then compute their mean unabsorbed fluxes with the spectral parameters computed by <cit.> for the brightest X-ray flares (Sect. <ref>). This method of taking the effects of the off-axis angle into account allows us to study a large number of observations without a drastic limitation on the off-axis angle of Sgr A*. To correct the flare detection bias for each observation, we compute the flux and duration distribution of the flares observed with XMM-Newton and Chandra and correct it for the merged detection efficiency of the Bayesian blocks algorithm to determine the intrinsic flux-duration distribution (Sect. <ref>).
From this intrinsic distribution, we compute the average flare detection efficiency associated with each XMM-Newton, Chandra, and Swift observation and investigate the existence of a flux or fluence threshold leading to a change in the unbiased X-ray flaring rate observed from 1999 to 2015 using the Bayesian blocks algorithm and the relevant prior calibration (Sect. <ref>). We discuss the physical origin of a change of flaring rate in Sect. <ref> and summarize our results in Sect. <ref>.

§ OBSERVATIONS AND DATA REDUCTION
In this work, we extend the flaring analysis to the 1999-2015 XMM-Newton and Chandra observations, where Sgr A* was observed with an off-axis angle lower than 8′, and to the overall 2006-2015 Swift observations, since Sgr A* was mainly observed with an off-axis angle lower than 8′. Theoretically, our data reduction and analysis methods do not have any limitations on the off-axis angle, but considering larger off-axis angles might lead to more confusion with the diffuse emission of the Galactic center. We retrieved the public observations of Sgr A* made with XMM-Newton, Chandra, and Swift from the XMM-Newton Science Archive (XSA)[http://www.cosmos.esa.int/web/xmm-newton/xsa], the Chandra Search and Retrieval interface (ChaSeR)[http://cda.harvard.edu/chaser], and the Swift Archive Download Portal[http://www.swift.ac.uk/swift_portal], respectively. Our XMM-Newton, Chandra, and Swift data sample has a total exposure time that is about 2.1 Ms longer than the 6.9 Ms considered previously.

§.§ XMM-Newton observations
XMM-Newton <cit.> has observed the Galactic center since 2000 September with the EPIC/pn <cit.> and EPIC/MOS1 and MOS2 <cit.> cameras. The 54 observations of Sgr A* from 2000 September to 2015 April have a total effective exposure of about 2.2 Ms. The observation start and end times, corresponding to the earliest good time intervals (GTI) start and the latest GTI stop of the three cameras, are reported in Table <ref> in Universal Time (UT). The conversion from the Terrestrial Time (TT) registered aboard XMM-Newton to UT is computed using NASA's HEASARC Tool xTime[The website of xTime is: http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/xTime/xTime.pl]. The duration of the observations reported in Table <ref> is the sum of the GTI. Most of the observations were made in frame window mode with the medium filter[Exceptions are the 2000 September 21, 2001 September 4, 2004 March 28 and 30 observations, where EPIC/pn was in frame window extended mode, leading to a lower time resolution (199.1 ms instead of 73.4 ms); the 2014 April 3 observations, where EPIC/MOS1 and MOS2 observed in small window mode, leading to a better time resolution (0.3 s instead of 2.6 s) but a smaller observing window on the central CCD (100 × 100 pixels); the 2002 February 26 and October 3 observations, where EPIC/pn observed with the thick filter; and the 2008 March 3 and September 23 observations, where the three cameras observed with the thin filter.].

The XMM-Newton data reduction is the same as presented in, for example, <cit.>. We created the event lists for the MOS and pn cameras using the dedicated MOS and pn tasks from the Science Analysis Software (SAS) package (version 14.0; Current Calibration Files of 2015 June 13).
We suppressed the time ranges when the soft-proton flare count rate in the full detector light curve in the 2-10 keV energy range is larger than 0.009 and 0.004 count s^-1 arcmin^-2 for pn and MOS, respectively. For the MOS cameras, we selected the single, double, triple, and quadruple events (PATTERN≤12) and used the bit mask to reject the dead columns and bad pixels. For the pn camera, we selected the single and double events (PATTERN≤4) and used the more drastic bit mask to reject the dead columns and bad pixels.

The source+background (src+bkg) extraction region is a 10″-radius disk centered on the Very-Long-Baseline Interferometry (VLBI) radio position of Sgr A*: RA(J2000)=17^h45^m40.0409^s, Dec(J2000)=-29°00′28.118″ <cit.>. This region allows us to extract 50% of the energy at 1.5 keV on-axis. We did not register the EPIC coordinates again since the absolute astrometry for the EPIC cameras (1.2″; <cit.>) is very small compared to the size of this extraction region and the PSF half power diameter (HPD). For observations in frame window (extended) mode, the bkg extraction region is a ≈3′×3′ region at ≈4′ north of Sgr A*. For observations in small window mode, the background extraction region is a ≈3′×3′ area at ≈7′ east of Sgr A* (i.e., on the adjacent CCD). The X-ray sources in the background region were detected using the SAS source detection task and filtered out.

§.§ Chandra observations
Chandra has observed the Galactic center since 1999 September with the ACIS-I and ACIS-S cameras <cit.>. The 121 observations of Sgr A* from 1999 September to 2015 October have a total effective exposure of about 5.8 Ms. The effective observation start and end times reported in Table <ref> in UT correspond to the earliest GTI start and the latest GTI stop. The ACIS-S observations of the 2012 XVP campaign, of 2013 May 25 and June 6 and 9, and of 2015 August 11 were made with the High Energy Transmission Grating (HETG), which disperses the source events on the detector. The ACIS-S observations on 2013 May 12 and June 4 and after 2013 July 2 were made with a 1/8 subarray of 128 rows to increase the time resolution in order to reduce the pile-up during the bright flares. The other observations were made with ACIS-I.

The data reduction was carried out with the Chandra Interactive Analysis of Observations (CIAO) package (version 4.7) and the calibration database (CALDB; version 4.6.9). The level 1 data were reprocessed via the CIAO reprocessing script, which creates a bad pixel file, flags afterglow events, and filters the event patterns, afterglow events, and bad-pixel events. For observations without HETG, the src+bkg events were extracted from a 1.25″-radius disk centered on the VLBI radio position of Sgr A*. For the HETG observations, the diffraction order was determined with the dedicated CIAO task. We then extracted the zero-order events from the 1.25″-radius disk centered on Sgr A* and the ±1-order events from wide rectangles of 2.5″ width centered on the Sgr A* position <cit.>. The position angles of the dispersed spectra are given in the region extension of the level 1 data event list. The bkg region is an 8.2″ disk at 0.54′ south of Sgr A*.

§.§ Swift observations
Swift <cit.> has regularly observed the Galactic center since 2006 with the X-ray telescope (XRT) (PI: N. Degenaar). This camera observes between 0.2 and 10 keV in windowed timing mode or photon counting mode, depending on the brightness of the source. The former observing mode uses only 1D imaging to increase the timing resolution of the data, whereas the latter observing mode delivers the 2D photon positions for the entire XRT field of view (23.6′ × 23.6′) with a time resolution of 2.5 s.
The XRT has an effective area of 110cm^2 at 1.5keV, an absolute-astrometry uncertainty of 3″, and a spatial resolution of 18″ HPD on-axis at 1.5keV <cit.>. The log of each yearly campaign is given in Table <ref>. §.§.§ Swift monitoring of Sgr A* The results of the Swift monitoring of Sgr A* until 2011 October 25 were reported in <cit.>. The authors computed a mean 0.3–10keV count rate of Sgr A* of 0.011 counts s^-1 with a standard deviation of σ=6.7× 10^-3 counts s^-1. Six X-ray flares with an unabsorbed luminosity larger than 7 × 10^34 erg s^-1 were observed during these 821 ks of observation using a GTI-binned detection method with a 3σ threshold, leading to a flaring rate of 0.63 flares per day. The results of the 2012, 2013, and 2014 Swift monitoring were reported in <cit.>. One flare was observed on 2014 September 9 with an unabsorbed luminosity of (1.4±0.4) × 10^35 erg s^-1 during the 510 ks of these three years of observations. On 2016 February 6, a new X-ray transient, SWIFT J174540.7-290015, was detected at 16” north of Sgr A* with a 2–10keV flux of 1.0× 10^-10 erg s^-1 cm^-2 <cit.>. This source was identified as a low-mass X-ray binary located near or beyond the Galactic center <cit.>. On 2016 May 28, another new X-ray transient, SWIFT J174540.2-290037, was detected in the Swift observations at 10” south of Sgr A* with an unabsorbed 2–10keV flux of about (7±2)×10^-11 erg s^-1 cm^-2 <cit.>. Since these two new transient sources have large X-ray fluxes showing long-term variations, they contaminate the Sgr A* light curves observed by Swift: a large flux variation seen in a short-exposure light curve cannot be unambiguously attributed to a flare from Sgr A* rather than to an accretion burst from the transient sources. We thus only use the Swift observations from 2006 to 2015 to study the Sgr A* flares. §.§.§ Improving the data reduction method We reprocessed the level 1 data of the Swift observations made in photon counting mode with the data reduction method of <cit.>. We used the HEASOFT task (v0.13.1) and the calibration files released on 2014 June 12 to reject the hot and bad pixels and select the grades between 0 and 12. From the resulting level 2 data, we used the HEASOFT task (v2.4c) to extract the events recorded in a disk of 10″ radius centered on the VLBI radio position of Sgr A*. Since Swift is on a low-Earth orbit located below the radiation belts, the instrumental background caused by the soft-proton flares is negligible and we thus do not need a background extraction region. The target position on the Swift detector is not fixed. Indeed, the off-axis angle of Sgr A* can be as large as 105 (corresponding to the edge of the field of view), leading to an increase of the PSF width and vignetting; moreover, Sgr A* may be located close to a bad column or bad pixel, causing event losses. To improve the <cit.> data reduction, we thus correct for the event losses due to the variable PSF and vignetting at 2.77keV (the median energy emitted in the 10″ extraction region) by running the HEASOFT task (v0.3.8). This task computes the correction factors that have to be applied to the light curve count rates for each 10s interval. Figure <ref> shows the mean correction factor computed for each Swift observation as a function of the off-axis angle of Sgr A*. This correction factor differs from one observation to another, varying from 2 to 24. The correction factor is minimum on-axis, with a slightly increasing trend with the off-axis angle because of the increase of the PSF width and the vignetting.
The mean value of the correction factor is 2.8, but the correction factor increases when Sgr A* is located close to a bad column or pixel, leading to a large standard deviation of the correction factor (2.1). Applying the correction factors to the Sgr A* count rates leads to a higher non-flaring level than that computed in <cit.>: for the observations between 2006 and 2011, when there is no contamination by transient sources, we find an average count rate level of about 0.027±0.004 counts s^-1 in the 2–10keV energy band (see Fig. <ref>) instead of 0.011±0.007 counts s^-1 in the 0.3–10keV energy band. This increase of the corrected non-flaring level would lead to a decrease of the flare detection efficiency of the Bayesian blocks algorithm, but the count rate standard deviation is 1.6 times lower than previously computed since we corrected the count rate biases caused by the bad pixels and dead columns, the PSF extraction fraction, and the vignetting. § SYSTEMATIC FLARE DETECTION §.§ XMM-Newton and Chandra observations To detect the X-ray flares observed with XMM-Newton and Chandra, we applied the Bayesian blocks method developed by <cit.> and refined by <cit.> on the individual photon arrival times of the src+bkg and bkg event lists with a false positive rate for the flare detection of 0.1%. The result of the Bayesian blocks algorithm is an optimal segmentation of the source event list into blocks of constant count rate separated by the change points. The event list preparation for the application of the Bayesian blocks algorithm is the same as explained in <cit.>. For the XMM-Newton observations, we first took care to associate the arrival time of each event with the center of the observational frame during which it was recorded, since the randomization of the event arrival time within the frame duration by the data reduction tasks is arbitrary and not reproducible. The events are thus separated by an integer number of frame durations. If several events were recorded during the same frame, we considered these events to have the same arrival time. We then filtered out the frames affected by ionizing particles (i.e., the bad time intervals) by merging the GTIs to obtain a continuous event flux as observed by XMM-Newton or Chandra. We divided the continuous event list into Voronoi cells whose boundaries are the midpoints between two adjacent events. We defined the beginning and end of the first and last cell as the observation start and stop, respectively. The count rate in each Voronoi cell is thus the total number of events in the cell divided by the cell duration. We then corrected for the CCD livetime (i.e., the ratio between the integration time and the CCD readout time) by applying a weight to the durations of the Voronoi cells. To apply the Bayesian blocks algorithm with a consistent false positive rate, we calibrated the prior number of change points (ncp_prior) for the number of events in the src+bkg and bkg event lists and the desired false positive rate. Following the method proposed by <cit.> and used in <cit.>, we simulated 100 Poisson fluxes with a mean count rate corresponding to the non-flaring level of each observation and containing the same number of uniformly distributed events as the considered event list. For each set of 100 simulations, we increased ncp_prior from 3 to 8 in steps of 0.1 and computed the number of false positives detected (this calibration loop is sketched below).
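As an illustration, a minimal Python version of this calibration loop is given below. It assumes the astropy implementation of the Bayesian blocks algorithm and its ncp_prior keyword; the analysis above relies on the authors' own implementation, so this is a schematic rather than the exact procedure, and the function name is ours.

```python
import numpy as np
from astropy.stats import bayesian_blocks

def calibrate_ncp_prior(n_events, exposure, p_target=0.03, n_sim=100, seed=0):
    """Sweep ncp_prior from 3 to 8 (step 0.1) and return the smallest value
    whose false-positive fraction on simulated constant Poisson fluxes does
    not exceed the target false positive rate."""
    rng = np.random.default_rng(seed)
    for ncp in np.arange(3.0, 8.01, 0.1):
        n_false = 0
        for _ in range(n_sim):
            # Constant flux: n_events arrival times uniform over the exposure.
            t = np.sort(rng.uniform(0.0, exposure, n_events))
            edges = bayesian_blocks(t, fitness='events', ncp_prior=ncp)
            # More than one block on a constant flux is a false positive.
            n_false += len(edges) > 2
        if n_false / n_sim <= p_target:
            return ncp
    return 8.0
```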
The value of ncp_prior that corresponds to the considered event list is thus the value that retrieves the desired false positive rate (here, p_1 = 0.03, leading to a false positive rate for the flare detection of p_1^2≈0.1%). We applied the two-step Bayesian blocks algorithm of <cit.> on the resulting event lists to correct for any detector flaring background, as proposed by <cit.>. We first applied the algorithm to the src+bkg event list, with the bkg contribution at each event arrival time estimated by applying the Bayesian blocks algorithm to the background event list. We then applied the algorithm to the src+bkg event list where the Voronoi intervals are weighted by the ratio of the src+bkg and background-subtracted src+bkg contributions. The non-flaring level of Sgr A* is defined by the count rate of the longest Bayesian block (which has the lowest error on the count rate), while the flares are associated with the blocks of higher count rate. The mean count rate of a flare is the mean count rate of the flaring blocks minus the non-flaring level. The flares observed by Chandra and XMM-Newton and detected by the Bayesian blocks algorithm are represented in Figs. <ref> and <ref>. A comparison with the flare characteristics observed by <cit.> is given in Appendix <ref>. In 2004, the XMM-Newton observations revealed an artificial increase of the non-flaring level due to the transient X-ray emission of the low-mass X-ray eclipsing binary located at 2.9″ south of Sgr A* <cit.>. Moreover, the Sgr A* light curve also showed dips due to the eclipses of the X-ray binary, which were retrieved by the Bayesian blocks algorithm (see the fourth panel of Fig. <ref>). During the observations made from 2013 April 25 onwards, the non-flaring level of Sgr A* was also artificially increased because of the burst phase of the Galactic center magnetar located at only 2.4″ southeast of Sgr A* <cit.>. The most prominent effect of the increase of the non-flaring level is a decrease of the sensitivity to the detection of the faintest flares <cit.>. §.§ Swift observations Owing to the low Earth orbit, the durations of the Swift observations are about 1ks, which is short compared to the observed flare durations (from a few hundred seconds to more than 10ks). We tested the effect of this short exposure on the detection probability of the flares with the Bayesian blocks algorithm. We first simulated two non-flaring event lists with a typical exposure of 1ks and a Poisson flux with a non-flaring level of 0.027 counts s^-1 in the 2–10keV energy range. We then simulated a third event list with a Gaussian flare above this non-flaring level, sampling 30 mean count rates from 0.035 to 0.1 counts s^-1 and 30 durations from 300s to 10ks on logarithmic scales. We finally extracted a time range of 1ks from different parts of the simulated flare to create a typical Swift event list of a flare; the centers of the time ranges are defined so as to divide the flare duration into 10 time ranges. We applied the Bayesian blocks algorithm on the three (non-flaring, flaring, and non-flaring) concatenated event lists and computed how many times the algorithm found two change points (a minimal sketch of this test is given below).
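A simplified sketch of this detection test follows, again assuming the astropy Bayesian blocks implementation; for brevity the Gaussian flare is replaced here by a constant excess rate within the flaring exposure, so this only approximates the simulation described above, and the function name is ours.

```python
import numpy as np
from astropy.stats import bayesian_blocks

def swift_detection_probability(flare_rate, nonflare=0.027, t_exp=1000.0,
                                ncp_prior=4.0, n_sim=200, seed=1):
    """Fraction of simulations in which the Bayesian blocks algorithm finds
    at least two change points in three concatenated 1 ks exposures
    (non-flaring | flaring | non-flaring)."""
    rng = np.random.default_rng(seed)
    n_det = 0
    for _ in range(n_sim):
        t = []
        for i, level in enumerate([nonflare, nonflare + flare_rate, nonflare]):
            n = rng.poisson(level * t_exp)
            t.append(np.sort(rng.uniform(0.0, t_exp, n)) + i * t_exp)
        edges = bayesian_blocks(np.concatenate(t), fitness='events',
                                ncp_prior=ncp_prior)
        n_det += len(edges) >= 4   # >= 3 blocks, i.e., two change points
    return n_det / n_sim
```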
The mean count rates of the flares are converted to mean unabsorbed fluxes using the averaged conversion factor between the mean count rates and the mean unabsorbed fluxes in the 2–10keV energy band, 293.5× 10^-12 erg s^-1 cm^-2/XRT count s^-1, which is computed via ISIS for N_H=14.3× 10^22 cm^-2 and Γ=2, using the effective area computed for the 10″ extraction region and the redistribution matrix file corresponding to the 2006 September 15 Swift observation where Sgr A* was on-axis. The resulting detection probability, shown in the top panel of Fig. <ref>, has two different regimes, with a small range of mean unabsorbed flux where the detection probability jumps from 20% to 100%. For flare durations longer than 800s, the X-ray flares are either nearly undetected (detection probability lower than 20%) or always detected, with a mean count rate limit of about 0.044 counts s^-1, which corresponds to 13.2× 10^-12 erg s^-1 cm^-2. The flare detection efficiency decreases with decreasing flare duration, with a 100% detection probability at 0.044 counts s^-1 for a flare duration of 800s and at 0.065 counts s^-1 for a flare duration of 300s. The Bayesian blocks algorithm is thus less efficient at detecting flares lasting longer than the observing time, and, when the flare duration exceeds the observation exposure, it only detects flares with a mean unabsorbed flux larger than 13.2× 10^-12 erg s^-1 cm^-2. To assess the detection efficiency of the <cit.> method for the Swift observations, we simulated several 1ks event lists as done previously, but we now work on a logarithmic mean unabsorbed flux grid of 30 points between 0.6 and 40.0× 10^-12 erg s^-1 cm^-2 and a logarithmic duration grid of 30 points between 300s and 10.1ks to cover the duration and flux ranges of the overall observed flares (see next sections). These simulations are carried out for each Swift non-flaring level observed from 2006 to 2015. We then applied the <cit.> detection method to compute how many times the flare is detected. The resulting detection efficiencies p_obs for the 2006–2012 observations (i.e., without transient sources) are shown in the bottom panel of Fig. <ref>. As for the flare detection with the Bayesian blocks method, the detection efficiency jumps from 20 to 100% in a small range of mean unabsorbed flux. However, the flux limit for 100% detection (about 7× 10^-12 erg s^-1 cm^-2) with the <cit.> detection method is well below that of the Bayesian blocks method, making the former more efficient for flare detection with Swift. Therefore, we used the GTI-binned method of <cit.>, which is optimized to detect X-ray flares for the Swift observing setup. We first selected the src events in the 2-10keV energy band to build the Sgr A* light curves binned on each GTI. We rejected the GTIs whose exposure is lower than 100s, since the error bar on the count rate during such a short exposure is large. For the observations between 2006 and 2012, the non-flaring level from the src event list in each yearly campaign is computed as the ratio between the number of events recorded during each campaign and the corresponding yearly exposure. A light curve bin is associated with a flare if the lower limit on the count rate in this observation is larger than the non-flaring level of the corresponding yearly campaign plus three times the standard deviation of the yearly campaign light curve (a minimal sketch of this rule is given below).
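In NumPy form, the rule reads as follows (a sketch with our own variable names, not the authors' code):

```python
import numpy as np

def flag_swift_flares(rate, rate_err, exposure, yearly_level, yearly_std):
    """GTI-binned detection: keep bins with at least 100 s of exposure and
    flag those whose 1-sigma lower limit on the count rate exceeds the
    yearly non-flaring level plus three times the yearly standard deviation."""
    rate, rate_err, exposure = map(np.asarray, (rate, rate_err, exposure))
    usable = exposure >= 100.0
    return usable & (rate - rate_err > yearly_level + 3.0 * yearly_std)
```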
During the 2013, 2014, and 2015 Swift campaigns, the non-flaring level observed in the Sgr A* light curves displays large variations due to the presence of the Galactic center magnetar (see Fig. 2 of ). The non-flaring level during these campaigns is fitted with the sum of two decaying exponentials plus a constant, following <cit.>[We cannot directly use their fit since they did not correct for the losses caused by the bad pixels and dead columns, the PSF extraction fraction, and the vignetting.], i.e., CR = (0.246±0.009) e^{-(t-t_0)/(66.2±3.5) d} + (0.012±0.05) e^{-(t-t_0)/(79.0±9.7) d} + (0.027±0.004) counts s^-1, with t_0=56406 MJD. During these three campaigns, a flare is detected if the mean count rate during the observation is larger than this count rate fit plus three times the 1σ error. The mean count rate of a flare detected with Swift is the mean count rate of the observation minus the non-flaring level. The flares detected with Swift are represented in Fig. <ref>. §.§ X-ray flares detected from 1999 to 2015 The start and end times of the flares observed by XMM-Newton, Chandra, and Swift, as well as the non-flaring levels, are given in Tables <ref>, <ref> and <ref> of Appendix <ref>, respectively. In total, 107 X-ray flares were detected between 1999 and 2015: 19 flares with XMM-Newton, 80 flares with Chandra, and 8 flares with Swift. The mean flare duration is 2739s, the standard deviation is 2210s, and the median is 2018s, which implies that the flare durations are broadly distributed, with no strongly preferred value. The cumulative number of flares is given in Fig. <ref> (blue line) as a function of time (with observing gaps) for Chandra (top panel), XMM-Newton (middle panel), and Swift (bottom panel). The flare times are computed as (t_start+t_end)/2, with t_start and t_end indicating the start and end times of the flare. We also represent the cumulative exposure (orange line) for each instrument in this figure. The mean flaring rate is then computed as the ratio between these two curves (black line). The mean flaring rates observed by each instrument as of 2015 November are different; these are 1.15±0.13, 0.78±0.17, and 0.45±0.16 flares per day for Chandra, XMM-Newton, and Swift, respectively. This is because of the different sensitivities of the cameras and the different non-flaring levels observed by the instruments, which depend on both the instrument sensitivity and angular resolution. It is thus necessary to correct for the detection bias due to these heterogeneous sensitivities in order to study consistently the flaring rate obtained by combining the three instruments. To assess the detection efficiency for the three instruments, we used two characteristics of the flares that are independent of the instruments: the flare duration (already computed in this section) and the mean unabsorbed flux (see Sect. <ref>). § X-RAY FLARE FLUXES To correctly compute the mean unabsorbed fluxes of the X-ray flares observed with XMM-Newton and Chandra, we extracted their spectra, ancillary files (arf), and response matrix files (rmf) with the SAS script for XMM-Newton and the CIAO script for Chandra. For the Swift observations, because of the short exposure time, we extracted the flare spectra over the entire observation via the HEASOFT task , and we created the corresponding arf via (version 0.6.3). The rmf were taken from the calibration database[http://heasarc.gsfc.nasa.gov/FTP/caldb/data/swift/xrt/cpf/rmf/http://heasarc.gsfc.nasa.gov/FTP/caldb/data/swift/xrt/cpf/rmf/].
The non-flaring spectrum was extracted from the closest observation in time. We grouped the flare spectra with a minimum of one count per bin and fitted them with an absorbed power law built with TBnew <cit.> and pegpwrlw, with the dust scattering modeled with dustscat <cit.>, using the Cash statistic <cit.>. For the XMM-Newton and Swift observations, we fit the spectra with the values of the hydrogen column density (N_H) and the power-law index (Γ) fixed to those computed for the two brightest X-ray flares observed with XMM-Newton and the 2012 February 9 bright Chandra flare: N_H=14.3× 10^22 cm^-2 and Γ=2 <cit.>. Only the mean unabsorbed flux between 2 and 10keV is a free parameter. The resulting mean unabsorbed fluxes of each X-ray flare observed by XMM-Newton and Swift are given in Tables <ref> and <ref> of Appendix <ref>. For the Chandra observations, the pile-up must be taken into account. Pile-up is due to the arrival of more than one photon per pixel island during the same readout frame. The multiple photons are either recorded as a single photon of merged (higher) energy, or they produce a pattern (or grade) migration of the event, so that these photons are no longer classified as X-ray events. In the latter case, a dip appears in the center of the PSF image of a bright source. We use the pile-up model of <cit.> with the photon migration parameter α=1 <cit.> for a PSF fraction of 95% corresponding to the 1.25″ extraction region. We fit the spectra with this pile-up model applied to the absorbed power-law model, with the fixed N_H and Γ reported above and a free mean unabsorbed flux between 2 and 10 keV. Table <ref> of Appendix <ref> reports the resulting mean unabsorbed fluxes observed by Chandra between 2 and 10keV. Three flares observed with XMM-Newton and Chandra begin before the start of the observation and three other flares end after the end of the observation. Depending on the phase of the flare that is not observed, this leads to a lower or an upper limit on the mean unabsorbed flux. Indeed, assuming a Gaussian flare, if we only observe the end of the decay phase or the beginning of the rise phase, the resulting mean unabsorbed flux is a lower limit on its actual value; if we observe the end of the rise phase and the decay phase, or the rise phase and the beginning of the decay phase, the resulting mean unabsorbed flux is an upper limit on its actual value. For the eight flares observed with Swift, the flare durations are set equal to the observing time, thereby leading to a lower limit on the mean unabsorbed flux if the flare duration is shorter than the exposure. If the flare duration is larger than the exposure, the orientation of the limit depends on which part of the flare is observed. Hereafter, we consider these lower or upper limits on the mean unabsorbed flux as the actual value of the flare flux. The averaged mean unabsorbed flux for the X-ray flares observed by XMM-Newton, Chandra, and Swift is 8.4× 10^-12 erg s^-1 cm^-2 with a standard deviation of 10.0× 10^-12 erg s^-1 cm^-2, while the median is 4.5× 10^-12 erg s^-1 cm^-2. The observed distribution of the mean unabsorbed flux is thus skewed toward the faintest flares. However, the dependence of the instrumental detection sensitivities on the flare mean unabsorbed flux and duration biases the observed distribution toward the brightest and longest flares. We thus need to correct for the detection sensitivities in order to study the merged duration and mean unabsorbed flux distribution consistently.
§ INTRINSIC FLARE DISTRIBUTION To determine the intrinsic flare distribution, we computed the density distribution of the flares observed only with XMM-Newton and Chandra since the characteristics of the flares observed with Swift are not sufficiently constrained. We then corrected this observed flare density of the merged detection bias of XMM-Newton and Chandra. §.§ Observed flare distribution The observed flare density is computed from the mean unabsorbed fluxes and durations of the X-ray flares observed with XMM-Newton and Chandra from 1999 to 2015 using the Delaunay tessellation field estimator (DTFE; ). We constructed the minimum triangulation of the Delaunay tessellation (blue lines in the top left panel of Fig. <ref>). The density associated with a given flare position is then computed via the Delaunay triangles connected to this flare and conserving the total flare number in the reconstructed density field. We computed for each flare, i, the area W_i=∑ A_k with, A_k, which is the area of the triangle k whose the vertex is the flare i at the location x_i. The flare density per surface unit in the mean unabsorbed flux duration plane that is associated with the flare i is d_i=3/W_i . The discretized map of the flare density is linearly interpolated inside the convex hull of the observed flare set at a point x in the Delaunay triangle m: d_obs=d_i+ ▽ d|_m(x-x_i) with d|_m the estimated constant density gradient within m. The resulting filled contour map of the observed flare density distribution is shown in the top right panel of Fig. <ref> with the density levels of the observed flares in logarithmic scale. §.§ X-ray flare detection efficiencyThe detection efficiency of the X-ray flares depends on the instrument sensitivity, non-flaring level, and observing time. We used the flare durations and mean unabsorbed fluxes to compute the detection efficiency of the Bayesian blocks algorithm at each point in this 2D parameter space for each XMM-Newton and Chandra observation (p_obs≤ 1). We defined a logarithmic mean unabsorbed flux sampling of 30 points between 0.6 and 40.0× 10^-12 erg s^-1 cm^-2 and a logarithmic duration sampling of 30 points between 300s and 10.5ks to cover the duration and flux ranges of the overall observed flares. We only analyzed the 2D grid points located inside and close to the convex hull (see bottom left panel of Fig. <ref>). The mean unabsorbed fluxes were converted into mean count rates using the average ratio of the count rate to unabsorbed flux computed for the flares detected with each instrument, i.e., 111.3, 248.2, and 148.1× 10^-12 erg s^-1 cm^-2/count s^-1 for XMM-Newton/EPIC pn, Chandra/ACIS-S3 subarray, and Chandra/ACIS-I, respectively. For each grid point, we simulated 200 event lists with Poisson flux reproducing a Gaussian-flare light curve with different mean count rates and durations (defined from -2σ to +2σ as inwith σ the Gaussian standard deviation) above each of these non-flaring levels. The detection efficiency depends strongly on the overlap between the flare duration and observing time, i.e., the edge effects. We thus first define the time range of the simulated event list as T_exp+2T_flare, where T_exp is the observing time and T_flare is the flare duration. We then drew, for each simulation, the time of the flare maximum as uniformly distributed between T_flare/2 and T_exp+3T_flare/2 and simulated the event list (see Appendix <ref> for details about the event list simulations). 
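A minimal sketch of this edge-effect construction is given below (the trimming step is spelled out in the next sentence); simulate_events stands in for the inverse-CDF sampler of Appendix <ref>, and both function names are ours.

```python
import numpy as np

def simulate_with_edge_effects(t_exp, t_flare, rng, simulate_events):
    """Simulate over the range t_exp + 2*t_flare with the flare maximum drawn
    uniformly in [t_flare/2, t_exp + 3*t_flare/2], then trim the event list
    to an exposure-long window so that the flare/exposure overlap varies."""
    total = t_exp + 2.0 * t_flare
    t_peak = rng.uniform(0.5 * t_flare, t_exp + 1.5 * t_flare)
    sigma = t_flare / 4.0   # flare duration defined from -2 sigma to +2 sigma
    t = simulate_events(total, t_peak, sigma, rng)   # hypothetical sampler
    keep = (t >= t_flare) & (t <= t_exp + t_flare)
    return t[keep] - t_flare
```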
We finally selected the events whose arrival times are between T_flare and T_exp+T_flare to create our final event list. We then applied the Bayesian blocks algorithm on each of these final event lists to compute how many times the algorithm detects the flare for a false positive rate for the flare detection of 0.1%. Since the flares are described with parameters that are independent of the telescope instruments, we were able to combine the p_obs of each instrument and each observation computed on the same grid. We first weighted the local detection efficiencies according to the exposure time of the corresponding observation, since the impact of the detection efficiency on the number of observed flares depends on the exposure. We finally summed the weighted local detection efficiencies to determine the merged (weighted mean) local detection efficiency of XMM-Newton and Chandra, shown in the bottom left panel of Fig. <ref> with the grid points. The merged local detection efficiency along the border of the convex hull was computed by a linear interpolation between the merged local detection efficiencies on either side of the convex hull. §.§ Correction of the observed flare distribution The observed flare distribution was finally corrected for the merged local detection efficiency to compute the intrinsic flare distribution. The observed flare distribution at each grid point x was corrected by the merged local detection efficiency p_merged(x) ≤ 1 as d_intr(x)=d_obs(x)/p_merged(x) (see Eq. 17 of ). The intrinsic flare distribution is shown with filled contours on a logarithmic scale in the bottom right panel of Fig. <ref>. The intrinsic flare distribution is now highest for the faintest and shortest flares. § TEMPORAL DISTRIBUTION OF THE X-RAY FLARES FROM 1999 TO 2015 We then combined the overall XMM-Newton, Chandra, and Swift observations and removed the observational gaps to create a continuous exposure containing the times of the 107 flares detected. The observational overlaps were also removed, keeping only the most sensitive instrument. Figure <ref> shows the flare arrival times without observing gaps over the total exposure time of 107.6days (corresponding to 9.3Ms). The height of each vertical line representing a flare corresponds to the mean unabsorbed flux (top panel) and fluence (mean unabsorbed flux times duration; bottom panel) between 2 and 10keV. We thus observe a flaring rate of 0.98± 0.09 flares per day, which is lower than but statistically consistent with the flaring rate deduced by <cit.> from the 2012 Chandra XVP campaign alone, since XMM-Newton and Swift are less sensitive to faint and short flares. We thus needed to correct the flaring rate for the detection bias due to the heterogeneous instrumental sensitivities. §.§ Correction of the sensitivity bias To correct the temporal flare distribution for the sensitivity bias, for each observation we determined the average flare detection efficiency η_obs by applying the detection efficiencies p_obs computed in Sect. <ref> for Swift and in Sect. <ref> for XMM-Newton and Chandra. The intrinsic flare distribution d_intr(x) at each grid point x is affected by p_obs(x) ≤ 1, so that only a fraction of this flare density is observed.
By computing the ratio between the 2D integral on the convex hull of the intrinsic flare distribution affected by the local detection efficiency for a given non-flaring level and that of the intrinsic flare distribution, we assessed the average flare detection efficiency η_obs<1 corresponding to this observation, η_obs=∫∫ d_intr(x)p_obs(x) dx/∫∫ d_intr(x) dx. The values of η_obs are reported in Tables <ref>, <ref> and <ref>. We thus obtained a set of merged observations from XMM-Newton, Chandra, and Swift, each containing N ≥ 0 flares, with their corresponding exposure and average flare detection efficiency η_obs. To correct the flaring rate for the instrumental sensitivity, for each observing time T we computed the corrected observing time T_corr=T η_obs, thus leading to a higher, unbiased flaring rate in the corresponding observation. Figure <ref> shows the flare times without observing gaps over the total corrected exposure time of 35.6days. §.§ Study of the unbiased X-ray flaring rate We divided the corrected exposure time into Voronoi cells, each containing one flare, whose separation times are the mean times between two consecutive flares. We applied the Bayesian blocks algorithm on the Voronoi cells with a false positive rate for the change point detection of p_1 = 0.05 and the corresponding ncp_prior=4.17, calibrated for 107 flares uniformly distributed during 35.6 days, i.e., corresponding to a Poisson flux. The overall flaring activity is constant, with a flaring rate of 3.0±0.3 flares per day. This is significantly higher than the rates computed by <cit.> since we corrected for the sensitivity bias. We now investigate the existence of a flux or fluence threshold that leads to a change in the flaring rate. §.§.§ Flux threshold for a change of flaring rate Two methods can be used to look for a flux threshold: first, the top-to-bottom search where, at each step, we remove the flare with the highest unabsorbed flux, but we keep the corresponding exposure time and update the Voronoi cells; and, second, the bottom-to-top search where, at each step, we remove the flare with the lowest unabsorbed flux or fluence (a schematic of this search loop is given below). At each step, we apply the Bayesian blocks algorithm with a false positive rate of p_1=0.05 on the resulting flare list, and we repeat this operation until the algorithm finds a flaring rate change. The ncp_prior is calibrated at each step according to the number of remaining flares to ensure a significance of at least 95% for any detected change point. Since we cannot argue that one of these two methods is better than the other, we tested both. We first performed a top-to-bottom search. A change of flaring rate is detected at 28.5 days, i.e., between the Chandra flare on 2013 May 25 and the second Chandra flare on 2013 July 27 (flares #70 and #72 in Table <ref> and Fig. <ref>), considering only the 70 flares with a mean unabsorbed flux lower than or equal to 6.5× 10^-12 erg s^-1 cm^-2 (the less luminous flares) with p_1 = 0.05 and the corresponding ncp_prior=4.18. The resulting Bayesian blocks are shown in the top panel of Fig. <ref>, where only these 70 flares are shown. The first block contains 65 flares while the second block contains 5 flares. The flaring rate decreases from 2.3 ± 0.3 to 0.7 ± 0.3 flares per day. By decreasing the false positive rate, this flaring rate change remains detected for p_1>0.034, which leads to a significance of 1-p_1=96.6%.
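The top-to-bottom loop can be summarized as follows; this is a schematic with hypothetical helper names, where detect_change_point stands for the calibrated Bayesian blocks run on the Voronoi cells of the remaining flares (the exposure being kept fixed).

```python
import numpy as np

def top_to_bottom_search(times, fluxes, detect_change_point):
    """Recursively remove the brightest remaining flare (the exposure is
    kept and the Voronoi cells are rebuilt inside detect_change_point)
    until a flaring-rate change point is detected."""
    order = np.argsort(fluxes)                    # faintest first
    times = np.asarray(times)[order]
    fluxes = np.asarray(fluxes)[order]
    while len(times) > 2:
        cp = detect_change_point(np.sort(times))  # calibrated ncp_prior inside
        if cp is not None:
            return cp, fluxes[-1]                 # change point, flux threshold
        times, fluxes = times[:-1], fluxes[:-1]   # drop the brightest flare
    return None, None
```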
To compare the two flaring rates, we computed the p-value for the null hypothesis that the flaring rates are the same (i.e., a rate ratio of 1), considering a Poisson process for the flare arrival times <cit.>. The p-value comparing the 65 flares that occur in 28.5 days and the 5 flares that occur in 7.09 days is 0.006. The ratio between the two flaring rates is 3.2 and the 95% confidence interval is 1.3-10.3. We then performed the bottom-to-top search by recursively removing the flare with the lowest unabsorbed flux and applying the Bayesian blocks algorithm. We found one change of flaring rate by considering only the 66 flares with a mean unabsorbed flux larger than or equal to 4.0× 10^-12 erg s^-1 cm^-2 (the most luminous flares) with p_1 = 0.05 and the corresponding ncp_prior=4.31. The resulting Bayesian blocks are shown in the bottom panel of Fig. <ref>, where only these 66 flares are shown. The change of flaring rate happened on 2014 August 31 (33.36 days), between the two XMM-Newton flares #16 and #17 in Table <ref> and Fig. <ref>. The blocks contain 55 and 11 flares, corresponding to flaring rates of 1.6±0.2 and 5.0±1.5 flares per day. This flaring rate change is still detected down to a false positive rate of p_1 = 0.048 (ncp_prior=4.35), thus leading to a significance for this change point of 1-p_1=95.2%. The p-value comparing the 55 flares that occur in 33.36 days and the 11 flares that occur in 2.22 days is 0.005. The ratio between the two flaring rates is 3.0 and the 95% confidence interval is 1.4-5.8. A summary of these results is given in Table <ref>. To assess the probability of a false positive, we simulated a homogeneous Poisson flux of flare arrival times using the method described in Appendix <ref>. We created 500 sets of 107 arrival times uniformly distributed in 0–35.6days. We also considered a constant flux distribution, i.e., we created 500 sets of 107 fluxes uniformly distributed in 0.6-59.4× 10^-12 erg s^-1 cm^-2 that we associated with the flare arrival times. We then performed the top-to-bottom and bottom-to-top searches and recorded the flaring rate changes. We detected a change of flaring rate in both the top-to-bottom and bottom-to-top searches in only 42 trials, corresponding to 8.4% of the simulations. In these 42 subsets, none of the change points were detected after 37 and 41 cuts, as is the case for our observations; but for 71% of these sets (i.e., 32 sets), the change points were detected after more than 77 cuts, which is a greater number than in our observations. This implies that the joint probability of observing a change point in subsamples containing 70 and 65 flares is lower than (1/500)^2=4× 10^-6. Moreover, for 38 of the 42 subsets (90%), the time intervals between the two change points are between -2 and 0days. Only one of the 42 subsets (2%) has a time interval between 4.5 and 5days, as found in our observations. In the light of these results, the probability that the change points found in our observations by the two search methods are due to the detection of a false positive is lower than 1.2× 10^-5. We can thus state that the change of flaring rate that we observe is likely due to a change in the flux distribution. §.§.§ Fluence threshold for a change of flaring rate We carried out the same study with the unabsorbed fluence.
We first performed the top-to-bottom search: a change of flaring rate was found considering only the 65 flares with an unabsorbed fluence lower than or equal to 128.1× 10^-10 erg cm^-2 (the less energetic flares) with p_1 = 0.05 and the corresponding ncp_prior=4.12. The resulting Bayesian blocks are shown in the top panel of Fig. <ref>, where only these 65 flares are shown. The first block contains 60 flares while the second one contains 5 flares. The change of flaring rate happens between the second Chandra flare on 2013 July 27 (flare #72 in Table <ref> and Fig. <ref>) and the first Chandra flare on 2013 October 28 (flare #74 in Table <ref> and Fig. <ref>) (29.59 days). The corresponding flaring rates are 2.0±0.3 and 0.8±0.4 flares per day. We detected this flaring rate change down to a false positive rate of p_1 = 0.049 (ncp_prior=4.14), which leads to a probability of 1-p_1=95.1% that this change of flaring rate is real. The p-value comparing the 60 flares that occur in 29.59 days and the 5 flares that occur in 5.99 days is 0.05. The ratio between the two flaring rates is 2.4 and the 95% confidence interval is 1.0-7.7. For the bottom-to-top search, a change of flaring rate was detected considering only the 54 flares with a mean unabsorbed fluence larger than or equal to 91.3× 10^-10 erg cm^-2 (the most energetic flares) with p_1 = 0.05 and the corresponding ncp_prior=4.01. The resulting Bayesian blocks are shown in the bottom panel of Fig. <ref>, where only these 54 flares are shown. The two blocks are described by flaring rates of 1.2 ± 0.2 and 4.1 ± 1.3 flares per day. The change of flaring rate happens on 2014 August 31 (33.36 days), between the two XMM-Newton flares #16 and #17 in Table <ref> and Fig. <ref>. This flaring rate change was detected down to a false positive rate of p_1 = 0.049 (ncp_prior=4.03), which leads to a probability of 1-p_1=95.1% that this change of flaring rate is real. This increase of flaring rate for the most energetic flares occurs on the same date as the increase of the flaring rate for the most luminous flares. The p-value comparing the 45 flares that occur in 33.36 days and the 9 flares that occur in 2.22 days is 0.011. The ratio between the two flaring rates is 3.0 and the 95% confidence interval is 1.3-6.2. A summary of these results is given in Table <ref>. By performing the same simulations as described previously, the probability that the change points found for our observations by the two search methods are due to the detection of a false positive is lower than 6.5× 10^-5. We can thus state that the change of flaring rate that we observe is likely due to a change in the fluence distribution. § DISCUSSION Our high flaring rate Bayesian block for the most energetic flares identifies the same five flares that created the increase of flaring rate in <cit.>, plus three additional flares observed in 2015 with Chandra (flares #78 and #79) and Swift (flare #8). The start of the high flaring rate happened 131 days (80–181 days) after the DSO/G2 pericenter passage near Sgr A* (computed with the DSO/G2 pericenter passage determined by ). As argued in <cit.>, if some material from DSO/G2 was accreted toward Sgr A*, the increase of flux should not be observed before the end of 2017, considering a pericenter distance of 2000R_s and an efficiency of the mechanism of angular momentum transport of α=0.1. Two interpretations can thus be proposed to explain this increase of flaring rate.
Firstly, the increase of flaring rate could be due to the accretion of matter from the DSO/G2 onto Sgr A*, considering an efficiency of the mechanism of angular momentum transport of at least 0.6. Secondly, the increase of flaring rate could be explained by an increase of the efficiency of the mechanisms producing the X-ray flares, such as a Rossby instability producing magnetized plasma bubbles in the hot accretion flow <cit.>, additional heating of electrons due to accretion instability or magnetic reconnection <cit.>, or the tidal disruption of an asteroid <cit.>. Interestingly, the decay of the flaring rate of the less luminous and less energetic flares occurs about 300 and 220 days before the DSO/G2 pericenter passage near Sgr A*, that is, about 13 and 10 months before the increase of the flaring rate of the most luminous and most energetic flares, respectively. For comparison, we compute the energy saved during the decay of the flaring rate of the less energetic flares occurring between 2013 July 27 and October 28 and the energy lost during the increase of the flaring rate of the most energetic flares after 2014 August 31 as E_saved < F_1 ∫_t_1^T Δ CR_1 dt = F_1 Δ CR_1 (T-t_1) and E_lost > F_2 ∫_t_2^T Δ CR_2 dt = F_2 Δ CR_2 (T-t_2), where T=35.6 d is the total corrected exposure, F_1 and F_2 are the fluence thresholds, t_1 and t_2 are the corrected dates of the change points, and Δ CR_1 and Δ CR_2 are the absolute values of the difference in flaring rate between the first and second blocks (see Table <ref>). For the less energetic flares, F_1=128.1× 10^-10 erg cm^-2, Δ CR_1=1.2±0.5 flares per day, and T-t_1=6.0±0.8 corrected days, which leads to a saved energy of E_saved<(9.2±4.8)× 10^-8 erg cm^-2. For the most energetic flares, F_2=91.3× 10^-10 erg cm^-2, Δ CR_2=2.8±1.4 flares per day, and T-t_2=2.214±0.005 corrected days, which leads to a lost energy of E_lost>(5.6±2.7)× 10^-8 erg cm^-2. Therefore, the energy saved by the decrease of the number of less energetic flares during several corrected days could be released by a few bright flares during a shorter period. This energy could be stored in the distortions of the magnetic field lines and then released during a magnetic reconnection event. This is reminiscent of the behavior of earthquakes, in which stresses produce several small events during a long period of time or may accumulate before being released in a large event. The input of fresh accreting material from the DSO/G2 is thus not needed to explain this large increase of the most luminous and most energetic flares. § CONCLUSIONS The Swift campaigns and the Chandra and XMM-Newton observations of Sgr A* from 1999 to 2015 have allowed us to compute the intrinsic distribution of the mean unabsorbed flare flux and duration and to study the significance of a change in the flaring rate. The 99 X-ray flares observed with Chandra and XMM-Newton were detected via the two-step Bayesian blocks algorithm <cit.>, and the 8 X-ray flares observed with Swift were detected via an improvement of the <cit.> detection method. By correcting the observed flare flux and duration distribution for the merged local detection efficiency of XMM-Newton and Chandra, we have been able to estimate the intrinsic flare flux and duration distributions, which are maximum for the faintest and shortest flares.
The flaring rate observed by Chandra, XMM-Newton, and Swift together has then been corrected for the average flare detection efficiency of the corresponding instruments. No significant change of flaring rate is found with the Bayesian blocks algorithm when considering the overall flares, which leads to an intrinsic flaring rate of 3.0±0.3 flares per day. However, we identify, for the first time, a significant decay of the flaring rate (with a probability larger than 95.1%) for the less luminous (fainter than 6.5× 10^-12 erg s^-1 cm^-2) and less energetic (lower than 128.1× 10^-10 erg cm^-2) flares by a factor of 3.2 (1.3–10.3, 95% confidence limits) and 2.4 (1.0–7.7, 95% confidence limits) after 2013 May 25 and 2013 July 27, respectively (see Table <ref>). These decays occur about 300 and 220 days before the pericenter passage of the DSO/G2, which implies that this change of flaring rate is difficult to explain by the passage of the DSO/G2 near Sgr A*. We confirm a significant increase of the flaring rate (with a probability larger than 95.1%) for the most luminous (brighter than 4.0× 10^-12 erg s^-1 cm^-2) and most energetic (larger than 91.3× 10^-10 erg cm^-2) flares by a factor of 3.0 (95% confidence limits of 1.4–5.8 and 1.3–6.2), respectively, from 2014 August 31 until 2015 November 2 (i.e., the last observation). The energy released during this increase of the bright flaring rate could come from the energy saved during the decay of the faintest flares. The input of fresh accreting material from the DSO/G2 is thus not needed to explain this large increase of the most luminous and most energetic flares. We thank the PIs who have obtained the X-ray observations of Sgr A* used in this work since 1999. E.M. acknowledges Université de Strasbourg for her IdEx PhD grant. This work made use of public data from the Swift data archive, and data supplied by the UK Swift Science Data Center at the University of Leicester. Swift is supported at Penn State University by NASA Contract NAS5-00136. This research has made use of the XRT Data Analysis Software (XRTDAS) developed under the responsibility of the ASI Science Data Center (ASDC), Italy. This work is also based on public data from the XMM-Newton project, which is an ESA Science Mission with instruments and contributions directly funded by ESA member states and the USA (NASA). This work also uses public data obtained from the Chandra Data Archive. § THE OBSERVATION LOG AND THE X-RAY FLARES DETECTED FROM 1999 TO 2015 § X-RAY FLARES DETECTED FROM 1999 TO 2015 WITH XMM-NEWTON, CHANDRA, AND SWIFT § COMPARISON WITH PREVIOUS WORKS As explained in the introduction, <cit.> used the Python implementation of the Bayesian blocks algorithm to detect the X-ray flares observed by Chandra and XMM-Newton from 1999 to 2014. We thus compare the duration and fluence of the flares detected here using the two-step Bayesian blocks algorithm (the red circles in Fig. <ref>) with those that <cit.> detected (the black circles in Fig. <ref>). The flares that were detected in both works are connected with a gray line. During the 37 XMM-Newton observations from 2000 to 2014 that we have in common with <cit.>, we detected 19 flares with a false positive rate for the flare detection of 0.1%, whereas they detected only 11 flares with a false positive rate for the flare detection of 0.25%. <cit.> missed six of our flares (the red filled circles in Fig. <ref>)[Their missed XMM-Newton flares are on 2004 March 31 (flares #3 and #4 in Table <ref> and Fig.
<ref>); August 31 (#5); September 1 (#6); 2007 April 4 (#10); and 2009 April 3 (#11).]. Finally, two additional flares were observed during the eight recently released XMM-Newton observations of 2014 and 2015 (the two asterisks in Fig. <ref>). We missed the flare labeled #1 in Fig. 3 of <cit.>. However, this flare was detected by those authors when combining the three instruments of EPIC (MOS1, MOS2, and pn) and was confirmed by the simultaneous observations of the near-infrared counterpart with the Hubble Space Telescope. During the 112 Chandra observations from 1999 to 2014 that we have in common with them, we detected 75 flares, whereas they detected only 69 flares. <cit.> missed 13 of our flares (the red filled circles in Fig. <ref>)[Their missed Chandra flares are on 1999 September 21 (flare #1 in Table <ref> and Fig. <ref>); 2000 October 27 (#2); 2006 September 25 (#16); 2012 February 9 (#26); March 17 (#28); May 12 (#38); May 19 (#40); July 21 (#45); July 25 (#50) and 28 (#52); October 17 (#63); 2013 May 25 (#70); and July 27 (#72).]. We considered the 2014 October 20 Chandra flare as a single flare (#15), since the Bayesian block between 14.2 and 14.8 hours is significantly above the non-flaring level, whereas <cit.> considered it as two flares. Moreover, since <cit.> only considered observations where Sgr A* was at less than 2 off-axis angle, they did not study the 2011 July 21 Chandra observation where we detect one flare (#25, the asterisk at the corresponding flare index in Fig. <ref>). Finally, four additional flares (#77 to #80) were observed during the recently released last Chandra observations of 2015 (the three last asterisks in Fig. <ref>). The majority of the flares missed by <cit.> have already been reported in previous works. The XMM-Newton flares on 2004 March 31 (#3 and #4) and August 31 (#5) were detected by <cit.> and <cit.>. The XMM-Newton flare on 2007 April 4 (#10) was detected and labeled #5 by <cit.>. The Chandra flares on 2012 February 9 (#26), May 12 (#38), July 25 (#50), and October 17 (#63) were detected by <cit.> and <cit.>. The Chandra flares on 2000 October 27 (#2), 2006 September 25 (#16), 2012 May 19 (#40), July 21 (#45), and July 28 (#52) were also detected by <cit.>. <cit.> reported seven putative Chandra flares[Their putative Chandra flares are on 2012 August 4 19:32:37, 2013 August 11 10:04:15, 2013 August 31 16:07:43, 2013 September 20 11:21:00, 2013 October 17 16:12:36, 2014 February 21 00:51:18, and 2014 August 30 12:26:19.] that we do not confirm with our more robust method (the black filled circles in Fig. <ref>). Their inconsistency in flare detection with respect to previous studies and our work may be explained by their blind use of the geometric prior. This effect is clearly visible with their putative Chandra flares: the black filled circles in Fig. <ref> are clustered when the contribution of the Galactic center magnetar to the Sgr A* event lists is high (i.e., between 2013 and 2014). Owing to the higher noise level of these observations, the absence of calibration of the prior may lead to the spurious detection of blocks with a very small increase of the count rate compared to the non-flaring level, and thus a very low fluence. Conversely, due to the low signal-to-noise ratio of the Chandra data from Sgr A* before 2013, the calibration of the prior is highly sensitive to the number of events in each observation.
Therefore, <cit.> missed several Chandra flares owing to the inconsistency between their false positive rate, the number of events in the Chandra observations, and the prior. This effect has a smaller impact on the XMM-Newton observations due to their higher signal-to-noise ratio. For the flares in common, the flare durations are roughly consistent, but the improved fluences computed in this work are typically larger than those computed in <cit.> because of their use of WebPIMMS for the computation of the flare flux. Indeed, WebPIMMS considers the effective area and the redistribution matrix computed for an on-axis source and for the entire field of view. However, the flare events were extracted from circular regions of 1.25″ and 10″ radius centered on the source (with a maximum off-axis angle of 2). Since the PSF extraction fraction is not corrected by WebPIMMS, the inferred unabsorbed flux is systematically underestimated by <cit.>. § SIMULATION OF POISSON FLUX TO DETERMINE THE FLARE DETECTION EFFICIENCY We recall that for a homogeneous Poisson flux, i.e., a constant mean count rate CR, the average number of recorded events during an exposure T is N=CR× T, with a standard deviation of √(N). Therefore, we simulate a constant Poisson flux by first drawing the total number M of events in the simulated event list following a Poisson probability distribution, i.e., P(M)=(N^M/M!) e^{-N}, and then drawing M values uniformly distributed between 0 and 1, sorting them in ascending order, and multiplying them by T. This two-step method is equivalent to the iterative method of <cit.>, which draws the waiting time before the next event from its decreasing exponential distribution until the simulated arrival time of the event exceeds the exposure time; the resulting total number of events thus follows a Poisson distribution. To determine the flare detection efficiency, we consider a Gaussian-shaped flare superimposed on a constant level, leading to a non-homogeneous Poisson process. The constant level is characterized by a constant Poisson flux of mean count rate CR during a total observing time T, leading to an average number of events N_c=CR× T, whereas the flare light curve peaks at t_peak with a count rate amplitude A_peak, leading to an average number of events N_g = A_peak ∫_0^T e^{-(t-t_peak)^2/(2σ^2)} dt. The total number of events M in each simulation thus follows a Poisson distribution of mean N=N_g+N_c. We use the inverse method (see , Chapter 7 of , and Fig. 2 of ) based on the inverse of the cumulative distribution function (CDF) to simulate the arrival times of these M events. The CDFs for the non-flaring level and for the flare are, respectively, CDF_c(t) = t/T and CDF_g(t) = (A_peak σ/N_g) √(π/2) [erf(t_peak/(√(2) σ)) + erf((t-t_peak)/(√(2) σ))]. We combine the constant and Gaussian CDFs as CDF_c+g(t)=CDF_c(t) N_c/(N_g+N_c) + CDF_g(t) N_g/(N_g+N_c). We then draw M values of y uniformly distributed between 0 and 1 and sort these values in ascending order. The corresponding arrival times of the events are finally obtained from CDF^-1_c+g(y). The top panel of Fig. <ref> shows these CDFs for a typical exposure of 35ks with a non-flaring level of CR=0.1 count s^-1, which corresponds to that observed by XMM-Newton EPIC/pn, and a flare peaking at the exposure center with an amplitude of A_peak=0.2 count s^-1, which corresponds to the mean amplitude measured in the X-ray flares, thus leading to N_g=752 counts.
The corresponding constant and Gaussian light curve models are shown with the corresponding colors in the bottom panel. The simulated arrival times are shown as black ticks at the top of the bottom panel of Fig. <ref> (only 1 arrival time in 20 is shown for clarity). The resulting simulated light curve, binned on 100s, is shown in the bottom panel of this figure. For illustration purposes, the Bayesian blocks computed for a false positive rate for the flare detection of 0.1% are also represented in this figure.
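A compact implementation of this sampler could read as follows; this is our own sketch, in which the inversion of CDF_c+g is done numerically by interpolation on a fine time grid rather than with a closed-form inverse, and a strictly positive constant rate cr is assumed so that the combined CDF is monotonic.

```python
import numpy as np
from scipy.special import erf

def simulate_flare_events(t_exp, cr, a_peak, t_peak, sigma, rng):
    """Draw event arrival times for a Gaussian flare of amplitude a_peak
    superimposed on a constant level cr, via inversion of the combined CDF."""
    # Expected counts from the flare (integral of the Gaussian over [0, t_exp])
    n_g = a_peak * sigma * np.sqrt(np.pi / 2.0) * (
        erf(t_peak / (np.sqrt(2) * sigma))
        + erf((t_exp - t_peak) / (np.sqrt(2) * sigma)))
    n_c = cr * t_exp
    m = rng.poisson(n_g + n_c)                  # total number of events
    grid = np.linspace(0.0, t_exp, 100001)
    cdf_c = grid / t_exp
    cdf_g = (a_peak * sigma / n_g) * np.sqrt(np.pi / 2.0) * (
        erf(t_peak / (np.sqrt(2) * sigma))
        + erf((grid - t_peak) / (np.sqrt(2) * sigma)))
    cdf = (cdf_c * n_c + cdf_g * n_g) / (n_c + n_g)
    y = np.sort(rng.uniform(0.0, 1.0, m))
    return np.interp(y, cdf, grid)              # CDF^-1 by interpolation
```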
Fractional Multidimensional System Xiaogang Zhu and Junguo Lu Junguo Lu is with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China Received; accepted ========================================================================================================================================================================§ ABSTRACT The multidimensional (n-D) systems described by the Roesser model are presented in this paper. These n-D systems consist of discrete systems and continuous fractional order systems with fractional order ν, 0<ν≤1. The stability and robust stability of such n-D systems are investigated. Keywords: n-D; fractional; stability; robust § INTRODUCTION Multidimensional (n-D) systems have been studied for almost four decades <cit.>. They have been applied in fields such as image processing <cit.>, n-D coding and decoding <cit.>, and n-D filtering <cit.>. An n-D system can represent a dynamic process in which information propagates in several independent directions, whereas the information in a one-dimensional system propagates in only one direction. As for multidimensional systems consisting of fractional order differential equations, Galkowski et al. first presented such a system in 2005 <cit.>. Until now, however, research on fractional n-D systems has addressed either discrete systems <cit.> or continuous systems with different fractional orders <cit.>. To the best of our knowledge, fractional n-D systems that consist of a discrete system and a fractional order system have not been studied. This paper focuses on a hybrid n-D system which consists of a discrete system and a continuous fractional order system. For a matrix X, X^* and X^T denote the conjugate transpose and the transpose of X, respectively. Sym(X) denotes X+X^*. I is the identity matrix with appropriate dimensions. For a matrix X, X>0 (X≥0) means positive definite (semi-definite) and X<0 (X≤0) means negative definite (semi-definite). The notation ℋ_n stands for the set of Hermitian matrices of dimension n; ℋ_n^+⊂ℋ_n stands for the subset of positive definite matrices, while ℋ_n^-⊂ℋ_n is the subset of negative definite matrices. Let the following notations be defined: i=1k⊕M_i=i=1,...,kdiagM_i and 𝕀(k)≜1,...,k, k∈ℕ. § PRELIMINARIES Based on Bochniak's model <cit.>, Bachelier <cit.> presented a hybrid version of the Roesser model <cit.>, which combines an integer order continuous system and a discrete system. Here, we apply the Roesser model to a continuous-discrete fractional order system: [D^νx^1(t_1,...,t_r,j_r+1,...,j_k);⋮;D^νx^r(t_1,...,t_r,j_r+1,...,j_k); x^r+1(t_1,...,t_r,j_r+1+1,...,j_k);⋮; x^k(t_1,...,t_r,j_r+1,...,j_k+1) ] =[ A_c A_cd; A_dc A_d ][ x^1(t_1,...,t_r,j_r+1,...,j_k);⋮; x^r(t_1,...,t_r,j_r+1,...,j_k); x^r+1(t_1,...,t_r,j_r+1,...,j_k);⋮; x^k(t_1,...,t_r,j_r+1,...,j_k) ]+[ B_1; B_2 ]u(t_1,...,t_r,j_r+1,...,j_k) y(t_1,...,t_r,j_r+1,...,j_k) =[ C_1 C_2 ][ x^1(t_1,...,t_r,j_r+1,...,j_k);⋮; x^r(t_1,...,t_r,j_r+1,...,j_k); x^r+1(t_1,...,t_r,j_r+1,...,j_k);⋮; x^k(t_1,...,t_r,j_r+1,...,j_k) ]+Du(t_1,...,t_r,j_r+1,...,j_k) where ∑_i=1^k n_i=n and 0<ν≤1. The vectors x^i(t_1,...,t_r,j_r+1,...,j_k), u(t_1,...,t_r,j_r+1,...,j_k), and y(t_1,...,t_r,j_r+1,...,j_k) are the local state subvectors, the input vector, and the output vector, respectively.
The matrices
A = [ A_c A_cd; A_dc A_d ] ∈ ℝ^n×n, B = [ B_1; B_2 ] ∈ ℝ^n×p, C = [ C_1 C_2 ] ∈ ℝ^l×n, D ∈ ℝ^l×p
are the state, control, observation and transfer matrices, respectively.

By applying the Laplace transform and the Z-transform to system (<ref>), the following can be obtained:
Y(ρ) = T(ρ) U(ρ),
where
T(ρ) = C(H(ρ)-A)^-1 B + D, H(ρ) ≜ ⊕_i=1^k ρ_i I_n_i.
Then, Δ(ρ,A) is defined by
Δ(ρ,A) ≜ det(H(ρ)-A).

In this paper, we use Caputo's fractional derivative, whose Laplace transform allows the utilization of initial values. Caputo's fractional derivative is defined as <cit.>
_aD_t^α f(t) = 1/Γ(α-n) ∫_a^t f^(n)(τ) dτ/(t-τ)^α+1-n,
where n is an integer satisfying 0 ≤ n-1 < α < n, and Γ(·) is the Gamma function, defined as
Γ(z) = ∫_0^∞ e^-t t^z-1 dt.

The following lemmas are useful for presenting our results.

<cit.> For given matrices 𝒰∈ℂ^n×m and Φ=Φ^∗∈ℂ^n×n, the following two statements are equivalent:
* 𝒰Φ𝒰^*<0;
* there exists a matrix X such that Φ+𝒩_uX𝒩_u^*<0,
where 𝒩_u is the orthogonal complement of 𝒰.

<cit.> For given matrices 𝒰∈ℂ^n×m, 𝒱∈ℂ^k×n, and Φ=Φ^∗∈ℂ^n×n, the following two statements are equivalent:
* there exists a matrix 𝒳∈ℂ^m×k such that Sym{𝒰𝒳𝒱}+Φ<0;
* 𝒩_uΦ𝒩_u^*<0 and 𝒩_v^*Φ𝒩_v<0 hold,
where 𝒩_u and 𝒩_v^* are the orthogonal complements of 𝒰 and 𝒱^*, respectively.

<cit.> Let F(s) be the Laplace transform of the function f(t). Then, for any ν>0,
lim_t→∞ D^ν f(t) = lim_s→0 s^ν+1 F(s), Re(s)>0.

<cit.> A multidimensional discrete system
[ x^1(j_1+1,…,j_k); ⋮; x^k(j_1,…,j_k+1) ] = A [ x^1(j_1,…,j_k); ⋮; x^k(j_1,…,j_k) ]
is asymptotically stable if and only if
det(H(z)-A) ≠ 0, ∀ z∈𝒰_z,
where H(z) is defined as in (<ref>) and 𝒰_z = {z=[z_1,…,z_k]^T : |z_i| ≥ 1, i=1,…,k}.

§ MAIN RESULTS

§.§ Stability

A multidimensional continuous fractional order system with order 0<ν≤1 and det(A)≠0,
[ D^ν x^1(t_1,…,t_r); ⋮; D^ν x^r(t_1,…,t_r) ] = A [ x^1(t_1,…,t_r); ⋮; x^r(t_1,…,t_r) ],
is asymptotically stable if
det(H(λ)-A) ≠ 0, ∀ λ∈𝒰_λ,
where H(λ) is defined as in (<ref>) and 𝒰_λ = {λ = [λ_1,…,λ_r]^T : |arg(λ_i)| ≤ νπ/2, 0<ν≤1, i=1,…,r}.

Applying the Laplace transform to the multidimensional fractional system (<ref>), the following holds:
(H(s^ν)-A) [ X_1(s); X_2(s); ⋮; X_r(s) ] = [ s^ν-1 x_1(0); s^ν-1 x_2(0); ⋮; s^ν-1 x_r(0) ].
It leads to
(H(s^ν)-A) [ s^ν+1 X_1(s); s^ν+1 X_2(s); ⋮; s^ν+1 X_r(s) ] = [ s^2ν x_1(0); s^2ν x_2(0); ⋮; s^2ν x_r(0) ].
Since det(H(λ)-A)≠0 for all λ∈𝒰_λ, we have det(H(s^ν)-A)≠0 when Re(s)>0. It means that (<ref>) has the unique solution
[ s^ν+1 X_1(s); s^ν+1 X_2(s); ⋮; s^ν+1 X_r(s) ] = (H(s^ν)-A)^-1 [ s^2ν x_1(0); s^2ν x_2(0); ⋮; s^2ν x_r(0) ].
Therefore, the following holds:
lim_s→0, Re(s)>0 s^ν+1 X_i(s) = 0, i=1,2,…,r.
According to Lemma <ref>,
lim_t→+∞ D^ν x^i(t) = lim_s→0, Re(s)>0 s^ν+1 X_i(s) = 0, i=1,…,r.
Therefore,
lim_t→+∞ x(t) = A^-1 lim_t→+∞ D^ν x(t) = 0,
which implies that the system is asymptotically stable. This completes the proof.

Consider a multidimensional system represented by (<ref>). It is asymptotically stable if
Δ(ρ,A) ≠ 0, ∀ ρ∈𝒰_zλ,
where Δ(·) is defined by (<ref>) and 𝒰_zλ is defined by
𝒰_zλ = { ρ = [ ρ_1; ⋮; ρ_k ] ∈ ℂ^k : |arg(ρ_i)| ≤ νπ/2, 0<ν≤1, i=1,…,r; |ρ_i| ≥ 1, i=r+1,…,k }.
This can be proved by a straightforward combination of Theorem <ref> and Lemma <ref>.

§.§ Point-clustering

To proceed, consider the following matrices:
P_i = [ p_i_11 p_i_12; p_i_12^* p_i_22 ] ∈ ℂ^2×2, Q_i = [ q_i_11 q_i_12; q_i_12^* q_i_22 ] ∈ ℂ^2×2.
Define the sets 𝒟_i as
𝒟_i ≜ { s∈ℂ : ℱ_P_i(s) ≥ 0, ℱ_Q_i(s) ≥ 0 }, ∀ i∈𝕀(k),
where the functions ℱ_X_i(s) are defined by
ℱ_X_i(s) ≜ [ sI; I ]^* X_i [ sI; I ], ∀ i∈𝕀(k).
We limit our consideration to sets described by 𝒟_i. Define the "k-region" 𝒟 as
𝒟 ≜ 𝒟_1 × 𝒟_2 × ⋯ × 𝒟_k.

Let 𝒟 represent 𝒰_zλ. Then the matrices P_i and Q_i are
P_i = [ 0 sin(νπ/2)-jcos(νπ/2); sin(νπ/2)+jcos(νπ/2) 0 ],
Q_i = [ 0 sin(νπ/2)+jcos(νπ/2); sin(νπ/2)-jcos(νπ/2) 0 ],
where 0<ν≤1, for all i∈𝕀(r), and
P_i = [ 1 0; 0 -1 ], Q_i = 0_2
for all i∈{r+1,…,k}.

The following gives a sufficient condition for the stability of system (<ref>).

The system (<ref>) is asymptotically stable if there exist matrices U_n_i, V_n_i∈ℋ^+_n_i and a matrix J=J^* such that
Z = G + [ I -A ]^* J [ I -A ] < 0,
where
G ≜ [ ⊕_i=1^k (U_i p_i_11 + V_i q_i_11) ⊕_i=1^k (U_i p_i_12 + V_i q_i_12); ⊕_i=1^k (U_i p_i_12^* + V_i q_i_12^*) ⊕_i=1^k (U_i p_i_22 + V_i q_i_22) ]
and p_i_11, p_i_12, p_i_22, q_i_11, q_i_12, q_i_22 are defined in (<ref>) with (<ref>) and (<ref>).

We prove that if ρ satisfies det(⊕_i=1^k ρ_i I_n_i - A) = 0, then ρ∉𝒟. Let A(ρ) = ⊕_i=1^k ρ_i I_n_i - A. If det(A(ρ)) = 0, then there exists a nonzero vector y∈ℂ^n such that
A(ρ) y = 0.
Let
y = [ y_1; ⋮; y_k ],
where y_i∈ℂ^n_i, and let
x = [ (⊕_i=1^k ρ_i I_n_i) y; y ].
From (<ref>), we get
x^* Z x < 0,
which leads to
∑_i=1^k ( ℱ_P_i(ρ_i) y_i^* U_i y_i + ℱ_Q_i(ρ_i) y_i^* V_i y_i ) + (A(ρ)y)^* J (A(ρ)y) < 0.
According to (<ref>), the second term of (<ref>) is zero. If ρ∈𝒟, then according to (<ref>) the first term of (<ref>) is positive or zero, a contradiction. Therefore, when ρ∈𝒟, det(A(ρ)) ≠ 0, and due to Theorem <ref>, the system (<ref>) is stable.

The system (<ref>) is asymptotically stable if there exist matrices U_n_i, V_n_i∈ℋ^+_n_i and a matrix J=J^* such that
G + [ I; -A ] J [ I; -A ]^* < 0,
where G is defined as in (<ref>).

Similar to the proof of Theorem <ref>, if inequality (<ref>) holds, then
det(⊕_i=1^k ρ_i I_n_i - A^T) ≠ 0, ∀ ρ∈𝒟,
which is equivalent to
det(⊕_i=1^k ρ_i I_n_i - A) ≠ 0, ∀ ρ∈𝒟.
Therefore, the system is asymptotically stable due to Theorem <ref>.

The system (<ref>) is asymptotically stable if there exist matrices U_n_i, V_n_i∈ℋ^+_n_i and a matrix R such that
G + Sym{ [ I; -A ] R [ I I ] } < 0,
where G is defined as in (<ref>).

Let
G_11 ≜ ⊕_i=1^k (U_i p_i_11 + V_i q_i_11),
G_12 ≜ ⊕_i=1^k (U_i p_i_12 + V_i q_i_12),
G_22 ≜ ⊕_i=1^k (U_i p_i_22 + V_i q_i_22).
Then,
[ I -I ] G [ I -I ]^* = G_11 - G_12^* - G_12 + G_22 = [ -2( ⊕_i=1^r (U_i sin(νπ/2) + V_i sin(νπ/2)) ) 0; 0 ⊕_i=r+1^k 0_n_i ] < 0.
It is straightforward to verify that
[ I -I ] [ I; I ] = 0.
Therefore, according to Lemma <ref> and inequalities (<ref>) and (<ref>), the following inequality holds:
[ A I ] G [ A^T; I ] < 0.
According to Lemma <ref>, inequality (<ref>) is then equivalent to
G + [ I; -A ] J [ I -A^T ] < 0.
Thus, according to Corollary <ref>, the system is asymptotically stable. This completes the proof.

§ NUMERICAL EXAMPLES

§.§ Example 1

The following example presents a (1+1)D system of the form (<ref>), i.e., a system with one continuous independent variable and one discrete independent variable. The following system is considered:
ν = 0.5,
A_c = [ -0.8 0; 0 -1.2 ], A_cd = [ 0.5 0.3; 0.7 0.2 ],
A_dc = [ 0.4 0.3; 0.8 0.9 ], A_d = [ -0.3 0; 0 -0.6 ],
A = [ A_c A_cd; A_dc A_d ].
Let the system be system (<ref>) with (<ref>). Applying Theorem <ref>, the variables can be calculated with the Matlab LMI toolbox. The solution is
U_1 = [ 146.84 0; 0 146.84 ], U_2 = [ 24.3 5.99; 5.99 1.9 ],
V_1 = [ 4.24 14.68; 14.68 194.82 ], V_2 = [ 146.84 0; 0 146.84 ],
J = [ -164.9 -57.19 -210.42 -75.7; -57.19 -108.77 -154.38 -62.71; -210.42 -154.38 -643.18 -137.73; -75.7 -62.71 -137.73 -116.65 ].
It means that the continuous-discrete (1+1)D system is stable.

§.§ Example 2

The following system is considered:
ν = 0.9,
A_c = [ -0.8 0.5; 0.3 -1.2 ], A_cd = [ 0.5 0.6; 0.7 0.8 ],
A_dc = [ 0.9 0.1; 0.2 0.1 ], A_d = [ -0.7 0; 0 -0.2 ],
A = [ A_c A_cd; A_dc A_d ].
Let the system be system (<ref>) with (<ref>). Applying Corollary <ref>, the variables can be calculated with the Matlab LMI toolbox.
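As an independent plausibility check (not part of the original computation), the determinant condition Δ(ρ,A)≠0 can also be probed numerically by sampling the region 𝒰_zλ on a coarse grid. The Python sketch below uses the matrices of Example 2; the grid bounds and resolution are arbitrary choices, and a finite sample is of course not a proof of stability:

    import numpy as np

    nu = 0.9
    A = np.block([[np.array([[-0.8, 0.5], [0.3, -1.2]]), np.array([[0.5, 0.6], [0.7, 0.8]])],
                  [np.array([[0.9, 0.1], [0.2, 0.1]]),   np.array([[-0.7, 0.0], [0.0, -0.2]])]])

    # sample rho_1 in the sector |arg| <= nu*pi/2 and rho_2 outside the unit disk
    r1 = np.geomspace(1e-3, 1e3, 25); a1 = np.linspace(-nu*np.pi/2, nu*np.pi/2, 15)
    r2 = np.geomspace(1.0, 1e3, 25);  a2 = np.linspace(-np.pi, np.pi, 15)
    rho1 = (r1[:, None] * np.exp(1j * a1)).ravel()
    rho2 = (r2[:, None] * np.exp(1j * a2)).ravel()
    p1, p2 = np.meshgrid(rho1, rho2, indexing="ij")

    H = np.zeros(p1.shape + (4, 4), dtype=complex)   # H(rho) = rho_1 I_2 (+) rho_2 I_2
    H[..., 0, 0] = H[..., 1, 1] = p1
    H[..., 2, 2] = H[..., 3, 3] = p2
    print(np.abs(np.linalg.det(H - A)).min())        # should stay well above zero on the grid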
The solution is
U_1 = [ 35333.44 0; 0 35333.44 ], U_2 = [ 36674.54 4958.14; 4958.14 7924.014 ],
V_1 = [ 32747.79 0; 0 14331.51 ], V_2 = [ 51948.27 29479.11; 29479.11 51948.27 ],
R = [ -9111.82 -53.82 -29013.36 -830.36; 1813.13 -12214.89 -13744.63 -6499.66; 26731.07 4233.10 -30626.82 -11869.47; 2051.31 1540.31 2773.74 -8884.96 ].
It means that the continuous-discrete (1+1)D system is stable.

§ CONCLUSION

In this paper, fractional continuous-discrete systems with fractional order 0<ν≤1 have been presented, and their stability and robust stability have been investigated. Invoking the fractional final value theorem, a sufficient condition for the stability of such systems is proved. We then prove a sufficient condition for the robust stability of multidimensional interval systems. Finally, numerical examples are given to verify the theorems.
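A quoted certificate of this kind can also be verified directly. The Python sketch below assembles G and Z from Theorem <ref> for Example 1 and checks the sign of the largest eigenvalue of Z; because the quoted matrices are rounded, the computed eigenvalues may differ slightly from the exact solver output:

    import numpy as np

    nu = 0.5
    A = np.block([[np.diag([-0.8, -1.2]), np.array([[0.5, 0.3], [0.7, 0.2]])],
                  [np.array([[0.4, 0.3], [0.8, 0.9]]), np.diag([-0.3, -0.6])]])
    s, c = np.sin(nu * np.pi / 2), np.cos(nu * np.pi / 2)

    U1, U2 = 146.84 * np.eye(2), np.array([[24.3, 5.99], [5.99, 1.9]])
    V1, V2 = np.array([[4.24, 14.68], [14.68, 194.82]]), 146.84 * np.eye(2)
    J = np.array([[-164.9,  -57.19, -210.42,  -75.7],
                  [-57.19, -108.77, -154.38,  -62.71],
                  [-210.42, -154.38, -643.18, -137.73],
                  [-75.7,   -62.71, -137.73, -116.65]])

    Z2 = np.zeros((2, 2))
    G11 = np.block([[Z2, Z2], [Z2, U2]])       # p_11 = q_11 = 0 (continuous), p_11 = 1 (discrete)
    G12 = np.block([[(s - 1j*c) * U1 + (s + 1j*c) * V1, Z2], [Z2, Z2]])
    G22 = np.block([[Z2, Z2], [Z2, -U2]])      # p_22 = -1 for the discrete block
    G = np.block([[G11, G12], [G12.conj().T, G22]])

    B = np.hstack([np.eye(4), -A])             # [I  -A]
    Z = G + B.conj().T @ J @ B
    print(np.linalg.eigvalsh(Z).max())         # expected < 0 (rounding may perturb slightly)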
http://arxiv.org/abs/1704.08427v1
{ "authors": [ "Xiaogang Zhu", "Junguo Lu" ], "categories": [ "math.OC", "93C05" ], "primary_category": "math.OC", "published": "20170427040556", "title": "Fractional Multidimensional System" }
http://arxiv.org/abs/1704.08702v3
{ "authors": [ "Jan Kolodynski", "Jonatan Bohr Brask", "Marti Perarnau-Llobet", "Bogna Bylicka" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170427180017", "title": "Adding dynamical generators in quantum master equations" }
1Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 [email protected] 2Steward Observatory, The University of Arizona, Tucson, AZ 85719 3Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511 4University of Milan, Department of Physics, via Celoria 16, I-20133 Italy 5University of Vienna, Türkenschanzstrasse 17, 1880 Vienna, AustriaWe present a multiwavelength investigation of a region of a nearby giant molecular cloud that is distinguished by a minimal level of star formation activity.With our new 12(J=2-1) and 13(J=2-1) observations of a remote region within the middle of the California molecular cloud, we aim to investigate the relationship between filaments, cores, and a molecular outflow in a relatively pristine environment.An extinction map of the region from Herschel Space Observatory observations reveals the presence of two 2-pc-long filaments radiating from a high-extinction clump.Using the 13 observations, we show that the filaments have coherent velocity gradients and that their mass-per-unit-lengths may exceed the critical value above which filaments are gravitationally unstable.The region exhibits structure with eight cores, at least one of which is a starless, prestellar core.We identify a low-velocity, low-mass molecular outflow that may be driven by a flat spectrum protostar.The outflow does not appear to be responsible for driving the turbulence in the core with which it is associated, nor does it provide significant support against gravitational collapse. § INTRODUCTIONObservations show that the internal structures of giant molecular clouds (GMCs) are permeated with complex substructure, including networks of filaments on a wide range of spatial scales , molecular outflows, and newly forming stars.In optical, submillimeter, and far-infrared images, filaments are frequently seen radiating from compact groups of young stellar objects (YSOs) and prestellar cores (e.g., Myers 2009; André et al. 2010; Ward-Thompson et al. 2010; André et al. 2014).Millimeter observations have revealed that YSOs, in turn, are frequently associated with molecular outflows, which may drive a significant portion of turbulence in GMCs (e.g.,Bally et al. 1996; Reipurth & Bally 2001; Maury et al. 2009). The ubiquity of such features suggests that outflows and filaments play critical roles in star formation (e.g., Margulis et al. 1988; Maury et al. 2009; Arzoumanian 2011; Hacar et al. 2016).In particular, molecular line studies of nearby filamentary structure demonstrate that prestellar core formation depends on cloud kinematics and chemistry (e.g., Duarte-Cabral et al. 2010; Hacar & Tafalla 2011; Arzoumanian et al. 2013; Kirk et al. 2013).In this paper, we present results on the California molecular cloud (CMC), an excellent laboratory for studying the influence of environment, structure, and cloud evolution on the initial conditions of star formation.At a distance of 450± 23 pc (Lada et al. 2009, Lombardi et al. 2010), the CMC has a similar mass, size, and morphology to the Orion A molecular cloud (OMC).However, the CMC has a much lower level of star formation activity than does the OMC, with roughly an order of magnitude fewer YSOs (Lada et al. 2009; Lada et al. 
2010).The most obvious area of star formation within the CMC is the young embedded cluster associated with the reflection nebula NGC 1579 (Andrews & Wolk 2008), located at the southeastern edge of the cloud.The cluster contains the B star Lk Hα 101, the most massive star known in the CMC.In dust extinction maps, one finds that the majority of the YSOs in the cloud are associated with regions of highest extinction (Lada et al. 2009; Harvey et al. 2013).Most observed YSOs, in fact, are projected along a slender filamentary ridge in the south of the cloud (Lada et al. 2009; Harvey et al. 2013).As one proceeds westward in the CMC, one encounters regions of fewer YSOs, sometimes only single objects, and regions with no apparent star formation activity at all.Thus, not only is the CMC notable for its low level of star formation activity compared to more active GMCs in the Solar vicinity, but within the cloud itself there appears to be a gradient in the star formation rate (SFR) that is worthy of examination.The focus of this study is on a small region in the middle of the CMC, near (l,b)=(1624, -88), for which we present new 12(J=2-1) and 13(J=2-1) observations taken at the Arizona Radio Observatory.These observations were motivated in part by a noteworthy feature previously observed in an extinction map: two filaments radiating from a high-density structure with two embedded massive cores.As we will show, the high-density structure from which the filaments radiate also contains an outflow of molecular gas that is most evident in 12 emission.The filaments themselves are most prominent in dust extinction.Altogether, the structure as observed in extinction resembles an “X,” and we will henceforth refer to it as California-X, or Cal-X.Near the outflow, infrared observations reveal the presence of two bright sources, one of which is a YSO candidate that could be driving the outflow.While a number of studies have considered filaments feeding into clusters (e.g., Myers 2009; Schneider, N. et al. 2012; André et al. 2016), there has been less of a focus on the role of filamentary structure in the formation of individual stars, as we observe in this interesting structure within the CMC.All in all, Cal-X provides an ideal opportunity for investigating incipient star formation in a relatively pristine environment.In particular, our observations enable us to examine the role of filamentary structure and molecular outflows in an early stage of stellar evolution.The main goals of this paper are to characterize the physical properties of the various components of Cal-X—including the filaments, cores, and outflow—and to determine the kinematic relationship of these components with each other and with the YSO.In Section <ref>, we describe the 12(2-1),13(2-1), and far-infrared Herschel Space Observatory observations used for this study.We also describe the properties of the YSO observed in this region.Our methods and results on the physical properties of Cal-X are presented in Section <ref>.We discuss the implications of our results for the evolution of star formation in the CMC in Section <ref>, and we summarize our conclusions in Section <ref>.§ OBSERVATIONS §.§ CO dataObservations of the CMC were made with the Heinrich Hertz Submillimeter Telescope (HHT), a 10 meter submillimeter facility located at an elevation of 3200 m on Mount Graham, Arizona. 
The J = 2-1 lines of ^12 C^16 O (230.538 GHz; hereafter 12) and ^13 C^16 O (220.399 GHz; hereafter 13) were mapped over a 1°.28 × 0°.48 field centered at (l,b) = (162°, -8°.85) using a prototype ALMA Band 6 sideband separating receiver with dual orthogonal linear polarizations. 12 and 13 were simultaneously observed in the upper and lower sidebands, respectively. We use a 128 channel filterbank spectrometer with 0.25 MHz (0.34 km s^-1) wide channels to collect data over a range in V_LSR of -20 to 16 km s^-1 for both lines. The typical single sideband system temperature for these observations is about 200 K. Observations were conducted in November and December 2012 as part of a larger HHT CO mapping survey of the CMC (e.g., Kong et al. 2015). The field was broken into 23 10′ × 10′ tiles on a 3×8 grid. Each tile is generated "on-the-fly" by raster scanning each tile field at a rate of 10″ s^-1 along the galactic longitude with 0.1 second sampling (smoothed with a 0.4 second window in post-processing). Each row in the raster was offset by 10″ in galactic latitude. This sampling gives a native beam size of 34″–36″, depending on the frequency. The data were calibrated using a standard chopper-wheel technique (Kutner & Ulich 1981) to establish a temperature scale in antenna temperature, T_A^*, which was then corrected to a main-beam temperature scale, T_mb, using observations of the standard source W3(OH) (for details, see Bieging & Peters 2011). The individual calibrated polarization maps are combined using the inverse-variance-weighted average and are stitched into a final map with overlapping regions weighted by their variances. The final 12 and 13 maps were convolved to a common beam FWHM size of 38″. The final maps have a per pixel rms noise of 0.17 K/channel.

§.§ Far-infrared data

Far-infrared observations of the CMC were taken with the Herschel Space Observatory in the Auriga-California program (Harvey et al. 2013). The observations were taken in the parallel mode of the PACS and SPIRE instruments (Griffin et al. 2010). Additional details about the observing strategy may be found in Harvey et al. (2013) and André et al. (2010). Following the methods of Lombardi et al. (2014), the Herschel data products were pre-processed with the most recent version of the calibration files, using the Herschel Interactive Processing Environment (HIPE; Ott 2010). A dust extinction map of the CMC was created according to the methods described in Lombardi et al. (2014). To summarize, the dust emission, which is optically thin at the observing frequencies of Herschel (λ=160–500 μm), was modeled as a modified blackbody. All the Herschel data were first convolved to the 36″ beam size of the SPIRE 500 μm map—comparable to the beam size of the CO data—and then the optical depth was determined by fitting the observed spectral energy distribution according to the modified blackbody model.
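A sketch of such a modified-blackbody fit is given below in Python, using synthetic fluxes and an arbitrary reference frequency; the actual pipeline of Lombardi et al. (2014) is considerably more involved:

    import numpy as np
    from scipy.optimize import curve_fit

    H, C, K = 6.626e-27, 2.998e10, 1.381e-16          # cgs: Planck, speed of light, Boltzmann
    NU0 = C / 850e-4                                   # reference frequency (850 um; arbitrary choice)

    def planck(nu, t):
        """Blackbody intensity B_nu(T) in cgs units."""
        return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * t))

    def modified_bb(nu, tau0, t, beta):
        """Optically thin modified blackbody: I_nu = B_nu(T) * tau0 * (nu/nu0)^beta."""
        return planck(nu, t) * tau0 * (nu / NU0) ** beta

    lam_um = np.array([160.0, 250.0, 350.0, 500.0])    # PACS 160 um + SPIRE bands
    nu = C / (lam_um * 1e-4)
    rng = np.random.default_rng(1)
    obs = modified_bb(nu, 1e-4, 15.0, 1.8) * (1 + 0.05 * rng.standard_normal(4))  # fake pixel SED
    (tau0, t_dust, beta), _ = curve_fit(modified_bb, nu, obs, p0=[1e-4, 20.0, 2.0])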
§ METHODS AND RESULTS

§.§ Dust Extinction and CO Intensity Maps

Figure <ref> displays the 12 and 13 intensity maps of the sub-region of the CMC, both integrated over the velocity range -8 to 5 km s^-1. In the bottom panel of the figure is the dust extinction map of the region in units of visual extinction, A_V, derived from Herschel data, which calls attention to the presence of two prominent filamentary structures extending below approximately b = -8°.8. One can see that traces of the filamentary structures are apparent in the 13 map but are scarcely discernible in the 12 map. However, in both the 12 and 13 channel maps, discussed below, the filamentary structures are more pronounced. Nevertheless, dust is the more reliable tracer overall, since 12 tends to be optically thick at high column densities, while locally, 13 can also be optically thick or suffer from fractionation (e.g., Lada et al. 1994). A molecular tracer of higher densities, such as C^18O, is likely to recover more of the filamentary structure than either 12 or 13 (e.g., Figure 5 of Alves et al. 1999). Finally, the dashed box in Figure <ref> draws attention to a notable feature in the north, centered at (l,b) ≈ (162°.45, -8°.7). This feature is prominent in each map and particularly so in the 12 map. In Section <ref>, we show that this feature is an outflow.

In Figures <ref> and <ref>, we show channel maps of the 12 and 13 emission, from -2.5 to 3.5 km s^-1, in bins of 1 km s^-1. In these maps and the following, we now restrict our attention to the X-shaped feature, Cal-X, located at l > 162°.1, containing two filament-like structures in the south and the outflow in the north. In panels (c) and (d) of the 12 channel maps, some of the filamentary structure becomes apparent in the velocity range -0.5 to 1.5 km s^-1. The outflow becomes distinct from the rest of the emission at velocities greater than ∼ -0.5 km s^-1. The 13 channel maps indicate that the two filamentary structures located at b < -8°.8 have most of their emission at distinct velocities. The emission from the west filament, located between ∼ 162°.20 < l < 162°.33, is concentrated within the velocity range of -1.5 to +0.5 km s^-1. For the east filament, located between ∼ 162°.48 < l < 162°.65, most of the emitting material is moving at velocities of -0.5 to +1.5 km s^-1. To obtain a more detailed understanding of the filament kinematics, we analyze their spectra in <ref> below.

In the rest of this paper, our aim is to characterize the physical properties of the Cal-X substructures and determine how they might be related. For convenience, we divide the region in the extinction map into four axes, as shown in Figure <ref>: northwest (NW; positions 1–5), northeast (NE; positions 32–26), southeast (SE-fil; positions 6–18), and southwest (SW; positions 19–31).

§.§ CO Spectra

We characterized the kinematic properties of the substructures by analyzing the 12 and 13 spectra at each location labeled in Figure <ref>. The average 12 and 13 spectra toward the entire region are displayed in the top right corner of Figure <ref>, and the 12 and 13 spectra corresponding to each location along the filaments are displayed in Figure <ref>. Although the 12 spectra appear structured, with typically at least two peaks, the 13 spectra exhibit single peaks and are clearly narrower than the 12 spectra.
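As a concrete illustration of the line fitting employed below, here is a Python/scipy stand-in for the IDL MPFITFUN procedure, applied to a synthetic 13-like spectrum with the survey's 0.34 km s^-1 channels and 0.17 K rms noise; the line parameters are made up for the demonstration:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(v, t_peak, v0, sigma):
        """Gaussian line profile in main-beam temperature."""
        return t_peak * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

    rng = np.random.default_rng(0)
    v = np.arange(-20.0, 16.0, 0.34)                     # LSR velocity axis [km/s]
    spec = gauss(v, 2.0, -0.4, 0.5) + rng.normal(0.0, 0.17, v.size)   # synthetic line + noise
    p0 = [spec.max(), v[np.argmax(spec)], 1.0]           # initial guesses
    (t_peak, v0, sigma), cov = curve_fit(gauss, v, spec, p0=p0)
    errs = np.sqrt(np.diag(cov))                         # 1-sigma parameter uncertainties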
We fitted a Gaussian to each individual 13 spectrum in Figure <ref> using the MPFITFUN least-squares fitting procedure from the Markwardt IDL Library (Markwardt 2009). The three parameters determined from the Gaussian fit at each location—the peak main-beam brightness temperature, the central velocity, v_0, and the velocity dispersion, σ—are summarized in Table <ref>. The systemic 13 and 12 velocities of the entire Cal-X region as derived from the average spectra toward the region are -2.6 and -2.3 km s^-1, respectively. The average 13 and 12 linewidths are 1.9 and 3.5 km s^-1, respectively. We note that the Gaussian fitting of the 13 spectra results in typical uncertainties on the velocity measurements, ± 0.01 km s^-1, that are less than the channel width of the data (0.34 km s^-1).

The results of the Gaussian fitting are presented in Figure <ref>. The colors and sizes of the circles represent the line-of-sight velocity and linewidth, respectively. Dispersions are fairly narrow and uniform along the filaments but broaden at their apex, near position 31. The axes of Cal-X also converge dynamically near position 31 to a velocity of approximately -0.35 km s^-1, and each axis displays the presence of a velocity gradient. We note, in particular, the velocity gradients along the lengths of the filamentary structures, which we denote as SE-fil and SW-fil. These gradients could plausibly be due to accelerating gas flows along the filaments, in particular, either an inflow or an outflow of gas, depending on the relative orientation of the filaments along our line of sight (see <ref>). The NE axis associated with the outflow is slightly blueshifted with respect to the other three substructures. As Figure <ref> shows, all four of the axes of Cal-X are redshifted with respect to the 13 systemic velocity of -2.6 km s^-1 of the entire region, indicated with a vertical dashed line, which is dominated by relatively low-density material. Assuming a gas-to-dust ratio of N_H/A_V = 1.87× 10^21 cm^-2 mag^-1 (Bohlin et al. 1978), the total mass of material in the region having A_V ≥ 1 mag is 2220 M_⊙, while the total mass of material with A_V ≥ 4 mag (the threshold extinction we use to define the extents of the filaments; see Section <ref> below) is 950 M_⊙.

The 12 line profiles shown in Figure <ref> are complex, suggesting multiple substructures that are moving at different velocities along the line-of-sight at many of the positions. The profiles of the rarer 13 isotope are generally simple, single-peak spectra that are well-characterized by single Gaussians. Exceptions include positions 30–34, which indicate the presence of a second component and the possibility that 13 may be optically thick here. A number of the 12 profiles have dips at velocities where there is a peak in 13 emission—for instance, positions 4, 8, 17, 20, 24, 32, and 34—signaling that the 12 emission is optically thick and self-reversed.

In Figure <ref>, we present the parameters derived from the Gaussian fits to the 13 spectra as a function of distance along each axis of Cal-X. For convenience, we calculate the distance from the nexus of the four axes, which we define to be the center of positions 5, 6, 31, and 32, located at (l,b)=(162°.406, -8°.736). We note that, in the Dobashi et al. (2005) dark cloud catalog, this is near the location of cloud 1096 (see Table 7 in Dobashi et al.
2005). In the first panel of Figure <ref>, we also show A_V as a function of distance. The 13 peak brightness temperature, T_13, varies between 1–3 K and 1–2.5 K for the SE and SW filaments, respectively. The ratio of the peak brightness temperatures, T_12/T_13, varies between ∼ 0.5 and 3, substantially smaller than the fractional abundance on Earth of ∼ 89 (Wilson & Matteucci 1992), which demonstrates that the 12(2-1) is optically thick in the line center. The fourth panel in Figure <ref> confirms the presence of a velocity gradient along each of the four axes that we see in Figure <ref>. In particular, the magnitudes of the velocity gradients along the filaments are roughly 0.1 km s^-1 pc^-1 (SE-fil) and 0.2 km s^-1 pc^-1 (SW-fil). The gradients are roughly linear, but contain some departures from linearity that could be due to random motions or, as is the case with the NE axis, the presence of an outflow near the position where the 13 spectrum was taken. The last panel in Figure <ref> shows that the velocity dispersions along the four axes are highest close to the nexus point, near the outflow, and then decrease with distance from the nexus.

§.§ Physical Properties of Filaments and Cores

Dust is the preferred tracer of total gas column density (e.g., Bohlin et al. 1978; Goodman et al. 2009), so to estimate the two filament masses and other related properties, we use the Herschel extinction map. We begin by assuming that the filaments include material having extinction of A_V ≥ 4 mag. We found that setting the extinction boundary at A_V ≥ 3.5 mag includes too much extraneous material beyond the filaments. On the other hand, in setting the boundary at A_V ≥ 4.5 mag, the continuity of the filamentary structure is lost. The extinction map covers much more area than does the 13 map we use to measure the kinematic properties of the filaments. Thus, we define the southernmost tip of SE-fil at (l,b)=(162°.58, -9°.14), below position 18; the southernmost tip of SW-fil is located at (l,b)=(162°.32, -9°.16). We define the northernmost parts of SE-fil and SW-fil at positions 9 and 28, respectively. It is likely that the filaments extend even further northward—i.e., into the high-column-density region containing the NE and NW axes—but beyond this, we have no way of separating them out from the high-column-density material into which they blend.

In Table <ref> we list the sizes, masses, and number densities of the filaments. Since the filaments may extend further north than we defined, our estimated lengths are possibly lower limits. We approximate the widths by taking the average of a number of measurements along the spines of the filaments. Column densities are calculated assuming the standard Galactic dust-to-gas ratio, N_H/A_V = 1.87× 10^21 cm^-2 mag^-1, of Bohlin et al. (1978), where N_H = N(HI) + 2N(H_2) is the total hydrogen column density.
We then calculate the mass using
M = μ m_H ⟨N_H⟩ S,
where μ=1.36 is the helium correction factor, m_H is the hydrogen mass, ⟨N_H⟩ is the mean hydrogen column density, and S is the surface area occupied by material having A_V ≥ 4 mag. The resulting masses of SE-fil and SW-fil are roughly 126 and 154 M_⊙, respectively. Table <ref> also lists the estimated masses per unit length, which we discuss in <ref>. Under the hypothesis that the filaments are funneling gas to the high-density region in the north, we also estimate flow velocities and mass flow rates, Ṁ, also to be discussed in <ref>. All errors in Table <ref> stem from the propagation of uncertainties in the Herschel map and from the uncertainty in the distance to the California GMC, d=450± 23 pc. We also include an uncertainty in the gas-to-dust ratio of 10%.

The dust map in Figure <ref> reveals the presence of several compact, high-extinction subregions embedded throughout Cal-X. We aim to identify cores, keeping two objectives in mind. First, we want to select peaks in the extinction map that correspond to potentially star-forming material. Here we draw on the argument that cloud material above an extinction of A_V ≈ 7.3 mag is most directly related to star formation activity (Lada et al. 2010). This is similar to the threshold for core formation discussed by André et al. (2010). Second, we want to differentiate neighboring peaks from each other and from the background material. With these criteria, we identify six cores, defined as material enclosed within bounded contours of A_V ≥ 7 mag, embedded in SE-fil and SW-fil. Each of these cores contains a local maximum, i.e., an isolated peak in emission at a pixel in the map that is not connected to any other pixels at higher contour levels. Since the cores in the northern region near the outflow are embedded in gas that has a much higher average extinction than the filaments, our criteria for defining their boundaries are somewhat stricter. In this case, we identify two cores, delineating their boundaries at A_V = 12 mag. At lower extinctions, the two cores start to blend together, but at 12 mag, the cores appear as distinct local peaks in the extinction map. Contour maps of the total of eight cores we define in this way are shown in Figure <ref>. The selection criteria are somewhat subjective, but they successfully isolate peaks in the extinction map and distinguish neighboring cores from one another. Cores E and F in SW-fil are an exception, in that they appear blended together into one structure at 7 mag. However, due to the prominent "saddle" in between these two local peaks in extinction, we decide to treat them as distinct objects and measure their properties separately.

We compared our results with those of the core extraction program CLUMPFIND (Williams et al. 1994), an automated routine for analyzing hierarchical structure in observations of molecular clouds. We used the version of the program for two-dimensional data sets. The algorithm searches for peaks in emission located within a user-defined lowest level contour, and then follows the peaks until all the emission above the defined threshold can be assigned to specific clumps (Williams et al. 1994). We applied CLUMPFIND to the Herschel dust map with a range of lowest level contours as inputs, and found that at the 7 and 12 mag levels, the algorithm extracts the same eight cores that we identify above. We estimate the effective radius of each core, R_eff=√(A_c/π), from its projected area, A_c, corresponding to contiguous pixels having extinctions in excess of 7 or 12 mag.
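A minimal Python sketch of this threshold-and-label procedure is given below; scipy.ndimage.label serves as a simple stand-in for CLUMPFIND, the input map, pixel scale, and function names are hypothetical, and the background subtraction described next is omitted:

    import numpy as np
    from scipy import ndimage

    M_H, MU = 1.6735e-24, 1.36         # hydrogen mass [g], helium correction factor
    NH_PER_AV = 1.87e21                 # N_H/A_V [cm^-2 mag^-1] (Bohlin et al. 1978)
    M_SUN = 1.989e33                    # [g]

    def core_masses(av_map, pixel_cm, threshold=7.0):
        """Label contiguous pixels with A_V >= threshold; return their masses [M_sun]."""
        labels, ncores = ndimage.label(av_map >= threshold)   # stand-in for CLUMPFIND
        masses = []
        for i in range(1, ncores + 1):
            n_h = NH_PER_AV * av_map[labels == i]             # hydrogen column per pixel
            masses.append(MU * M_H * n_h.sum() * pixel_cm**2 / M_SUN)  # M = mu m_H <N_H> S
        return masses

    # e.g. at d = 450 pc a 14" pixel subtends ~0.03 pc ~ 9.4e16 cm (illustrative numbers)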
In calculating the core masses, to correct for the fact that they are embedded within regions of high extinction, we perform a background subtraction as follows. For a core having a projected area, A_c, we calculate the mass within identical background areas having an extinction of A_V = 4 mag, M_back. For each core, we select at least two regions in the dust map that are near the core without having any overlapping emission with it or any other cores. We define the average mass of these background areas as M_back. Next, we subtract M_back from the uncorrected mass contained within the core before subtraction, M_uncor. That is, we estimate the core mass as M = M_uncor - M_back. Similarly, in estimating the number density of a core, we subtract a contribution from its mean column density equivalent to A_V = 4 mag. In Table <ref> we list the derived core sizes (ranging from 0.11 to 0.28 pc), the corrected masses (from ∼ 4 to 88 M_⊙), the uncorrected masses (listed in parentheses), and number densities (from ∼ 1.2× 10^4 to 4.4× 10^4 cm^-3).

We note that the choice of contouring level affects the estimated masses, sizes, and other properties of the cores, such as the virial parameter, which we examine in <ref>. We performed additional calculations of the core masses and sizes at A_V = 7± 1 mag and 12± 1 mag to get a sense of how much the estimates differ. The average core mass at 6 mag is ∼ 1.3 times greater than the estimated average at 7 mag, which in turn is ∼ 1.4 times the average at 8 mag. The average core size at 6 mag is 1.6 times larger than the average size at 7 mag, which is 1.3 times the average size at 8 mag. In <ref> we will discuss how the choice of contour level affects estimates of the virial parameter.

§.§ Infrared source properties

The two sources associated with core A (A1 and A2 in Figure <ref>) were previously identified as YSOs by Harvey et al. (2013) and Broekhoven-Fiene et al. (2014). Broekhoven-Fiene et al. (2014) performed SED modeling on Spitzer observations of the CMC to determine the YSO classes and physical properties, which are summarized in Harvey et al. (2013). Sources A1 and A2, corresponding to sources 13 and 12 in Harvey et al. (2013), are located at (l,b)=(162°.47, -8°.68) and (162°.46, -8°.68), respectively. Broekhoven-Fiene et al. (2014) classified source A1 as a class I YSO and determined its bolometric luminosity to be 0.56 L_⊙. The authors classified A2 as a flat spectrum source with a luminosity of 3.40 L_⊙. Source A2 appears prominently in the near-infrared image in Figure <ref>; in the plane of the sky, it is seen to be projected close to the peak of the blueshifted lobe of the outflow. Source A1, too faint to be visible in the image, is projected close to the peak of the redshifted lobe of the outflow.
§.§ Outflow

Inspection of the 12 spectra in the northwest region of Cal-X reveals an excess of emission at high velocities compared to other regions. In particular, the 12 spectra in Figure <ref> corresponding to positions 32–36 have surpluses of blue- and/or redshifted wing emission that suggest a molecular outflow. However, there are other possible causes for this excess emission that must be considered before conclusively identifying the source as an outflow. For instance, the emission could be due to small background or foreground clouds along the line of sight, or it may arise from gas clumps within the CMC that are moving at speeds markedly different from the average cloud velocity.

Because the presence of excess wing emission is not by itself sufficient to identify an outflow, we follow Margulis et al. (1988) in adopting more selective criteria, which are aided by the contour map and spectrum displayed in Figure <ref>. First, our candidate outflow exhibits bipolarity. Second, the shape and profile of its average 12 spectrum departs significantly from the average spectrum toward Cal-X as a whole, in which most subregions do not display excess emission. Quantitatively, the average 12 linewidth of Cal-X is 3.5 km s^-1, whereas the average spectrum in the vicinity of the outflow exhibits velocities much greater than (3.5 km s^-1)/2 = 1.75 km s^-1 from the line center. Third, two YSOs are close to the center of the bipolar outflow candidate and may be its driving source. These three criteria are summarized in Figure <ref>, which shows a near-infrared image of the field containing the sources, created from JHK observations taken at the Calar Alto Observatory in Spain. The image, kindly provided by Carlos Román-Zúñiga, is overlaid with a contour map of the blue- and redshifted lobes of the outflow, as well as the average 12 spectrum toward the region in the inset. Only the brighter of the two YSOs (A2 in Figure <ref>) is apparent in the near-infrared image.

To derive the masses of the outflow lobes using the 12 emission, we first assess the 12 optical depth. Analysis of the line ratios in <ref> indicates that the 12 is likely to be optically thick toward the line center. We estimate the average ratio of line intensities, T_12/T_13, toward the outflows as a function of velocity. In the wings, where 13 emission becomes undetectable, we calculate upper limits to the ratio by setting the minimum values of T_13 to the 1-σ rms noise level, 0.1 K. We find that the maximum value of T_12/T_13 toward the outflow is ∼ 7, indicating a moderate opacity for the 12 line. Nevertheless, we estimate the masses of the outflow lobes according to the following method, under the assumption that the 12 line is optically thin. Thus, we are likely to underestimate the optical depth, and therefore the mass. For each velocity channel in the blue- or redshifted lobe, we calculate the 12 optical depth, τ_12, using
τ_12(v) = -ln[ 1 - T_mb(12)/(J(T_ex) - J(T_bg)) ],
where J(T) = (hν/k)/[exp(hν/kT)-1] is the radiation temperature (Rohlfs & Wilson 2004), and T_mb(12), the 12 main-beam temperature, depends on velocity. Again, these calculations assume that the 12 is optically thin. Next, we assume local thermodynamic equilibrium (LTE) conditions to estimate the 12 column density, N_12, in each channel using
N_12 = 2.39 × 10^14 (T_ex + 0.93) × ∫ τ_12(v)/[1 - exp(-11.06/T_ex)] dv,
where T_ex is the excitation temperature, and the velocities over which we integrate are in units of km s^-1 (e.g., Scoville et al. 1986). For the blueshifted (redshifted) lobe, we integrate over the velocity range -10 to -4 km s^-1 (1 to 7 km s^-1).
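The two equations above, together with the abundance conversion introduced immediately below, can be sketched in Python as follows; the wing profile and lobe area are synthetic, illustrative values rather than the actual data:

    import numpy as np

    H, K = 6.626e-27, 1.381e-16        # cgs
    NU = 230.538e9                      # 12CO(2-1) rest frequency [Hz]
    T_BG = 2.725                        # CMB temperature [K]
    M_H2, MU, M_SUN = 3.35e-24, 1.36, 1.989e33
    X_CO = 1e-4                         # assumed [12CO]/[H2] abundance (see text below)

    def j_rad(t):
        """Radiation temperature J(T) = (h nu/k) / (exp(h nu/k T) - 1)."""
        return (H * NU / K) / np.expm1(H * NU / (K * t))

    def lobe_mass(v, t_mb, t_ex, area_cm2):
        """Optically thin LTE 12CO mass [M_sun] of one lobe, following the equations above."""
        tau = -np.log(1.0 - t_mb / (j_rad(t_ex) - j_rad(T_BG)))
        n_co = 2.39e14 * (t_ex + 0.93) * np.trapz(tau / (1.0 - np.exp(-11.06 / t_ex)), v)
        return MU * M_H2 * n_co * area_cm2 / X_CO / M_SUN

    v = np.arange(1.0, 7.0, 0.34)                          # redshifted wing, +1 to +7 km/s
    t_mb = 0.8 * np.exp(-0.5 * ((v - 2.0) / 1.5) ** 2)     # illustrative wing profile [K]
    print(lobe_mass(v, t_mb, t_ex=10.0, area_cm2=(0.2 * 3.086e18) ** 2))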
Finally, we assume an abundance ratio of [12]/[H_2] = 10^-4 to calculate the total mass per channel,
M_flow = μ m_H_2 N_12 A_flow ([12]/[H_2])^-1,
where A_flow is the area of the outflow, m_H_2 is the mass of an H_2 molecule, and μ=1.36 is the helium correction factor. The resulting masses, calculated for three different values of the unknown excitation temperature (T_ex = 10, 20, 30 K), are listed in Table <ref>. For T_ex = 10 K, the masses for the blue- and redshifted lobes of the outflow are ∼ 0.08 and 0.07 M_⊙, respectively.

Because the 12 may not be optically thin, the outflow masses we determine above are likely to be lower limits. It would be preferable to use 13 to determine the masses, since the molecule is more likely to be optically thin. However, while we did not detect 13 toward the outflow, we noted earlier that the maximum value of T_12/T_13 toward the outflow is ∼ 7. Thus, if we assume that T_13 = T_12/7 everywhere in the outflow, we can use analogous forms of equations (<ref>) and (<ref>) to calculate τ_13(v) and N_13 and determine upper limits to the masses. Estimated in this way, replacing N_12 with N_13 in equation (<ref>), and assuming an abundance ratio of [13]/[H_2] = 2× 10^-6 (e.g., Dickman 1978; Frerking et al. 1982), the masses of the outflow lobes are up to ∼ 7 times higher than the estimates based on the assumption of optically thin 12.

§ DISCUSSION

§.§ Filaments and cores

The continuous velocity gradients observed along the two Cal-X filaments in Figure <ref> might be explained by one of a number of different processes. The main possibilities are gas inflow (collapse), outflow (expansion), or rotation. Differential rotation seems unlikely, since it would rip apart each filament on the order of a crossing time. Taking an average 3D velocity dispersion of a filament, σ_3D = √(3)σ ≈ √(3)(0.49 km s^-1) = 0.84 km s^-1, and a width of ∼ 0.3 pc, we get a crossing time of t_cross ≈ 0.36 Myr. If SE-fil and SW-fil are at least as old as the YSOs in the region, ∼ 1–3 Myr, they are probably gravitationally bound or pressure confined.
Since there is no expanding source, such as an HII region or molecular outflow, at the nexus of the two filaments, the outflow scenario also seems unlikely. The molecular outflow associated with the YSOs is located northward of the filaments and does not appear to be spatially or dynamically related to them. We therefore suggest that the smooth velocity structure is best explained by a global infall of gas toward the center of the system. Considering that the dominant gravitational potential well in Cal-X is the high-extinction clump containing the massive cores A and B, gravity could readily explain the velocity gradients, if all the gas is falling toward that region. In this scenario, the redshifted filament SE-fil is in the foreground, relative to the center of the system, while the blueshifted SW-fil is in the background.

To further support the claim of infall, we can use conservation of energy arguments to check whether the observed velocities along the filaments are consistent with gravitational infall toward the massive clump. A test mass starting from rest at infinity and falling toward a clump of mass M and radius R will reach a speed of v = (2GM/R)^1/2 by the time it arrives at the surface of the clump. In convenient units, v = 0.93 km s^-1 (M/100 M_⊙)^1/2 (R/pc)^-1/2. We calculate this expected v for a range of total clump masses and radii above a certain A_V threshold. For 4 ≤ A_V ≤ 13 mag, 9342 ≳ M ≳ 963 M_⊙, 5 ≳ R ≳ 1 pc, and 4 ≳ v ≳ 3 km s^-1. Thus, the hypothesis of gravitational infall is plausible since these velocities are much larger than the observed velocities along the filaments (see Figure <ref>). It is likely that a confluence of other physical effects—including external pressures on the filaments, turbulence, and the fact that the filaments are not point masses but extended structures—are working together to oppose gravitational attraction toward the clump, thus slowing down the infall speeds.

The estimates of the mass per unit length of SE-fil and SW-fil are, respectively, M/L = 54.6± 6.8 and 65.0± 8.4 M_⊙ pc^-1. André et al. (2010) suggested that cores can form from filaments that have attained the thermal critical mass per unit length, (M/L)_crit = 2c_s^2/G (Ostriker 1964), beyond which filaments are gravitationally unstable. For a ∼ 7 K isothermal filament,[The typical observed 12 temperature along the two filaments is T_obs ≈ 3 K. The corrected temperature from the Rayleigh-Jeans approximation, T_cor ≈ 7 K, is derived by solving T_obs = [J_ν(T_cor) - J_ν(T_cmb)](1 - e^-τ_ν) for T_cor, assuming τ_ν ≫ 1. J_ν(T) = (hν/k)/(e^hν/kT - 1), and T_cmb = 2.725 K is the cosmic microwave background temperature.] the corresponding sound speed is c_s = √(kT/μ m_H_2) = 0.15 km s^-1, and (M/L)_crit = 10.4 M_⊙ pc^-1. The observed values of SE-fil and SW-fil are higher by a factor of ∼ 5–6, suggesting that they are thermally supercritical. However, magnetic fields may be a source of support against gravitational collapse (Fiege & Pudritz 2000; Beuther et al. 2015; Kirk et al. 2015). In addition, turbulent motions may help stabilize the filaments against gravity. Following Peretto et al.
(2014), we consider an effective critical mass per unit length that depends on the average velocity dispersion along the filaments, (M/L)_eff = 2σ̅^2/G. Taking σ̅ = 0.4 km s^-1 (see Table <ref>), we calculate (M/L)_eff = 74 M_⊙ pc^-1. Since the ratio of the observed to the effective M/L is ∼ 0.7–0.9, this suggests that the filaments are subcritical or, at best, marginally critical. This is equivalent to saying that the filaments are supported by a gas with an effective temperature defined by the velocity dispersion: μ m_H_2 σ̅^2/k = 53 K.

As it turns out, nevertheless, the extinction map of Cal-X reveals the presence of a number of overdense structures. In <ref>, we identified eight cores embedded in Cal-X, six of which are embedded within the filaments SE-fil and SW-fil. In theory, a core is considered to be prestellar if it is starless and self-gravitating (André et al. 2000; Di Francesco et al. 2007; Könvyes et al. 2015). Such prestellar cores are expected to develop into protostars. With the exception of core A, the cores we identified are starless. To assess whether they are also self-gravitating, we estimate the virial parameter, α, for each core according to (Bertoldi & McKee 1992)
α_vir ≡ M_vir/M = 5σ^2 R/GM,
where M_vir and M are the virial and observed masses of the core, respectively. An object is self-gravitating if α_vir ≤ 2 (Bertoldi & McKee 1992). In this way, using the corrected masses of the cores, we identify core B, with α_vir ≈ 1.5, as the only self-gravitating core. The other cores have virial parameters in the range 3–7 and are therefore likely to be pressure confined. We note that if α_vir is calculated using the uncorrected masses (i.e., not background-subtracted), we find that cores A and D are also marginally self-gravitating, with α_vir ≈ 1.9 for both. The virial analysis may not be valid for core A, however, since it may be affected by the outflow. As discussed in <ref>, the choice of contour level affects the estimated mass of cores. However, changing the extinction threshold (of 7 and 12 mag) by ± 1 mag results in only slight changes to the typical core mass and size, by factors less than 2. We made additional estimates of the virial parameter using these alternate values of the core mass and size and found that our results are robust.

In their study of a filamentary system composed of three Spitzer dark clouds, Peretto et al. (2014) similarly conclude that the velocity gradients along the filaments are due to infalling gas toward the center of the system, where two massive cores are situated. Peretto et al. (2014) decompose this system, called SDC13, into four filaments, with lengths ranging from 1.6 to 2.6 pc, and widths ranging from 0.2 to 0.3 pc, similar to the Cal-X filaments. The SDC13 filaments, however, are more massive (∼ 195–256 M_⊙) than SE-fil and SW-fil, and their masses per unit length are likely critical or supercritical. Three of the SDC13 filaments are embedded with four to five cores each, whereas the Cal-X filaments have two and four cores. Perhaps the greater number of cores in the SDC13 filaments is a consequence of their masses per unit length being closer to supercritical.

§.§ Outflow energetics

Protostellar outflows, particularly in cluster-forming groups, are thought to play a critical role in driving molecular cloud turbulence (Bally et al. 1996; Reipurth & Bally 2001; Knee & Sandell 2000; Maury et al.
2009). How significant is the impact on its surroundings of one outflow driven by a single protostar? In order to investigate the outflow energetics and determine its possible relationship to turbulence in Cal-X, we estimate the outflow momentum flux and mechanical luminosity and compare them to the relevant quantities determined for core A, with which the outflow is spatially associated.

To begin, we use the 12 mass estimates of the outflow provided in Table <ref> to calculate its momentum and kinetic energy,
P_flow = M_flow v_char, KE_flow = 1/2 M_flow v_char^2.
We determine v_char, the characteristic velocity of an outflow lobe, from the intensity-weighted mean absolute velocity summed over the appropriate velocity range,
v_char = v_obs/cos i = 1/(N_pixels cos i) ∑_pixels [ ∑_v T_mb(v) |v-v_0| / ∑_v T_mb(v) ],
where v_obs is the observed velocity and v_0 = -2.6 km s^-1 is the 13 systemic velocity of the entire region. In calculating equation (<ref>), we assume v_char = v_obs. But since we do not know the inclination angle i between the line of sight and the flow axis, our estimates of v_char may be underestimated. For instance, if i ≤ 60^∘, v_char would be underestimated by as much as a factor of two. In this way, we calculate the characteristic velocities of the blueshifted and redshifted outflows to be 3.4 and 5.8 km s^-1, respectively. These values, as well as P_flow and KE_flow, are summarized in Table <ref>.

The total radially directed momentum of the outflow is 0.67 M_⊙ km s^-1, implying that by the time the outflow mass has reached a velocity of 1 km s^-1 from the line center, it will have swept up 0.67 M_⊙ of material. This is only about ∼ 0.8% of the mass of core A, suggesting that roughly 130 similar generations of outflows would be necessary to support core A against gravitational collapse. How likely is this scenario? Core A appears to be marginally self-gravitating. Based on its 13 velocity dispersion and mass, we estimate α_vir ≈ 2.4 (or, using the uncorrected mass, α_vir ≈ 1.9). To determine whether the force exerted by the outflow on the core can hinder its collapse, we estimate the momentum flux (or "force") of the outflow and compare this to the force required to keep the core in hydrostatic equilibrium. To determine the momentum flux, F_mom = P_flow/t_dyn, we estimate the dynamic timescale of the two outflow lobes as t_dyn = l_flow/v_char, where l_flow is the length of a given outflow lobe. We do not consider the inclination i of the outflow, but we note that t_dyn depends on i via both v_char and l_flow = l/sin i, where l is the observed projected length on the sky. The projected lengths of the blue- and redshifted lobes are roughly 0.73 and 0.30 pc, respectively, corresponding to t_dyn ≈ 0.2 and 0.05 Myr. The resulting momentum fluxes of the blue- and redshifted lobes are 1.4 and 7.8 M_⊙ km s^-1 Myr^-1 (see Table <ref>). Based on the upper limit estimate of the outflow mass from 13 observations (see <ref>), the total momentum flux of the outflow may be up to ∼ 7 times higher.

Following Maury et al. (2009), the combined force necessary to balance gravity at radius R in core A is
F_grav = 4π R^2 P_grav = GM^2/(2R^2),
where P_grav = GM^2/(8π R^4) is the hydrostatic pressure for a spherical shell of radius R encompassing a mass M = 2σ^2 R/G.
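Plugging in numbers is simple unit bookkeeping; a Python sketch follows, taking the lobe masses, characteristic velocities, and projected lengths quoted above, and using core A's tabulated mass of 84 M_⊙ directly for M rather than evaluating 2σ^2 R/G:

    import numpy as np

    KM_S, PC, MYR = 1e5, 3.086e18, 3.156e13          # cgs conversions
    M_SUN, G = 1.989e33, 6.674e-8

    def momentum_flux(m_msun, v_kms, length_pc):
        """F_mom = P/t_dyn with t_dyn = l_flow/v_char, in M_sun km/s per Myr."""
        t_dyn_myr = (length_pc * PC) / (v_kms * KM_S) / MYR
        return m_msun * v_kms / t_dyn_myr

    def f_grav(m_msun, r_pc):
        """F_grav = G M^2 / (2 R^2), in M_sun km/s per Myr."""
        f = G * (m_msun * M_SUN) ** 2 / (2.0 * (r_pc * PC) ** 2)   # dyn
        return f / (M_SUN * KM_S / MYR)

    f_out = momentum_flux(0.08, 3.4, 0.73) + momentum_flux(0.07, 5.8, 0.30)
    print(f_out, f_grav(84.0, 0.29))   # ~9 vs ~185: the outflow force is ~20x below F_grav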
Thus, F_grav ≈ 187 M_⊙ km s^-1 Myr^-1. The total momentum flux of the outflow is a factor of ∼ 20 smaller than F_grav, suggesting that the outflow is too weak to provide significant support against collapse in core A. Even if the outflow mass were at the estimated upper limit, the total momentum flux would still be less than F_grav by a factor of ∼ 3.

To assess the possible influence of the outflow on turbulence in core A, we approximate the mechanical luminosity of the outflow, that is, the amount of kinetic energy that it delivers to the surrounding ISM, using L_mech = KE_flow/t_dyn. Not taking into account inclination, the total mechanical luminosity of the outflow lobes is ∼ 4× 10^-3 L_⊙. The upper limit is ∼ 28× 10^-3 L_⊙. Compared to the 68 molecular outflows compiled by Lada (1985), who found that L_mech scales with the luminosity of the driving sources, the outflow and YSOs in Cal-X have very low luminosities. We compare this value of L_mech to the rate of turbulent energy dissipation in core A. Following Mac Low (1999) and Maury et al. (2009), we use the one-dimensional 13 velocity dispersion determined for core A, σ ≈ 1.8 km s^-1, to estimate the turbulent energy dissipation rate,
L_turb = (Mσ^2/2)/(R/σ),
where M = 84 M_⊙ and R = 0.29 pc are the mass and size of the core. We find that L_turb ≈ 0.14 L_⊙, about a factor of 5–35 times greater than the mechanical luminosity of the outflow. These are lower limits, since much of the mechanical luminosity will be radiated away in shocks. Thus, it appears that the outflow is not a dominant contributor to the observed turbulence in the region.

§.§ Origin of Cal-X

The two most massive cores, A and B, located northward of the filaments, have a combined mass of ∼ 143 M_⊙. In <ref>, we estimated the mass inflow rates of SE-fil and SW-fil to be Ṁ ≈ 10.7 and 38.1 M_⊙ Myr^-1, respectively. Supposing that the filaments have been funneling gas to the massive cores at a constant rate, and assuming they are the main suppliers of gas to the cores (as opposed to accretion, for example), it would have taken roughly 2.9 Myr for the cores to acquire their current total mass. This is comparable to the median age of 2± 1 Myr of YSOs observed in GMCs (e.g., Covey et al.
2010). It is not obvious, however, whether the filaments formed before or after the sub-region embedded with the massive cores, or if the regions are coeval. If the filaments existed first, then perhaps the present mass of the cores is primarily due to the infall of gas from the filaments. This scenario seems unlikely, though, given that all of the cores within SE-fil and SW-fil are starless and only one is self-gravitating, which suggests that the filaments could not be older than the region containing the protostars. Thus, we suggest that the filaments were born together with or after the massive region to the north, which may have attained the bulk of its present mass by some mechanism that predated the existence of the filaments.

In the filaments, velocity gradients result from an overall inflow of mass, and if the cores are moving at the same infall velocity as the rest of the material in the filaments, then the filaments may be "raining cores" into the high-mass clump in the north. In this scenario, based on their observed projected distances from the northernmost ends of the filaments (defined in <ref>), the cores in SE-fil and SW-fil may take ∼ 2–5 Myr and ∼ 1–3 Myr, respectively, to fall into the high-mass clump. The free-fall times of these cores, t_ff = √(3π/32Gρ), are roughly 0.4–0.5 Myr. The "actual" timescale for a core to traverse the length of a filament, t_real, depends on the inclination of the filament, as t_real = t_obs cos i/sin i, where t_obs is the timescale estimated from observed filament properties. In order to arrive at timescales as low as the core free-fall times of 0.4–0.5 Myr, the filaments would have to be inclined by at least ∼ 70–75^∘. Thus, unless the filaments are highly inclined with respect to our line of sight, or unless the collapse of these cores is being slowed by some mechanism such as turbulent support, one might expect the cores to begin forming stars before (and if) they ultimately "rain" into the high-mass clump.

§ SUMMARY AND CONCLUSIONS

In this paper, we have presented 12(J=2-1) and 13(J=2-1) observations of a region of the California GMC with a low level of star formation activity. Mapped in extinction, this ∼ 0°.4 × 0°.6 region (∼ 3.1 pc × 6.3 pc at the distance to the CMC) exhibits two prominent filaments, each just over 2 pc in length, radiating from a high-column-density feature embedded with an outflow driven by one or two isolated YSOs. Because of its resemblance to an "X" in the extinction map, we dub this region Cal-X. Our main results are summarized as follows.

* We examine the 13 spectra along the filaments and find that they possess velocity gradients along their lengths, with magnitudes of about 0.1 and 0.2 km s^-1 pc^-1 for the southeast and southwest filaments, respectively. The masses per unit length of SE-fil and SW-fil are, respectively, M/L = 55 and 65 M_⊙ pc^-1. This exceeds the critical thermal mass per unit length, ∼ 10 M_⊙ pc^-1 (assuming a sound speed of 0.15 km s^-1), above which filaments become gravitationally unstable (Ostriker 1964). However, if we consider that non-thermal motions may support the filaments against gravity, the effective critical mass per unit length defined by the velocity dispersion of the filaments yields (M/L)_eff = 74 M_⊙ pc^-1, suggesting that the filaments are subcritical or marginally critical.
* Based on the observed coherent velocity structure of the filaments and their spatial proximity to the large gravitational potential well to their north, we suggest that the velocity gradients are best explained by a northward infall of gas.The combined mass inflow rate of the filaments is ∼ 49  ^-1. * We identify eight cores embedded throughout Cal-X, six of which are associated with the filaments.The two largest, most massive cores are located in the high-extinction feature north of the filaments; one is a starless, prestellar core; the other is associated with two bright infrared sources.The background-subtracted core masses range from ∼ 3.5–84 , and 7 out of 8 of them have virial parameters α_ vir=3–7, implying that they are confined by pressure, not gravity. * From the 12 data, we identify a low-velocity, low-mass outflow in the north of Cal-X.The outflow is spatially coincident with the highest mass core (84 ), and it is associated with the two infrared-bright objects that may be its driving source. The blue- and redshifted lobes of the outflow have projected characteristic velocities of 3.4 and 5.7 , momentum fluxes of 1.4 and 7.8   Myr^-1, and mechanical luminosities of 0.4× 10^-3 and 3.6× 10^-3 L_⊙.The outflow appears to be too weak to contribute significantly to the turbulence in the core or to prevent the core from undergoing gravitational collapse. * Based on the available observations, we propose that the Cal-X filaments were born at the same time or after the massive region to their north, which may have attained the bulk of its present mass by some mechanism that predated the existence of the filaments. Therefore, given the presence of young YSOs in Cal-X, the filaments are likely to have an age of at least 2± 1 Myr.We thank Carlos Román-Zúñiga for providing us with the near-infrared image in Figure <ref>.We thank an anonymous referee and Phil Myers for helpful suggestions that improved earlier drafts of this paper. The Heinrich Hertz Submillimeter Telescope is operated by the Arizona Radio Observatory, which is part of Steward Observatory at the University of Arizona, and is supported in part by grants from the National Science Foundation. N. Imara is supported by the Harvard-MIT FFL Postdoctoral Fellowship. Alves, J., Lada, C. J., & Lada, E. A. 1999, , 515, 265 André, P., Ward-Thompson, D., Barsony, M., et al. 2000, in Protostars and Planets IV, ed. V. Mannings, A. P. Boss, & S. S. Russell (Tucson, AZ: Univ. Arizona Press), 59 André, P., Men'shchikov, A., Bontemps, S., et al. 2010, A&A, 518, L102 André, P., Di Francesco, J., Ward-Thompson, D., et al. 2014, in Protostars and Planets VI, ed. H. Beuther et al. (Tucson, AZ: Univ. Arizona Press), 27 André, P., Revéret, V., Könyves, V. et al. 2016, A&A, 592, 54Andrews, S. M., & Wolk, S. J. 2008, in ASP Monograph Ser. 5, Handbook of Star Forming Regions, Vol. 2, The Southern Sky, ed. B. Reipurth (San Francisco, CA: ASP), 390 Arzoumanian, D., André, P., Didelon, P. et al. 2011, A&A, 529, L6 Arzoumanian, D., André, P., Peretto, N., & Könvyes, V., 2013, A&A, 553, A119 Bally, J., Devine, D., & Reipurth, B. 1996, , 473, 49 Bertoldi, F., & McKee, C. 1992, , 395, 140 Beuther, H., Ragan, S. E., Johnston, K., et al. 2015, A&A, 584, 67Bieging, J. H., & Peters, W. L., , 196, 18 Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, , 224, 132 Broekhoven-Fiene, H., Matthews, B. C., Harvey, P. M., et al. 2014, , 786, 37 Covey, K. R., Lada, C. J., Román-Zúñiga, C., et al. 2010, , 722, 971 Dickman, R. L. 1978, , 37, 407 Di Francesco, J., Evans, N. 
J., Caselli, P., et al. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. Keil (Tucson, AZ: Univ. Arizona Press), 17 Dobashi, K., Uehara, H., Kandori, R., et al. 2005, PASJ, 57, 1 Duarte-Cabral, A., Fuller, G. A., Peretto, N., et al. 2010, A&A, 519, A27 Fiege, J. D., & Pudritz, R. E. 2000 , 311, 85 Frerking, M. A., Langer, W. D., & Wilson, R. W. 1982 , 262, 590 Goodman, A. A., Pineda, J. E., & Schnee, S. L. 2009, , 692, 91 Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3 Hacar, A., & Tafalla, M. 2011, A&A, 533, A34 Hacar, A., Kainulainen, J., Tafalla, M., Beuther, H., & Alves, J. 2016, A&A, 587, A97 Harvey, P. M., Fallscheer, C., Ginsburg, A., et al. 2013, , 764, 133 Kirk, H., Myers, P. C., Bourke, T. L., et al. 2013, , 766, 115 Kirk, H., Klassen, M., Pudritz, R., & Pillsworth, S. 2015, , 802, 75 Knee, L. B. G., & Sandell, G. 2000, A&A, 361, 671 Kong, S., Lada, C. J. , Lada, E. A., et al. 2015, , 805, 58 Könvyes, V., André, P., Men'shchikov, A., et al. 2015, A&A, 584, 91 Kutner, M. L., & Ulich, B. L. 1981, , 250, 341 Lada, C. J. 1985, ARA&A, 23, 267 Lada, C. J., Lada, E. A., Clemens, D. P., & Bally, J. 1994, , 429, 694 Lada, C. J., Lombardi, M., & Alves, J. F. 2009, , 703, 52 Lada, C. J., Lombardi, M., & Alves, J. F. 2010, , 724, 687 Lombardi, M., Lada, C. J., & Alves, J. F. 2010, A&A, 512, A67 Lombardi, M., Bouy, H., Alves, J., & Lada, C. J. 2014, A&A, 566, 45 Mac Low, M.-M. 1999, , 524, 169 Margulis, M., Lada, C. J., & Snell, R. L. 1988, , 333, 316 Markwardt, C. B. 2009, in ASP Conf. Ser. 411, ed. D. A. Bohlender, D. Durand, & P. Dowler, 251 Maury, A. J., André, P., & Li, Z.-Y. 2009, A&A, 499, 175 Myers, P. C. 2009, , 700, 1609 Ostriker, J. 1964, , 140, 1056 Ott, S. 2010, in ASP Conf. Ser. 434, Astronomical Data Analysis Software and Systems XIX, ed. Y. Mizumoto, K.-I. Morita, & M. Ohishi (San Francisco, CA: ASP), 139 Peretto, N., Fuller, G. A., André, P., et al. 2014, A&A, 561, A83 Reipurth, B., & Bally, J. 2001, ARA&A, 39, 403 Rohlfs, K., & Wilson, T. L. 2004, Tools of Radio Astronomy (Berlin: Springer) Schneider, N., Csengeri, T., Hennemann, M., et al. 2012, A&A, 540, 11 Scoville, N. Z., Sargent, A. I., Sanders, D. B., et al. 1986, , 303, 416 Ward-Thompson, D., Kirk, J. M., André, P., et al. 2010, A&A, 518, 92 Williams, J. P., de Geus, E. J., & Blitz, L. 1994, , 428, 693 Wilson, T. L., & Matteucci, F. 1992, A&ARv, 4, 1
http://arxiv.org/abs/1704.08691v1
{ "authors": [ "Nia Imara", "Charles Lada", "John Lewis", "John H. Bieging", "Shuo Kong", "Marco Lombardi", "Joao Alves" ], "categories": [ "astro-ph.GA", "astro-ph.SR" ], "primary_category": "astro-ph.GA", "published": "20170427180000", "title": "X Marks the Spot: Nexus of Filaments, Cores, and Outflows in a Young Star-Forming Region" }
Felix Büttner ([email protected]), Ivan Lemesh, and Geoffrey S. D. Beach
Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

Full phase diagram of isolated skyrmions in a ferromagnet

Magnetic skyrmions are topological quasiparticles of great interest for data storage applications because of their small size, high stability, and ease of manipulation via electric current. Theoretically, however, skyrmions are poorly understood since existing theories are not applicable to small skyrmion sizes and finite material thicknesses. Here, we present a complete theoretical framework to determine the energy of any skyrmion in any material, assuming only a circularly symmetric 360° domain wall profile and a homogeneous magnetization profile in the out-of-plane direction. Our model precisely agrees with existing experimental data and micromagnetic simulations. Surprisingly, we can prove that there is no topological protection of skyrmions. We discover and confirm new phases, such as bi-stability, a phenomenon unknown in magnetism so far. The outstanding computational performance and precision of our model allow us to obtain the complete phase diagram of static skyrmions and to tackle the inverse problem of finding materials corresponding to given skyrmion properties, a milestone of skyrmion engineering.

Magnetic skyrmions are spin configurations with spherical topology <cit.>, typically manifesting as circular domains with defect-free domain walls (DWs) in systems with otherwise uniform out-of-plane magnetization. Skyrmions are the smallest non-trivial structures in magnetism and they behave like particles <cit.>, which makes them of fundamental interest and of practical utility for high-density data storage applications <cit.>. Skyrmions have been investigated for many decades <cit.>, but only recently has attention shifted to the detailed DW structure and the topology it determines. Two factors have driven this trend: technological advances allowing for direct imaging of the spin structure <cit.> and the discovery that the Dzyaloshinskii-Moriya interaction (DMI) can be used to stabilize that structure. In particular, DMI can lead to a skyrmion global ground state above the Curie temperature (T_c) in a Ginzburg-Landau theory of a ferromagnet <cit.>. While stray fields and external fields are not included in that model, they can further stabilize skyrmions up to room temperature (below T_c), as observed in thin film multilayers <cit.>. While homochiral skyrmions were first observed in bulk materials with broken inversion symmetry, multilayer stacks, such as Pt/Co/Ir <cit.>, Ta/CoFeB/TaO_x <cit.>, Pt/Co/Ta <cit.>, and Pt/CoFeB/MgO <cit.> with arbitrary repetitions of these layers, have seen increasing popularity recently. In such structures, inversion symmetry breaking and spin-orbit coupling at the ferromagnet/heavy-metal interface can lead to a strong DMI that promotes magnetic skyrmions with well-defined chirality.
Multilayers are particularly attractive since the relevant energy terms (e.g., anisotropy, DMI, magnetostatic) can be engineered by controlling the interfaces and the volume fraction of magnetic material in the stack. However, with this flexibility comes a considerable engineering challenge in that the parameter space for engineering is overwhelmingly large: It has six dimensions (five material parameters plus external magnetic field) and, thanks to the interfacial origin of many magnetic properties and to the effective medium scaling of these properties with the thickness of non-magnetic spacer layers <cit.>, all of these dimensions can be tuned individually. Blind experimental investigation of this parameter space is hence prohibitive. Existing theories are of limited help, since they either involve crude approximations that fail to reproduce the wealth of skyrmion states even qualitatively <cit.> or contain unsolved complicated integrals and differential equations, which renders the evaluation of the theory extremely computationally expensive and slow (for instance, micromagnetic simulations and Refs. <cit.>). In particular, most theories ignore the non-local nature of stray field interactions <cit.>, which is responsible for many interesting features in the intermediate film thickness regime. A single coherent theory that quickly and accurately predicts the existence and the properties of isolated skyrmions for any given point in the six-dimensional parameter space remains elusive. Here, we provide a theoretical model for all energy terms of an isolated skyrmion in a given material with infinite in-plane extent. The model is fully analytical and accurate to 1% in the entire parameter space. Thanks to its analytical nature, we can find energy minima extremely quickly by searching for roots of the partial derivatives. The resulting equilibrium states show excellent agreement with simulations and experiments. Using our theory, we find exotic new states, such as multi-stabilities (co-existence of skyrmions with different properties in the same sample under the same conditions), zero stiffness skyrmions, and zero field skyrmions. These new states can have many novel applications, some of which we suggest here. We obtained the minimum energy skyrmion states for millions of material parameter and field combinations in less than a week, yielding the full phase diagram of skyrmion states and demonstrating that our model is suitable to solve the inverse problem of engineering skyrmion properties through material selection. Our theory takes as input the uniaxial anisotropy constant K_u, saturation magnetization M_s, exchange constant A, interface and bulk DMI strengths D_i and D_b, magnetic layer thickness d, and applied out-of-plane field H_z. Given these parameters, we derive the energy function that determines the spin structure of skyrmions in any material. In general, stable skyrmions correspond to sufficiently deep minima of the energy functional E[𝐦] of all possible spin structures 𝐦(𝐫), where 𝐦 is the unit magnetization vector and 𝐫 is the position vector. In practice, minimizing the energy functional in its raw form, as done in micromagnetic simulations, is prohibitively slow for systematic skyrmion engineering. Our simple and efficient analytical model is enabled by the recent experimental confirmation <cit.> of an analytic and universal 360° DW model <cit.> for the spin structure 𝐦(𝐫) of all skyrmions.
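Since everything below rests on this ansatz, it helps to see the profile explicitly. The sketch below evaluates m_z(r) for one common arcsin/tanh parametrization of the 360° DW profile; the exact sign and scaling conventions used in the paper are in its supplement, so this particular form is an assumption. It contrasts a bubble-like (ρ = R/Δ ≫ 1) with a vortex-like (ρ ≈ 1) profile.

```python
import numpy as np

def m_z(r, R, Delta):
    # 360-degree DW ansatz (one common convention; an assumed form here):
    # theta(r) = arcsin(tanh((r-R)/Delta)) + arcsin(tanh((r+R)/Delta)),
    # so theta(0) = 0 at the core, theta -> pi far outside, and m_z = cos(theta).
    theta = (np.arcsin(np.tanh((r - R) / Delta))
             + np.arcsin(np.tanh((r + R) / Delta)))
    return np.cos(theta)

r = np.linspace(0.0, 60.0, 601)        # radial coordinate in nm
bubble = m_z(r, R=40.0, Delta=4.0)     # R/Delta = 10: extended domain, narrow wall
vortex = m_z(r, R=4.0, Delta=4.0)      # R/Delta = 1: point-like core

for name, prof in [("bubble", bubble), ("vortex-like", vortex)]:
    r0 = r[np.argmin(np.abs(prof))]    # m_z = 0 crossing ~ skyrmion radius
    print(f"{name}: m_z crosses zero near r = {r0:.1f} nm")
```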
The full analytic model for the total energy function is provided in the supplemental information, along with a detailed discussion of how to solve the integrals of the individual interactions. Here, we focus on its implications.

§ EFFECTIVE ENERGY CONTRIBUTIONS

The skyrmion spin structure (Fig. <ref>a) can be accurately described by four parameters: radius R, DW width Δ, DW angle ψ, and topological charge N. R and Δ are independent parameters that determine the magnetization profile m_z(x,y), whereas ψ specifies whether the in-plane component of the DW spins is radial (Néel, ψ=0,π), azimuthal (Bloch, ψ=π/2,3π/2), or intermediate (transient). For large ρ=R/Δ, skyrmions consist of an extended out-of-plane magnetized domain bounded by a narrow circular DW, while for ρ∼1 the inner domain is reduced to a point-like core resembling a magnetic vortex. We refer to these limiting cases as bubble skyrmions and vortex skyrmions, respectively, consistent with the literature, but note that many skyrmions observed recently <cit.> showed intermediate values of ρ and cannot be classified distinctly. The case of bubble skyrmions is readily treated analytically through the so-called wall-energy model, which has been known for a long time <cit.>. In this limit, the skyrmion energy simplifies to E=2π d σ_DW R+aR-bRln(R/d)+cR^2, with 2π d σ_DW R being the DW energy (σ_DW is the energy density of an isolated DW), aR-bRln(R/d) the Zeeman-like surface stray field energy, and cR^2 the Zeeman energy. Here a,b,c are material-dependent parameters that include corrections to the original model <cit.> to account for DMI, volume charges, and finite Δ (see supplemental information). The crucial assumption of Eq. (<ref>) is that σ_DW, Δ, and ψ do not depend on R, so that Δ and ψ can be obtained by minimizing σ_DW, while R is the minimum of E at constant σ_DW. The simplicity of this wall-energy model is the reason for its popularity <cit.>. However, our accurate model shows that Δ can change with R even up to unexpectedly large radii of R=100Δ. Indeed, we find that the wall-energy model fails quantitatively and qualitatively for almost all skyrmions that are of interest today. Our model fully accounts for the correlation between radius, DW width and DW angle, thereby providing a single theoretical framework that accurately describes any skyrmion. Importantly, our model reveals unexpected and qualitatively new exotic behaviors that are precluded by the approximations inherent in prior treatments. Our analytic equations for the total energy function can be numerically minimized with respect to R, Δ, and ψ, for a given set of material parameters and external field H_z, to obtain the equilibrium skyrmion configuration. The predictions of our model agree precisely with micromagnetic simulations and with the experimental data of Romming et al. <cit.>, see Fig. <ref>b. Note that fields are negative in our convention, antiparallel to the skyrmion core. Heuristically, we find that the function R(H_z)≈a_1/(-H_z)^a_2+a_3 fits the field dependence of the equilibrium radius for a wide range of parameters. Our model yields the energy of a given skyrmion configuration in less than a millisecond on a regular personal computer, thereby providing a dramatic improvement over micromagnetic simulations in terms of computation speed.
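To make the minimization concrete, the following minimal sketch finds the equilibrium radius of the simpler wall-energy model of Eq. (<ref>); the full model additionally minimizes over Δ and ψ. All coefficient values are hypothetical placeholders, not the expressions from the supplemental information.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical inputs (placeholders, not the supplemental expressions):
d        = 1e-9      # magnetic layer thickness [m]
sigma_dw = 2e-3      # isolated-DW energy density sigma_DW [J/m^2]
a        = -1.5e-11  # surface stray field coefficient [J/m]
b        = 4e-12     # coefficient of the logarithmic term [J/m]
c        = 1e-4      # Zeeman coefficient [J/m^2] (H_z antiparallel to the core)

def E_wall_model(R):
    # Eq. (1): E(R) = 2*pi*d*sigma_DW*R + a*R - b*R*ln(R/d) + c*R^2
    return 2*np.pi*d*sigma_dw*R + a*R - b*R*np.log(R/d) + c*R**2

# Equilibrium radius: the minimum of E(R) inside a physically sensible bracket.
res = minimize_scalar(E_wall_model, bounds=(1.1*d, 5e-6), method="bounded")
print(f"R_eq ~ {res.x*1e9:.0f} nm, E(R_eq) = {res.fun:.2e} J")
```

Sweeping such a minimization over H_z and fitting the result with a_1/(-H_z)^a_2+a_3, e.g. via scipy.optimize.curve_fit, reproduces the heuristic field dependence quoted above.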
Moreover, it gives access to information that cannot be obtained by simulations. For example, by virtue of taking partial derivatives, the model allows one to minimize Δ and ψ for any non-equilibrium skyrmion radius and, therefore, to obtain the energy as a function of radius E(R). Micromagnetic simulations can only yield the equilibrium R, Δ, ψ. The E(R) curve directly relates to the skyrmion stability (by quantifying the energy barriers) and to its rigidity (by the curvature near the energy minimum). Figure <ref>c shows E(R) calculated for the skyrmion described in Fig. <ref>b at μ_0 H_z=-2T, which exhibits a single minimum corresponding to an isolated skyrmion. The only other stable state is the ferromagnetic ground state at E=0. Despite the different topology of the skyrmion and the ferromagnetic state, there is a continuous path from one to the other, which goes through the singular R=0 state. The singular R=0 state does not have a topology, which is why topological quantization is lifted here. Remarkably, the energy along the path towards R=0 remains finite, in contrast to earlier beliefs <cit.>. At R=0, the skyrmion energy takes a universal value, E(R=0)=E_0=27.3Ad. The zero-radius energy depends only on A and d and not on DMI. By finding a topologically valid and energetically possible path to annihilation, we prove that skyrmions are not protected by their topology, even in continuum models and in the presence of strong DMI. Note that a finite value for the energy barrier has been found before <cit.>, but the role of DMI and the implications for topological stability were not discussed. The skyrmion of Fig. <ref>c exhibits a finite annihilation energy E_a = E_0-E(R) and a nucleation energy barrier E_n = E_0. Note that E_n and E_a overestimate the energy barriers because skyrmions can deform in a way that is not covered by the 360° DW model underlying our calculations and can thereby reduce the nominal energy barrier <cit.>. However, previous studies <cit.> and our own micromagnetic simulations indicate that the reduction of the energy barrier due to deformations is smaller than 2Ad (even though sometimes extremely small cell sizes are required). This is in excellent agreement with the observation of Belavin and Polyakov <cit.> that the minimum energy of a skyrmion state in a model that includes only exchange energy is 8π Ad, which is approximately 2Ad smaller than the zero-radius exchange energy of our 360° DW theory. Including thermal effects, we therefore consider minima of the total energy to be stable in our discussions below if E_a>2Ad+k_BT. All energy contributions to the total energy can be classified into two categories, DW and bulk energies, and inspection of the individual energy terms allows one to identify the mechanism responsible for skyrmion stability. At large radii, DW energies are linear in R, whereas all non-linear terms are bulk energies. Exchange, anisotropy, DMI, and volume stray field energies are DW energies. The Zeeman energy is a bulk energy. Surface stray fields contribute to both categories: The DW contribution leads to an effective reduction of the anisotropy and the bulk contribution effectively reduces the external field. The decomposition of the total energy into DW and bulk terms is depicted in Figs. <ref>d-f.
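The stability criterion just defined is simple enough to evaluate numerically. Below is a minimal sketch for the single-barrier case of Fig. <ref>c, with hypothetical values of A, d, and the minimum energy (e.g. taken from the E(R) sketch above); the 40 k_BT retention test anticipates the application requirement discussed later.

```python
kB = 1.380649e-23   # Boltzmann constant [J/K]
A, d, T = 1e-11, 1e-9, 300.0   # exchange [J/m], thickness [m], temperature [K] (hypothetical)

E0    = 27.3 * A * d   # universal zero-radius energy of the 360-degree DW model
E_min = 1.0e-19        # energy at the skyrmion minimum [J] (placeholder value)

E_a = E0 - E_min       # annihilation barrier (an overestimate, see text)
print(f"E_a = {E_a/(A*d):.1f} Ad")
print("stable:", E_a > 2*A*d + kB*T)               # criterion used in the text
print("40 kBT retention:", E_a - 2*A*d > 40*kB*T)  # storage-grade rule of thumb
```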
Minima in E(R) can exist only if energy terms with positive slope are compensated by terms with negative slope, and the latter can only arise through DMI and surface stray fields. One can therefore classify skyrmions as DMI (stray field) stabilized if the sum of DW (bulk) energies has a negative slope at the equilibrium radius R_eq. Our model hence provides the first mathematical basis for a terminology commonly used in the literature without rigorous justification <cit.>.

§ NEW PHASES

We now apply our model to gain further insight into skyrmion properties and to analyze the phase diagram of static isolated skyrmions. Features of particular interest are highlighted in Fig. <ref>. First, Fig. <ref>a compares the radius of a Bloch skyrmion in a material without DMI to a Néel skyrmion in a high-DMI material (with D_i>D_cψ^SW, where D_cψ^SW is the critical DMI value required to stabilize Néel DWs; see supplementary information). It is widely believed that small skyrmions exist only in materials with large DMI <cit.>. Indeed, our results indicate that skyrmions with the sharpest core (quantitatively, values of ρ<1.3) and skyrmions with sub-nanometer radius can only be found in materials with sufficiently large DMI. However, both skyrmions in Fig. <ref>a have a similar R(H_z) dependence and collapse at a similar radius of less than 10nm. Near collapse, both skyrmions approach vortex-like configurations, which is in agreement with recent experimental observations of vortex-like skyrmions in non-DMI materials <cit.>. With the resolution of most imaging tools available today, the two types can hardly be distinguished by their size, and the value of ψ cannot be deduced from measurements of R. Figures <ref>b,c depict the phase diagram of DMI-stabilized and stray-field-stabilized skyrmions. Surprisingly, E(R) can exhibit multiple minima separated by energy barriers sufficiently high that they are individually stable, leading to a bi-stability (discussed in detail in the supplemental information). Bi-stability exists in a small pocket of the phase diagram, wherein two types of skyrmions can coexist in a material under identical conditions (Figs. <ref>b,c). We find this pocket in the stray-field stabilized part of the phase diagram, where the phase boundary of the instability region has a cusp. The two types of skyrmions in the bi-stability region have very different properties (Fig. <ref>d), confirmed by micromagnetic simulations: Their radii differ by more than one order of magnitude, and their spin structure is Néel-like for the small skyrmion and transient for the large skyrmion. The transient value of ψ for the larger skyrmion originates in a wider domain wall, which increases the importance of the volume stray field energy that favors a Bloch-like spin orientation. The different size and domain wall angle can be used to move the skyrmions in non-collinear directions by spin-orbit torques, as detailed in the supplemental information. The unexpected emergence of multiple minima in E(R) is a consequence of introducing Δ and ψ as free parameters, resulting in Δ(R) being nonlinear and sometimes nonmonotonic. The existence of degenerate isolated domain states is new in the entire field of magnetism. Decomposition of E(R) into DW and bulk energies (Fig. <ref>e) reveals that the origin of this phenomenon is that each term individually exhibits a minimum.
The minimum in E(R) at small radius derives from the minimum in the DW energy terms, shifted towards larger radii by the negative-sloped bulk energies, and vice versa for the large-radius minimum. This observation helps explain why the phase boundaries between stray field stabilized and DMI stabilized skyrmions in Fig. <ref>b are vertical and horizontal (see also supplemental figure S1). The horizontal line marks the critical DMI value above which σ_DW is negative, i.e., where the DW energies have a negative slope everywhere and all minima are DMI stabilized. The vertical line indicates the critical field value above which the applied field fully compensates the Zeeman-like surface stray field, meaning that the bulk energies are always positive with a positive slope beyond that point and, again, all minima are DMI stabilized. In either case, the DW or the bulk energies cease to have a minimum, which finally also explains why we find the bi-stable phase pocket in the stray field stabilized phase. The last peculiar phenomenon we uncover in our analysis is the existence of zero stiffness skyrmions. Figure <ref>f shows E(R) for a system that manifests three energy minima, where the maxima between these minima can easily be overcome by thermal energies at room temperature. In this particular example, the skyrmion radius can thermally fluctuate between 2nm and 11nm, such that it exhibits effectively zero stiffness with respect to variations of radius within this range. We expect that such skyrmions have a very low resonance frequency associated with their breathing mode, which could be exploited in skyrmion resonators <cit.> and should have an impact on their inertia <cit.> and on the skyrmion Hall angle <cit.>.

§ APPLICATIONS

We now consider the design of skyrmions suitable for applications, such as racetrack-type memory devices in which bit sequences are encoded by the presence and absence of skyrmions that can be shifted by electric current <cit.>. Three key attributes for such applications are (i) small bit sizes, (ii) long-term thermal stability, and (iii) skyrmion stability in zero applied field. Indeed, we find a section in the phase diagram that meets all these requirements, as illustrated in Fig. <ref>. As sketched in the inset of Fig. <ref>a, zero field skyrmions are local energy minima bounded by two annihilation energy barriers E_a^0 and E_a^∞ that prevent shrinking to zero size and infinite growth, respectively. By subtracting the internal deformation energy 2Ad from the minimum of E_a^0 and E_a^∞, we obtain the effective annihilation energy barrier E_a^eff that can be used to estimate long-term thermal stability. For properly chosen material parameters, E_a^eff can exceed the 40 k_BT threshold (Fig. <ref>b) required for commercial storage devices. The corresponding radius is <20nm. Note that all energy contributions generally become larger at larger radii. Therefore, fluctuations of any material parameter or of the field affect the energy barrier E_a^∞ much more strongly than E_a^0, see inset of Fig. <ref>a. This is why E_a^eff increases slowly with increasing DMI before dropping very rapidly past its maximum, see inset of Fig. <ref>b. It is therefore advantageous to have a DMI value slightly below the one that maximizes E_a^eff, to ensure that E_a^eff is robust against moderate external fields. Finally, all zero field skyrmions have E>0, consistent with earlier assessments excluding ground-state zero field skyrmions below the Curie temperature <cit.>.
It is reasonably easy to obtain E_a^eff=10Ad, which explains why the largest energy barriers can be obtained by increasing A (or d).

§ THE FULL PHASE DIAGRAM

To demonstrate the power of our model and to understand the effect of material parameters on skyrmion properties, we derived and analyzed the properties of skyrmions as a function of more than one million different combinations of material parameters and magnetic fields, a task that is impossible with existing theoretical tools. Fig. <ref> illustrates some of the most interesting features of the derived full phase diagram. Specifically, the figure analyzes a magnetic multilayer in which the non-magnetic spacer layers (e.g., Pt and Ta) are three times as thick as the magnetic layers. All room-temperature skyrmion systems today are based on such multilayers <cit.>. We employ the effective medium approach <cit.> to treat these multilayers. The total magnetic material thickness in such films is between 1nm and 100nm, the interfacial DMI strength is below 4mJ/m^2, and the anisotropy quality factor Q=2K_u/μ_0M_s^2 is typically between 1 and 2. For this parameter range, we show in the first two rows of Fig. <ref> the radius R and the DW angle ψ of the smallest possible skyrmions, i.e., under the maximum field just before collapse. Below, we present the means of stabilization (DMI or stray fields). Similar diagrams with variable A, M_s, and non-magnetic spacer layer thickness are provided in the supplemental information. The most striking common feature of all panels in Fig. <ref> is a clear phase boundary, i.e., a (thickness-dependent) critical DMI value above which the displayed quantity changes abruptly. For instance, for D_i>D_cr, skyrmions are extremely small (∼1nm) near collapse. At slightly lower DMI, the skyrmion collapse radius can abruptly increase to several micrometers. Note that this D_cr has nothing to do with the DMI value for which the energy of an isolated straight wall becomes negative. The DW angle ψ shows a qualitatively similar but quantitatively unrelated trend: Above a critical DMI value of D_cψ, skyrmions are of Néel type when they collapse. The transition in ψ is not as sharp as for R and, importantly, D_cψ is consistently smaller than D_cr, implying that extremely small skyrmions are always of Néel type. Note also that small skyrmions are more likely to be of Néel type than straight walls in the same material. In other words, the critical DMI value for finding isolated Néel walls, D_cψ^SW, is much larger than D_cψ. The region between D_cψ and D_cψ^SW is where the dependence of ψ on the skyrmion size is most pronounced and bi-stable states are most likely to have different spin orientations. Sub-10nm skyrmions exist in almost the entire phase diagram. Materials with purely DMI stabilized skyrmions exist, but almost exclusively at very small values of Q. Already at Q=1.4, which is a typical value for cobalt-based multilayers <cit.>, purely DMI stabilized skyrmions exist only for DMI values larger than 4mJ/m^2, well beyond experimentally reported values. Hence we conclude that most skyrmions investigated experimentally so far are best described as stray field stabilized. Finally, comparing the additional phase diagrams in the supplemental information, we can qualitatively note that small skyrmions are favored by a low anisotropy, a low M_s, a small exchange constant, a large DMI value, and sizable non-magnetic spacer layers.
Low M_s, low A, and thick spacer layers also lead to more abundant bi-stability regions, but note that here larger Q values are beneficial.

§ CONCLUSIONS AND OUTLOOK

In summary, we presented an analytical model that allows exploration of the entire static phase diagram of isolated magnetic skyrmions via rapid, systematic calculations. We expect many new applications to arise from the exotic states found here, beyond what we already suggested. In principle, our model assumes infinite films, and the behavior in finite-sized elements can be different <cit.>. However, in most cases confinement increases the stability of skyrmions, as long as skyrmions still fit into the element. Therefore, our predictions can be considered conservative and applicable to most nanostructures as well. Also, skyrmions in antiferromagnets <cit.> are covered by our theory by setting M_s to zero. Still, some open challenges remain. For instance, the dynamics of skyrmions, and the effects of in-plane fields, are not yet covered by our model. But we believe that the concepts presented here to solve the integral equations pave the way to tackling those issues as well.

§ ACKNOWLEDGEMENTS

This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award #DE-SC0012371. FB thanks Alexander Stottmeister, Benjamin Krüger, and Kai Litzius for fruitful discussions and the German Science Foundation for financial support under grant number BU 3297/1-1.

References
[1] Nagaosa, N. & Tokura, Y. Topological properties and dynamics of magnetic skyrmions. Nature Nanotechnology 8, 899–911 (2013).
[2] Büttner, F. & Kläui, M. Magnetic Skyrmion Dynamics. In Skyrmions, Series in Material Science and Engineering, 211–238 (CRC Press, 2016).
[3] Tomasello, R. et al. A strategy for the design of skyrmion racetrack memories. Scientific Reports 4, 6784 (2014).
[4] Makhfudz, I., Krüger, B. & Tchernyshyov, O. Inertia and Chiral Edge Modes of a Skyrmion Magnetic Bubble. Physical Review Letters 109, 217201 (2012).
[5] Everschor, K., Garst, M., Duine, R. A. & Rosch, A. Current-induced rotational torques in the skyrmion lattice phase of chiral magnets. Physical Review B 84, 064401 (2011).
[6] Iwasaki, J., Mochizuki, M. & Nagaosa, N. Current-induced skyrmion dynamics in constricted geometries. Nature Nanotechnology 8, 742–747 (2013).
[7] Sampaio, J., Cros, V., Rohart, S., Thiaville, A. & Fert, A. Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures. Nature Nanotechnology 8, 839–844 (2013).
[8] Büttner, F. et al. Dynamics and inertia of skyrmionic spin structures. Nature Physics 11, 225–228 (2015).
[9] Fert, A., Cros, V. & Sampaio, J. Skyrmions on the track. Nature Nanotechnology 8, 152–156 (2013).
[10] Wiesendanger, R. Nanoscale magnetic skyrmions in metallic films and multilayers: a new twist for spintronics. Nature Reviews Materials, 16044 (2016).
[11] Rosch, A. Skyrmions: Moving with the current. Nature Nanotechnology 8, 160–161 (2013).
[12] Malozemoff, A. P. & Slonczewski, J. C. Magnetic Domain Walls in Bubble Materials (Academic Press, New York, 1979).
[13] Heinze, S. et al. Spontaneous atomic-scale magnetic skyrmion lattice in two dimensions. Nature Physics 7, 713–718 (2011).
[14] Yu, X. Z. et al. Real-space observation of a two-dimensional skyrmion crystal. Nature 465, 901–904 (2010).
[15] Romming, N., Kubetzka, A., Hanneken, C., von Bergmann, K. & Wiesendanger, R. Field-Dependent Size and Shape of Single Magnetic Skyrmions. Physical Review Letters 114, 177203 (2015).
[16] Rößler, U. K., Bogdanov, A. N. & Pfleiderer, C. Spontaneous skyrmion ground states in magnetic metals. Nature 442, 797–801 (2006).
[17] Moreau-Luchaire, C. et al. Additive interfacial chiral interaction in multilayers for stabilization of small individual skyrmions at room temperature. Nature Nanotechnology 11, 444–448 (2016).
[18] Boulle, O. et al. Room-temperature chiral magnetic skyrmions in ultrathin magnetic nanostructures. Nature Nanotechnology 11, 449–454 (2016).
[19] Woo, S. et al. Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets. Nature Materials 15, 501–506 (2016).
[20] Yu, G. et al. Room-Temperature Creation and Spin–Orbit Torque Manipulation of Skyrmions in Thin Films with Engineered Asymmetry. Nano Letters 16, 1981 (2016).
[21] Jiang, W. et al. Blowing magnetic skyrmion bubbles. Science 349, 283–286 (2015).
[22] Jiang, W. et al. Direct observation of the skyrmion Hall effect. Nature Physics 13, 162–169 (2016).
[23] Litzius, K. et al. Skyrmion Hall effect revealed by direct time-resolved X-ray microscopy. Nature Physics 13, 170–175 (2016).
[24] Lemesh, I., Büttner, F. & Beach, G. S. D. Accurate model of the stripe domain phase of perpendicularly magnetized multilayers.
[25] Kooy, C. & Enz, U. Experimental and Theoretical Study of the Domain Configuration in Thin Layers of BaFe_12O_19. Philips Res. Repts. 15, 7–29 (1960).
[26] Ezawa, M. Giant Skyrmions Stabilized by Dipole-Dipole Interactions in Thin Ferromagnetic Films. Physical Review Letters 105, 197202 (2010).
[27] Guslienko, K. Skyrmion State Stability in Magnetic Nanodots With Perpendicular Anisotropy. IEEE Magnetics Letters 6, 1–4 (2015).
[28] Rohart, S. & Thiaville, A. Skyrmion confinement in ultrathin film nanostructures in the presence of Dzyaloshinskii-Moriya interaction. Physical Review B 88, 184422 (2013).
[29] Tu, Y.-O. Determination of Magnetization of Micromagnetic Wall in Bubble Domains by Direct Minimization. Journal of Applied Physics 42, 5704–5709 (1971).
[30] Bogdanov, A. & Hubert, A. Thermodynamically stable magnetic vortex states in magnetic crystals. Journal of Magnetism and Magnetic Materials 138, 255–269 (1994).
[31] Kiselev, N. S., Bogdanov, A. N., Schäfer, R. & Rößler, U. K. Chiral skyrmions in thin magnetic films: new objects for magnetic storage technologies? Journal of Physics D: Applied Physics 44, 392001 (2011).
[32] Leonov, A. O. et al. The properties of isolated chiral skyrmions in thin magnetic films. New Journal of Physics 18, 065003 (2016).
[33] Braun, H.-B. Fluctuations and instabilities of ferromagnetic domain-wall pairs in an external magnetic field. Physical Review B 50, 16485–16500 (1994).
[34] Yu, X., Tokunaga, Y., Taguchi, Y. & Tokura, Y. Variation of Topology in Magnetic Bubbles in a Colossal Magnetoresistive Manganite. Advanced Materials 29, 1603958 (2016).
[35] Cape, J. A. & Lehman, G. W. Magnetic Domain Structures in Thin Uniaxial Plates with Perpendicular Easy Axis. Journal of Applied Physics 42, 5732–5756 (1971).
[36] Schott, M. et al. The skyrmion switch: turning magnetic skyrmion bubbles on and off with an electric field. arXiv:1611.01453 (2016).
[37] Belavin, A. A. & Polyakov, A. M. Metastable states of two-dimensional isotropic ferromagnets. JETP Letters 22, 245 (1975).
[38] Rohart, S., Miltat, J. & Thiaville, A. Path to collapse for an isolated Néel skyrmion. Physical Review B 93, 214412 (2016).
[39] Kiselev, N. S., Bogdanov, A. N., Schäfer, R. & Rößler, U. K. Comment on "Giant Skyrmions Stabilized by Dipole-Dipole Interactions in Thin Ferromagnetic Films". Physical Review Letters 107, 179701 (2011).
[40] Schwarze, T. et al. Universal helimagnon and skyrmion excitations in metallic, semiconducting and insulating chiral magnets. Nature Materials 14, 478–483 (2015).
[41] Parkin, S. S. P., Hayashi, M. & Thomas, L. Magnetic Domain-Wall Racetrack Memory. Science 320, 190–194 (2008).
[42] Wright, D. C. & Mermin, N. D. Crystalline liquids: the blue phases. Reviews of Modern Physics 61, 385–432 (1989).
[43] Barker, J. & Tretiakov, O. A. Antiferromagnetic Skyrmions. arXiv:1505.06156 (2015).
[44] Zhang, X., Zhou, Y. & Ezawa, M. Antiferromagnetic Skyrmion: Stability, Creation and Manipulation. Scientific Reports 6, 24795 (2016).
http://arxiv.org/abs/1704.08489v1
{ "authors": [ "Felix Büttner", "Ivan Lemesh", "Geoffrey S. D. Beach" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170427094513", "title": "Full phase diagram of isolated skyrmions in a ferromagnet" }
SphereFace: Deep Hypersphere Embedding for Face Recognition

Weiyang Liu^1, Yandong Wen^2, Zhiding Yu^2, Ming Li^3, Bhiksha Raj^2, Le Song^1
^1Georgia Institute of Technology, ^2Carnegie Mellon University, ^3Sun Yat-Sen University
[email protected], {yandongw,yzhiding}@andrew.cmu.edu, [email protected]

This paper addresses the deep face recognition (FR) problem under the open-set protocol, where ideal face features are expected to have a smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of the angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Faces in the Wild (LFW), Youtube Faces (YTF) and the MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available[See the code at <https://github.com/wy1iu/sphereface>.].

§ INTRODUCTION

Recent years have witnessed the great success of convolutional neural networks (CNNs) in face recognition (FR). Owing to advanced network architectures <cit.> and discriminative learning approaches <cit.>, deep CNNs have boosted the FR performance to an unprecedented level. Typically, face recognition can be categorized as face identification and face verification <cit.>. The former classifies a face to a specific identity, while the latter determines whether a pair of faces belongs to the same identity. In terms of testing protocol, face recognition can be evaluated under closed-set or open-set settings, as illustrated in Fig. <ref>. For the closed-set protocol, all testing identities are predefined in the training set. It is natural to classify testing face images to the given identities. In this scenario, face verification is equivalent to performing identification for a pair of faces respectively (see left side of Fig. <ref>). Therefore, closed-set FR can be well addressed as a classification problem, where features are expected to be separable. For the open-set protocol, the testing identities are usually disjoint from the training set, which makes FR more challenging yet closer to practice. Since it is impossible to classify faces to known identities in the training set, we need to map faces to a discriminative feature space. In this scenario, face identification can be viewed as performing face verification between the probe face and every identity in the gallery (see right side of Fig. <ref>). Open-set FR is essentially a metric learning problem, where the key is to learn discriminative large-margin features. Desired features for open-set FR are expected to satisfy the criterion that the maximal intra-class distance is smaller than the minimal inter-class distance under a certain metric space. This criterion is necessary if we want to achieve perfect accuracy using a nearest neighbor classifier.
However, learning features with this criterion is generally difficult because of the intrinsically large intra-class variation and high inter-class similarity <cit.> that faces exhibit. Few CNN-based approaches are able to effectively formulate the aforementioned criterion in loss functions. Pioneering works <cit.> learn face features via the softmax loss[Following <cit.>, we define the softmax loss as the combination of the last fully connected layer, softmax function and cross-entropy loss.], but softmax loss only learns separable features that are not discriminative enough. To address this, some methods combine softmax loss with contrastive loss <cit.> or center loss <cit.> to enhance the discrimination power of features. <cit.> adopts triplet loss to supervise the embedding learning, leading to state-of-the-art face recognition results. However, center loss only explicitly encourages intra-class compactness. Both contrastive loss <cit.> and triplet loss <cit.> cannot impose constraints on each individual sample, and thus require a carefully designed pair/triplet mining procedure, which is both time-consuming and performance-sensitive.

It seems to be a widely recognized choice to impose a Euclidean margin on learned features, but a question arises: Is the Euclidean margin always suitable for learning discriminative face features? To answer this question, we first look into how Euclidean margin based losses are applied to FR. Most recent approaches <cit.> combine Euclidean margin based losses with softmax loss to construct a joint supervision. However, as can be observed from Fig. <ref>, the features learned by softmax loss have an intrinsic angular distribution (also verified by <cit.>). In some sense, Euclidean margin based losses are incompatible with softmax loss, so it is not well motivated to combine these two types of losses.

In this paper, we propose to incorporate an angular margin instead. We start with a binary-class case to analyze the softmax loss. The decision boundary in softmax loss is (W_1 - W_2)x + b_1 - b_2 = 0, where W_i and b_i are weights and bias[If not specified, the weights and biases in the paper correspond to the fully connected layer in the softmax loss.] in softmax loss, respectively. If we define x as a feature vector and constrain ‖W_1‖ = ‖W_2‖ = 1 and b_1 = b_2 = 0, the decision boundary becomes ‖x‖(cos(θ_1) - cos(θ_2)) = 0, where θ_i is the angle between W_i and x. The new decision boundary only depends on θ_1 and θ_2. The modified softmax loss is able to directly optimize angles, enabling CNNs to learn angularly distributed features (Fig. <ref>).

Compared to the original softmax loss, the features learned by the modified softmax loss are angularly distributed, but not necessarily more discriminative. To this end, we generalize the modified softmax loss to the angular softmax (A-Softmax) loss. Specifically, we introduce an integer m (m ≥ 1) to quantitatively control the decision boundary. In the binary-class case, the decision boundaries for class 1 and class 2 become ‖x‖(cos(mθ_1) - cos(θ_2)) = 0 and ‖x‖(cos(θ_1) - cos(mθ_2)) = 0, respectively. m quantitatively controls the size of the angular margin. Furthermore, A-Softmax loss can be easily generalized to multiple classes, similar to softmax loss. By optimizing A-Softmax loss, the decision regions become more separated, simultaneously enlarging the inter-class margin and compressing the intra-class angular distribution. A-Softmax loss has a clear geometric interpretation.
Supervised by A-Softmax loss, the learned features construct a discriminative angular distance metric that is equivalent to geodesic distance on a hypersphere manifold. A-Softmax loss can be interpreted as constraining learned features to be discriminative on a hypersphere manifold, which intrinsically matches the prior that face images lie on a manifold <cit.>. The close connection between A-Softmax loss and hypersphere manifolds makes the learned features more effective for face recognition. For this reason, we term the learned features SphereFace. Moreover, A-Softmax loss can quantitatively adjust the angular margin via a parameter m, enabling us to do quantitative analysis. In light of this, we derive lower bounds for the parameter m to approximate the desired open-set FR criterion that the maximal intra-class distance should be smaller than the minimal inter-class distance. Our major contributions can be summarized as follows: (1) We propose A-Softmax loss for CNNs to learn discriminative face features with a clear and novel geometric interpretation. The learned features discriminatively span a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. (2) We derive lower bounds for m such that A-Softmax loss can approximate the learning task that the minimal inter-class distance is larger than the maximal intra-class distance. (3) We are the very first to show the effectiveness of the angular margin in FR. Trained on the publicly available CASIA dataset <cit.>, SphereFace achieves competitive results on several benchmarks, including Labeled Faces in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1.

§ RELATED WORK

Metric learning. Metric learning aims to learn a similarity (distance) function. Traditional metric learning <cit.> usually learns a matrix A for a distance metric ‖x_1 - x_2‖_A = √((x_1 - x_2)^T A (x_1 - x_2)) upon the given features x_1, x_2. Recently, prevailing deep metric learning <cit.> usually uses neural networks to automatically learn discriminative features x_1, x_2 followed by a simple distance metric such as the Euclidean distance ‖x_1 - x_2‖_2. The most widely used loss functions for deep metric learning are contrastive loss <cit.> and triplet loss <cit.>, and both impose a Euclidean margin to features.

Deep face recognition. Deep face recognition is arguably one of the most active research areas in the past few years. <cit.> address the open-set FR using CNNs supervised by softmax loss, which essentially treats open-set FR as a multi-class classification problem. <cit.> combines contrastive loss and softmax loss to jointly supervise the CNN training, greatly boosting the performance. <cit.> uses triplet loss to learn a unified face embedding. Training on nearly 200 million face images, they achieve the current state-of-the-art FR accuracy. Inspired by linear discriminant analysis, <cit.> proposes center loss for CNNs and also obtains promising performance. In general, current well-performing CNNs <cit.> for FR are mostly built on either contrastive loss or triplet loss. One could notice that state-of-the-art FR methods usually adopt ideas (e.g. contrastive loss, triplet loss) from metric learning, showing that open-set FR could be well addressed by discriminative metric learning. L-Softmax loss <cit.> also implicitly involves the concept of angles. As a regularization method, it shows great improvement on closed-set classification problems. Differently, A-Softmax loss is developed to learn a discriminative face embedding.
The explicit connection to the hypersphere manifold makes our learned features particularly suitable for the open-set FR problem, as verified by our experiments. In addition, the angular margin in A-Softmax loss is explicitly imposed and can be quantitatively controlled (e.g. lower bounds to approximate the desired feature criterion), while <cit.> can only be analyzed qualitatively.

§ DEEP HYPERSPHERE EMBEDDING

§.§ Revisiting the Softmax Loss

We revisit the softmax loss by looking into its decision criteria. In the binary-class case, the posterior probabilities obtained by the softmax loss are

p_1 = exp(W_1^T x + b_1) / (exp(W_1^T x + b_1) + exp(W_2^T x + b_2))
p_2 = exp(W_2^T x + b_2) / (exp(W_1^T x + b_1) + exp(W_2^T x + b_2))

where x is the learned feature vector. W_i and b_i are the weights and bias of the last fully connected layer corresponding to class i, respectively. The predicted label will be assigned to class 1 if p_1 > p_2 and class 2 if p_1 < p_2. By comparing p_1 and p_2, it is clear that W_1^T x + b_1 and W_2^T x + b_2 determine the classification result. The decision boundary is (W_1 - W_2)x + b_1 - b_2 = 0. We then rewrite W_i^T x + b_i as ‖W_i‖‖x‖cos(θ_i) + b_i, where θ_i is the angle between W_i and x. Notice that if we normalize the weights and zero the biases (‖W_i‖ = 1, b_i = 0), the posterior probabilities become p_1 = ‖x‖cos(θ_1) and p_2 = ‖x‖cos(θ_2). Since p_1 and p_2 share the same x, the final result only depends on the angles θ_1 and θ_2. The decision boundary also becomes cos(θ_1) - cos(θ_2) = 0 (i.e. the angular bisector of vectors W_1 and W_2). Although the above analysis is built on the binary-class case, it is trivial to generalize it to the multi-class case. During training, the modified softmax loss (‖W_i‖ = 1, b_i = 0) encourages features from the i-th class to have a smaller angle θ_i (larger cosine distance) than others, which makes the angles between W_i and the features a reliable metric for classification.

To give a formal expression for the modified softmax loss, we first define the input feature x_i and its label y_i. The original softmax loss can be written as

L = (1/N) Σ_i L_i = (1/N) Σ_i -log( e^{f_{y_i}} / Σ_j e^{f_j} )

where f_j denotes the j-th element (j ∈ [1, K], K is the class number) of the class score vector f, and N is the number of training samples. In CNNs, f is usually the output of a fully connected layer W, so f_j = W_j^T x_i + b_j and f_{y_i} = W_{y_i}^T x_i + b_{y_i}, where x_i, W_j, W_{y_i} are the i-th training sample, and the j-th and y_i-th column of W, respectively. We further reformulate L_i in Eq. (<ref>) as

L_i = -log( e^{W_{y_i}^T x_i + b_{y_i}} / Σ_j e^{W_j^T x_i + b_j} )
    = -log( e^{‖W_{y_i}‖‖x_i‖cos(θ_{y_i,i}) + b_{y_i}} / Σ_j e^{‖W_j‖‖x_i‖cos(θ_{j,i}) + b_j} )

in which θ_{j,i} (0 ≤ θ_{j,i} ≤ π) is the angle between vector W_j and x_i. As analyzed above, we first normalize ‖W_j‖ = 1, ∀j in each iteration and zero the biases. Then we have the modified softmax loss:

L_modified = (1/N) Σ_i -log( e^{‖x_i‖cos(θ_{y_i,i})} / Σ_j e^{‖x_i‖cos(θ_{j,i})} )

Although we can learn features with angular boundaries with the modified softmax loss, these features are still not necessarily discriminative. Since we use angles as the distance metric, it is natural to incorporate an angular margin to the learned features in order to enhance the discrimination power. A minimal numerical sketch of this modified loss is given below.
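The following NumPy sketch implements the forward pass of Eq. (4), with the columns of W normalized and the biases zeroed. It is an illustration only (not the authors' Caffe implementation); the toy feature dimension and random data are assumptions.

```python
import numpy as np

def modified_softmax_loss(X, W, y):
    # Eq. (4): logits are ||x_i|| cos(theta_{j,i}) with ||W_j|| = 1, b_j = 0.
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # normalize each column
    x_norm = np.linalg.norm(X, axis=1, keepdims=True)   # ||x_i||
    cos_theta = (X @ Wn) / x_norm                       # cos(theta_{j,i})
    logits = x_norm * cos_theta
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean()

# toy usage: 4 samples, 3-dim features, 2 classes (random, hypothetical data)
rng = np.random.default_rng(0)
X, W, y = rng.normal(size=(4, 3)), rng.normal(size=(3, 2)), np.array([0, 1, 0, 1])
print(modified_softmax_loss(X, W, y))
```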
To this end, we propose a more natural way to combine the angular margin.

§.§ Introducing Angular Margin to Softmax Loss

Instead of designing a new type of loss function and constructing a weighted combination with softmax loss (similar to contrastive loss), we propose a more natural way to learn an angular margin. From the previous analysis of the softmax loss, we learn that decision boundaries can greatly affect the feature distribution, so our basic idea is to manipulate decision boundaries to produce an angular margin. We first give a motivating binary-class example to explain how our idea works. Assume a learned feature x from class 1 is given and θ_i is the angle between x and W_i. It is known that the modified softmax loss requires cos(θ_1) > cos(θ_2) to correctly classify x. But what if we instead require cos(mθ_1) > cos(θ_2), where m ≥ 2 is an integer, in order to correctly classify x? It essentially makes the decision more stringent than before, because we require a lower bound[The inequality cos(θ_1) > cos(mθ_1) holds while θ_1 ∈ [0, π/m], m ≥ 2.] of cos(θ_1) to be larger than cos(θ_2). The decision boundary for class 1 is cos(mθ_1) = cos(θ_2). Similarly, if we require cos(mθ_2) > cos(θ_1) to correctly classify features from class 2, the decision boundary for class 2 is cos(mθ_2) = cos(θ_1). Suppose all training samples are correctly classified; such decision boundaries will produce an angular margin of ((m-1)/(m+1))θ_2^1, where θ_2^1 is the angle between W_1 and W_2. From an angular perspective, correctly classifying x from identity 1 requires θ_1 < θ_2/m, while correctly classifying x from identity 2 requires θ_2 < θ_1/m. Both are more difficult than the original θ_1 < θ_2 and θ_2 < θ_1, respectively. By directly formulating this idea into the modified softmax loss of Eq. (<ref>), we have

L_ang = (1/N) Σ_i -log( e^{‖x_i‖cos(mθ_{y_i,i})} / ( e^{‖x_i‖cos(mθ_{y_i,i})} + Σ_{j≠y_i} e^{‖x_i‖cos(θ_{j,i})} ) )

where θ_{y_i,i} has to be in the range of [0, π/m]. In order to get rid of this restriction and make it optimizable in CNNs, we expand the definition range of cos(θ_{y_i,i}) by generalizing it to a monotonically decreasing angle function ψ(θ_{y_i,i}), which should be equal to cos(θ_{y_i,i}) in [0, π/m]. Therefore, our proposed A-Softmax loss is formulated as:

L_ang = (1/N) Σ_i -log( e^{‖x_i‖ψ(θ_{y_i,i})} / ( e^{‖x_i‖ψ(θ_{y_i,i})} + Σ_{j≠y_i} e^{‖x_i‖cos(θ_{j,i})} ) )

in which we define ψ(θ_{y_i,i}) = (-1)^k cos(mθ_{y_i,i}) - 2k for θ_{y_i,i} ∈ [kπ/m, (k+1)π/m] and k ∈ [0, m-1]. m ≥ 1 is an integer that controls the size of the angular margin. When m = 1, it becomes the modified softmax loss.

The justification of A-Softmax loss can also be made from the decision boundary perspective. A-Softmax loss adopts a different decision boundary for each class (each boundary being more stringent than the original), thus producing an angular margin. The comparison of decision boundaries is given in Table <ref>. From the original softmax loss to the modified softmax loss, we move from optimizing inner products to optimizing angles. From the modified softmax loss to A-Softmax loss, the decision boundary becomes more stringent and separated. The angular margin increases with larger m and is zero if m = 1.

Supervised by A-Softmax loss, CNNs learn face features with a geometrically interpretable angular margin. Because A-Softmax loss requires ‖W_i‖ = 1, b_i = 0, the prediction depends only on the angles between the sample x and the W_i. So x can be classified to the identity with the smallest angle.
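Extending the sketch above, the following illustrates ψ(θ) of Eq. (6) and a forward-only A-Softmax loss. This is again a minimal NumPy sketch under assumed toy shapes, not the authors' released implementation, and without the gradient expressions a training framework would supply.

```python
import numpy as np

def psi(theta, m):
    # Eq. (6): psi(theta) = (-1)^k cos(m*theta) - 2k,
    # for theta in [k*pi/m, (k+1)*pi/m], k = 0, ..., m-1.
    k = np.minimum(np.floor(theta * m / np.pi), m - 1)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

def a_softmax_loss(X, W, y, m=4):
    # Forward pass of Eq. (6): the target-class logit uses psi(theta_{y_i,i});
    # all other logits keep ||x_i|| cos(theta_{j,i}).
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # ||W_j|| = 1, b_j = 0
    x_norm = np.linalg.norm(X, axis=1, keepdims=True)
    cos_theta = np.clip((X @ Wn) / x_norm, -1.0, 1.0)
    idx = np.arange(len(y))
    theta_y = np.arccos(cos_theta[idx, y])
    logits = x_norm * cos_theta
    logits[idx, y] = x_norm[:, 0] * psi(theta_y, m)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[idx, y]).mean()

# toy check (shapes and data are assumptions); m=1 reduces to the modified loss
rng = np.random.default_rng(0)
X, W, y = rng.normal(size=(4, 3)), rng.normal(size=(3, 2)), np.array([0, 1, 0, 1])
print(a_softmax_loss(X, W, y, m=4), a_softmax_loss(X, W, y, m=1))
```

Note that the sketch computes θ via arccos for clarity, whereas the next paragraph explains why the actual training formulation avoids θ altogether.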
The parameter m is added for the purpose of learning an angular margin between different identities. To facilitate gradient computation and back propagation, we replace cos(θ_{j,i}) and cos(mθ_{y_i,i}) with expressions containing only W and x_i, which is easily done by the definition of cosine and the multi-angle formula (also the reason why we need m to be an integer). Without θ, we can compute the derivatives with respect to x and W, similar to softmax loss.

§.§ Hypersphere Interpretation of A-Softmax Loss

A-Softmax loss has stronger requirements for a correct classification when m ≥ 2, which generates an angular classification margin between the learned features of different classes. A-Softmax loss not only imposes discriminative power on the learned features via the angular margin, but also renders a nice and novel hypersphere interpretation. As shown in Fig. <ref>, A-Softmax loss is equivalent to learning features that are discriminative on a hypersphere manifold, while Euclidean margin losses learn features in Euclidean space. To simplify, we take the binary case to analyze the hypersphere interpretation. Considering a sample x from class 1 and the two column weights W_1, W_2, the classification rule for A-Softmax loss is cos(mθ_1) > cos(θ_2), equivalently mθ_1 < θ_2. Notice that θ_1, θ_2 are equal to their corresponding arc lengths ω_1, ω_2[ω_i is the shortest arc length (geodesic distance) between W_i and the projected point of sample x on the unit hypersphere, while the corresponding θ_i is the angle between W_i and x.] on the unit hypersphere {v_j, ∀j | Σ_j v_j^2 = 1, v ≥ 0}. Because ‖W_1‖ = ‖W_2‖ = 1, the decision relies on the arc lengths ω_1 and ω_2. The decision boundary is equivalent to mω_1 = ω_2, and the constrained region for correctly classifying x to class 1 is mω_1 < ω_2. Geometrically speaking, this is a hypercircle-like region lying on a hypersphere manifold. For example, it is a circle-like region on the unit sphere in the 3D case, as illustrated in Fig. <ref>. Note that larger m leads to a smaller hypercircle-like region for each class, which is an explicit discriminative constraint on a manifold. For better understanding, Fig. <ref> provides 2D and 3D visualizations. One can see that A-Softmax loss imposes an arc-length constraint on a unit circle in the 2D case and a circle-like region constraint on a unit sphere in the 3D case. Our analysis shows that optimizing angles with A-Softmax loss essentially makes the learned features more discriminative on a hypersphere.

§.§ Properties of A-Softmax Loss

Property 1. A-Softmax loss defines a large angular margin learning task with adjustable difficulty. With larger m, the angular margin becomes larger, the constrained region on the manifold becomes smaller, and the corresponding learning task also becomes more difficult.

We know that the larger m is, the larger the angular margin A-Softmax loss constrains. There exists a minimal m that constrains the maximal intra-class angular distance to be smaller than the minimal inter-class angular distance, which can also be observed in our experiments.

Definition 1 (minimal m for the desired feature distribution). m_min is the minimal value such that, while m > m_min, A-Softmax loss defines a learning task where the maximal intra-class angular feature distance is constrained to be smaller than the minimal inter-class angular feature distance.

Property 2 (lower bound of m_min in the binary-class case). In the binary-class case, we have m_min ≥ 2+√3.

Proof. We consider the space spanned by W_1 and W_2.
Because m ≥ 2, it is easy to obtain that the maximal angle that class 1 spans is θ_12/(m-1) + θ_12/(m+1), where θ_12 is the angle between W_1 and W_2. To require the maximal intra-class feature angular distance to be smaller than the minimal inter-class feature angular distance, we need to constrain

θ_12/(m-1) + θ_12/(m+1) [max intra-class angle] ≤ (m-1)θ_12/(m+1) [min inter-class angle],  if θ_12 ≤ ((m-1)/m)π,

(2π-θ_12)/(m+1) + θ_12/(m+1) [max intra-class angle] ≤ (m-1)θ_12/(m+1) [min inter-class angle],  if θ_12 > ((m-1)/m)π.

After solving these two inequalities, we obtain m_min ≥ 2+√3, which is a lower bound for the binary case.

Property 3 (lower bound of m_min in the multi-class case). Under the assumption that the W_i, ∀i are uniformly spaced in Euclidean space, we have m_min ≥ 3.

Proof. We consider the 2D k-class (k ≥ 3) scenario for the lower bound. Because the W_i, ∀i are uniformly spaced in the 2D Euclidean space, we have θ_i^{i+1} = 2π/k, where θ_i^{i+1} is the angle between W_i and W_{i+1}. Since the W_i, ∀i are symmetric, we only need to analyze one of them. For the i-th class (W_i), we need to constrain

θ_i^{i+1}/(m+1) + θ_{i-1}^i/(m+1) [max intra-class angle] ≤ min{ (m-1)θ_i^{i+1}/(m+1), (m-1)θ_{i-1}^i/(m+1) } [min inter-class angle].

After solving this inequality, we obtain m_min ≥ 3, which is a lower bound for the multi-class case.

Based on this, we use m = 4 to approximate the desired feature distribution criterion. Since the lower bounds are not necessarily tight, giving a tighter lower bound and an upper bound under certain conditions is also possible, which we leave to future work. Experiments also show that larger m consistently works better and m = 4 will usually suffice.

§.§ Discussions

Why angular margin. First and most importantly, the angular margin directly links to discriminativeness on a manifold, which intrinsically matches the prior that faces also lie on a manifold. Second, incorporating an angular margin into softmax loss is actually a more natural choice. As Fig. <ref> shows, features learned by the original softmax loss have an intrinsic angular distribution. So directly combining Euclidean margin constraints with softmax loss is not reasonable.

Comparison with existing losses. In the deep FR task, the most popular and well-performing loss functions include contrastive loss, triplet loss and center loss. First, they only impose a Euclidean margin to the learned features (w/o normalization), while ours instead directly considers an angular margin which is naturally motivated. Second, both contrastive loss and triplet loss suffer from data expansion when constituting the pairs/triplets from the training set, while ours requires no sample mining and imposes discriminative constraints on the entire mini-batches (compared to contrastive and triplet loss that only affect a few representative pairs/triplets).

§ EXPERIMENTS (MORE IN APPENDIX)

§.§ Experimental Settings

Preprocessing. We only use standard preprocessing. The face landmarks in all images are detected by MTCNN <cit.>. The cropped faces are obtained by similarity transformation. Each pixel ([0,255]) in the RGB images is normalized by subtracting 127.5 and then being divided by 128.

CNNs Setup. Caffe <cit.> is used to implement A-Softmax loss and the CNNs. The general framework to train and extract SphereFace features is shown in Fig. <ref>. We use residual units <cit.> in our CNN architecture. For fairness, all compared methods use the same CNN architecture (including residual units) as SphereFace. CNNs with different depths (4, 10, 20, 36, 64) are used to better evaluate our method.
CNNs Setup. Caffe <cit.> is used to implement A-Softmax loss and the CNNs. The general framework to train and extract SphereFace features is shown in Fig. <ref>. We use residual units <cit.> in our CNN architecture. For fairness, all compared methods use the same CNN architecture (including residual units) as SphereFace. CNNs with different depths (4, 10, 20, 36, 64) are used to better evaluate our method. The specific settings for the different CNNs we used are given in Table <ref>. According to the analysis in Section <ref>, we usually set m to 4 in A-Softmax loss unless specified otherwise. These models are trained with a batch size of 128 on four GPUs. The learning rate begins at 0.1 and is divided by 10 at the 16K and 24K iterations. The training is finished at 28K iterations.

Training Data. We use the publicly available web-collected training dataset CASIA-WebFace <cit.> (after excluding the images of identities appearing in the testing sets) to train our CNN models. CASIA-WebFace has 494,414 face images belonging to 10,575 different individuals. These face images are horizontally flipped for data augmentation. Notice that the scale of our training data (0.49M) is relatively small, especially compared to other private datasets used in DeepFace <cit.> (4M), VGGFace <cit.> (2M) and FaceNet <cit.> (200M).

Testing. We extract the deep features (SphereFace) from the output of the FC1 layer. For all experiments, the final representation of a testing face is obtained by concatenating its original face features and its horizontally flipped features. The score (metric) is computed as the cosine distance of the two features. The nearest-neighbor classifier and thresholding are used for face identification and verification, respectively.

§.§ Exploratory Experiments

Effect of m. To show that larger m leads to a larger angular margin (i.e. a more discriminative feature distribution on the manifold), we perform a toy example with different m. We train A-Softmax loss with the 6 individuals that have the most samples in CASIA-WebFace. We set the output feature dimension (FC1) to 3 and visualize the training samples in Fig. <ref>. One can observe that larger m leads to a more discriminative distribution on the sphere and also a larger angular margin, as expected. We also use class 1 (blue) and class 2 (dark green) to construct positive and negative pairs to evaluate the angle distributions of features from the same class and from different classes. The angle distributions of positive and negative pairs (the second row of Fig. <ref>) quantitatively show that the angular margin becomes larger as m increases, and that the classes also become more distinct from each other.

Besides the visual comparison, we also perform face recognition on LFW and YTF to evaluate the effect of m. For a fair comparison, we use the 64-layer CNN (Table <ref>) for all losses. Results are given in Table <ref>. One can observe that as m becomes larger, the accuracy of A-Softmax loss also improves, which shows that a larger angular margin can bring stronger discrimination power.

Effect of CNN architectures. We train A-Softmax loss (m=4) and the original softmax loss with different numbers of convolution layers. The specific CNN architectures can be found in Table <ref>. From Fig. <ref>, one can observe that A-Softmax loss consistently outperforms CNNs with the softmax loss (by 1.54%∼1.91%), indicating that A-Softmax loss is more suitable for open-set FR. Besides, the difficult learning task defined by A-Softmax loss makes full use of the superior learning capability of deeper architectures. A-Softmax loss greatly improves the verification accuracy from 98.20% to 99.42% on LFW, and from 93.4% to 95.0% on YTF. On the contrary, the improvement of deeper standard CNNs is unsatisfactory and also saturates easily (from 96.60% to 97.75% on LFW, from 91.1% to 93.1% on YTF).
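To make the angular margin concrete, below is a minimal NumPy sketch of the modified target logit. It assumes the standard SphereFace/L-Softmax surrogate ψ(θ) = (-1)^k cos(mθ) - 2k for θ ∈ [kπ/m, (k+1)π/m], which keeps the target logit monotonically decreasing in θ; the released code may differ in implementation details:

```python
import numpy as np

def a_softmax_logits(x, W, y, m=4):
    """A-Softmax (SphereFace) logits for one mini-batch.

    x: (N, d) features; W: (d, C) class weights; y: (N,) integer labels.
    Weights are normalized and biases zeroed, as described in the appendix.
    """
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # ||W_j|| = 1
    xnorm = np.linalg.norm(x, axis=1, keepdims=True)    # ||x_i||
    cos = np.clip(x @ Wn / xnorm, -1.0, 1.0)            # cos(theta_{j,i})
    logits = xnorm * cos                                # plain softmax logits
    theta = np.arccos(cos[np.arange(len(y)), y])        # angle to target class
    k = np.floor(theta * m / np.pi)
    psi = ((-1.0) ** k) * np.cos(m * theta) - 2.0 * k   # psi(theta_{y_i})
    logits[np.arange(len(y)), y] = xnorm[:, 0] * psi    # replace target logit
    return logits
```

The cross-entropy is then taken over these logits; during training the target logit is blended with the unmodified one via the λ-annealing strategy described in the appendix.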
§.§ Experiments on LFW and YTF

The LFW dataset <cit.> includes 13,233 face images from 5,749 different identities, and the YTF dataset <cit.> includes 3,424 videos from 1,595 different individuals. Both datasets contain faces with large variations in pose, expression and illumination. We follow the unrestricted with labeled outside data protocol <cit.> on both datasets. The performance of SphereFace is evaluated on 6,000 face pairs from LFW and 5,000 video pairs from YTF. The results are given in Table <ref>. For the contrastive loss and the center loss, we follow the FR convention and form a weighted combination with the softmax loss. The weights are selected via cross-validation on the training set. For L-Softmax <cit.>, we also use m=4. All the compared loss functions share the same 64-layer CNN architecture.

Most existing face verification systems achieve high performance with huge training data or model ensembles. Using a single model trained on a publicly available dataset (CASIA-WebFace, which is relatively small and has noisy labels), SphereFace achieves 99.42% and 95.0% accuracy on the LFW and YTF datasets, respectively. This is the current best performance trained on WebFace, and considerably better than that of the other models trained on the same dataset. Compared with models trained on high-quality private datasets, SphereFace is still very competitive, outperforming most of the existing results in Table <ref>. One should notice that our single-model performance is only worse than Google FaceNet, which is trained with more than 200 million images. For a fair comparison, we also implement the softmax loss, contrastive loss, center loss, triplet loss and L-Softmax loss <cit.> and train them with the same 64-layer CNN architecture as A-Softmax loss. As can be observed in Table <ref>, SphereFace consistently outperforms the features learned by all these compared losses, showing its superiority in FR tasks.

§.§ Experiments on MegaFace Challenge

The MegaFace dataset <cit.> is a recently released testing benchmark with a very challenging task: evaluating the performance of face recognition methods at the million scale of distractors. The MegaFace dataset contains a gallery set and a probe set. The gallery set contains more than 1 million images from 690K different individuals. The probe set consists of two existing datasets: Facescrub <cit.> and FGNet. MegaFace has several testing scenarios, including identification, verification and pose invariance, under two protocols (large or small training set). The training set is viewed as small if it contains fewer than 0.5M images. We evaluate SphereFace under the small training set protocol. We adopt two testing protocols: face identification and verification. The results are given in Fig. <ref> and Table <ref>. Note that we use a simple 3-patch feature concatenation ensemble as the final performance of SphereFace.

Fig. <ref> and Table <ref> show that SphereFace (3-patch ensemble) beats the second best result by a large margin (4.8% for the rank-1 identification rate and 6.3% for the verification rate) on the MegaFace benchmark under the small training dataset protocol. Compared to the models trained on large datasets (500 million images for Google and 18 million for NTechLAB), our method still performs better (by 0.64% for the identification rate and 1.4% for the verification rate). Moreover, in contrast to their sophisticated network designs, we only employ a typical CNN architecture supervised by A-Softmax loss to achieve such excellent performance.
For the single-model SphereFace, the face identification and verification accuracies are still 72.73% and 85.56%, respectively, which already outperforms most state-of-the-art methods. For a better evaluation, we also implement the softmax loss, contrastive loss, center loss, triplet loss and L-Softmax loss <cit.>. Compared to these loss functions trained with the same CNN architecture and dataset, SphereFace also shows significant and consistent improvements. These results convincingly demonstrate that the proposed SphereFace is well designed for open-set face recognition. One can also see that learning features with a large inter-class angular margin can significantly improve open-set FR performance.

§ CONCLUDING REMARKS

This paper presents a novel deep hypersphere embedding approach for face recognition. Specifically, we propose the angular softmax loss for CNNs to learn discriminative face features (SphereFace) with an angular margin. A-Softmax loss renders a nice geometric interpretation by constraining the learned features to be discriminative on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a non-linear manifold. This connection makes A-Softmax very effective for learning face representations. Competitive results on several popular face benchmarks demonstrate the superiority and great potential of our approach. We also believe A-Softmax loss could benefit some other tasks like object recognition, person re-identification, etc.

Appendix

§ THE INTUITION OF REMOVING THE LAST RELU

Standard CNNs usually connect a ReLU to the bottom of FC1, so the learned features will only distribute in the non-negative range [0,+∞), which limits the feasible learning space (angle) for the CNNs. To address this shortcoming, both SphereFace and <cit.> first propose to remove the ReLU nonlinearity that is connected to the bottom of FC1 in SphereFace networks. Intuitively, removing the ReLU can greatly benefit the feature learning, since it provides a larger feasible learning space (from an angular perspective).

Visualization on MNIST. Fig. <ref> shows the 2D visualization of the feature distributions on MNIST with and without the last ReLU. One can observe that with the ReLU the 2D features can only distribute in the first quadrant. Without the last ReLU, the learned feature distribution is much more reasonable.

§ NORMALIZING THE WEIGHTS COULD REDUCE THE PRIOR CAUSED BY THE TRAINING DATA IMBALANCE

We have emphasized in the main paper that normalizing the weights gives a better geometric interpretation. Besides this, we also justify why we want to normalize the weights from a different perspective. We find that normalizing the weights can implicitly reduce the prior brought in by the training data imbalance issue (e.g., the long-tail distribution of the training data). In other words, we argue that normalizing the weights can partially address the training data imbalance problem.

We conducted an empirical study of the relation between the sample number of each class and the 2-norm of the weights corresponding to the same class (the i-th column of W is associated with the i-th class). By computing the norm of W_i and the sample number of class i with respect to each class (see Fig. <ref>), we find that the larger the sample number a class has, the larger the associated norm of the weights tends to be. We argue that the norm of the weights W_i with respect to class i is largely determined by its sample distribution and sample number.
Therefore, the norm of the weights W_i, ∀i, can be viewed as a learned prior hidden in the training dataset. Eliminating such a prior is often beneficial to face verification. This is because face verification requires testing on a dataset whose identities do not appear in the training set, so the prior from the training dataset should not be transferred to testing. This prior may even be harmful to face verification performance. To eliminate such a prior, we normalize the weights of FC2[FC2 refers to the fully connected layer in the softmax loss (or A-Softmax loss).].

§ EMPIRICAL EXPERIMENT OF ZEROING OUT THE BIASES

Standard CNNs usually preserve the bias terms in the fully connected layers, but these bias terms make it difficult to analyze the proposed A-Softmax loss. This is because SphereFace aims to optimize the angle and produce an angular margin. With the bias of FC2, the angular geometric interpretation becomes much more difficult to analyze. To facilitate the analysis, we zero out the bias of FC2 following <cit.>. By setting the bias of FC2 to zero, the A-Softmax loss has a clear geometric interpretation and therefore becomes much easier to analyze. We show all the biases of FC2 from a CASIA-pretrained model in Fig. <ref>. One can observe that most of the biases are near zero, indicating that these biases are not necessarily useful for face verification.

Visualization on MNIST. We visualize the 2D feature distribution on the MNIST dataset with and without bias in Fig. <ref>. One can observe that zeroing out the bias has no direct influence on the feature distribution. The features learned with and without bias can both make full use of the learning space.

§ 2D VISUALIZATION OF A-SOFTMAX LOSS ON MNIST

We visualize the 2D feature distribution on MNIST in Fig. <ref>. It is obvious that with larger m the learned features become much more discriminative due to the larger inter-class angular margin. Most importantly, the learned discriminative features also generalize really well to the testing set.

§ ANGULAR FISHER SCORE FOR EVALUATING THE FEATURE DISCRIMINATIVENESS AND ABLATION STUDY ON OUR PROPOSED MODIFICATIONS

We first propose an angular Fisher score for evaluating the feature discriminativeness in angular-margin feature learning. The angular Fisher score (AFS) is defined by

AFS = S_w / S_b,

where the within-class scatter value is defined as S_w = ∑_i ∑_{x_j ∈ X_i} (1 - cos⟨x_j, m_i⟩) and the between-class scatter value is defined as S_b = ∑_i n_i (1 - cos⟨m_i, m⟩). Here X_i is the set of samples of the i-th class, m_i is the mean vector of the features from class i, m is the mean vector of the whole dataset, and n_i is the sample number of class i. In general, the lower the Fisher score is, the more discriminative the features are.

Next, we perform a comprehensive ablation study on all the proposed modifications: removing the last ReLU, removing the biases, normalizing the weights and applying A-Softmax loss. The experiments are performed using the 4-layer CNN described in Table <ref>. The models are trained on the CASIA dataset and tested on the LFW dataset. The setting is exactly the same as in the LFW experiment in the main paper. As shown in Table <ref>, we observe that all our modifications lead to performance improvements, and our A-Softmax loss greatly increases the angular feature discriminativeness.

§ EXPERIMENTS ON MEGAFACE WITH DIFFERENT CONVOLUTIONAL LAYERS

We also perform the experiment on the MegaFace dataset with CNNs of different numbers of convolutional layers. The results in Table <ref> show that the A-Softmax loss makes the best use of the network capacity. With more convolutional layers, the A-Softmax loss (i.e., SphereFace) performs better. Most notably, SphereFace with only 4 convolutional layers performs better than the softmax loss with 64 convolutional layers, which validates the superiority of our A-Softmax loss.
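As a worked example of the angular Fisher score defined above, here is a minimal NumPy sketch (the function and variable names are ours, not from the released code):

```python
import numpy as np

def angular_fisher_score(features, labels):
    """AFS = S_w / S_b; lower means more angularly discriminative features.

    features: (N, d) array of deep features; labels: (N,) integer class ids.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    m = features.mean(axis=0)                       # mean of the whole dataset
    s_w, s_b = 0.0, 0.0
    for c in np.unique(labels):
        X_c = features[labels == c]
        m_c = X_c.mean(axis=0)                      # class mean m_i
        s_w += sum(1.0 - cos(x, m_c) for x in X_c)  # within-class scatter
        s_b += len(X_c) * (1.0 - cos(m_c, m))       # between-class scatter
    return s_w / s_b
```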
§ THE ANNEALING OPTIMIZATION STRATEGY FOR A-SOFTMAX LOSS

The optimization of the A-Softmax loss is similar to that of the L-Softmax loss <cit.>. We use an annealing optimization strategy to train the network with A-Softmax loss. Put simply, the annealing strategy essentially supervises the network from an easy task (i.e., large λ) gradually to a difficult task (i.e., small λ). Specifically, we let

f_{y_i} = (λ‖x_i‖cos(θ_{y_i}) + ‖x_i‖ψ(θ_{y_i})) / (1+λ)

and start the stochastic gradient descent initially with a very large λ (which is equivalent to optimizing the original softmax). Then we gradually reduce λ during training. Ideally λ can be gradually reduced to zero, but in practice a small value will usually suffice. In most of our face experiments, decaying λ to 5 has already led to impressive results. A smaller λ could potentially yield better performance, but is also more difficult to train.

§ DETAILS OF THE 3-PATCH ENSEMBLE STRATEGY IN THE MEGAFACE CHALLENGE

We adopt a common strategy to perform the 3-patch ensemble, as shown in Fig. <ref>. Although using more patches could keep increasing the performance, considering the tradeoff between efficiency and accuracy we use a simple 3-patch concatenation ensemble (without the use of PCA). The 3 patches can be selected by cross-validation. The 3 patches we use in the paper are exactly the same as in Fig. <ref>.
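To make the annealing strategy above concrete, here is a minimal sketch of the blended target logit; the schedule values are our own illustrative choices, not taken from the released training configuration:

```python
def annealed_target_logit(x_norm, cos_t, psi_t, lam):
    """Blend the softmax and A-Softmax target logits:

    f_{y_i} = (lam * |x_i| * cos(theta_{y_i}) + |x_i| * psi(theta_{y_i})) / (1 + lam)

    Large lam -> plain softmax; lam -> 0 recovers the full angular margin.
    """
    return (lam * x_norm * cos_t + x_norm * psi_t) / (1.0 + lam)

# Illustrative schedule: start easy (large lam) and decay towards lam = 5,
# which the text reports is already sufficient in practice.
def lam_schedule(iteration, lam0=1000.0, lam_min=5.0, decay=0.999):
    return max(lam_min, lam0 * decay ** iteration)
```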
http://arxiv.org/abs/1704.08063v4
{ "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170426113722", "title": "SphereFace: Deep Hypersphere Embedding for Face Recognition" }
* Department of Photonics Engineering, Technical University of Denmark, Ørsteds Plads 343, DK-2800 Kongens Lyngby, Denmark * Center for Nanostructured Graphene, Technical University of Denmark, Ørsteds Plads 343, DK-2800 Kongens Lyngby, Denmark * Centre for Nano Optics, University of Southern Denmark, Campusvej 55, DK-5230 Odense M, Denmark

Controlling and confining light by exciting plasmons in resonant metallic nanostructures is an essential aspect of many new emerging optical technologies. Here we explore the possibility of controllably reconfiguring the intrinsic optical properties of semi-continuous gold films, by inducing permanent morphological changes with a femtosecond (fs)-pulsed laser above a critical power. Optical transmission spectroscopy measurements show a correlation between the spectra of the morphologically modified films and the wavelength, polarization, and intensity of the laser used for the alteration. In order to understand the modifications induced by the laser writing, we explore the near-field properties of these films with electron energy-loss spectroscopy (EELS). A comparison between our experimental data and full-wave simulations on the exact film morphologies hints toward a restructuring of the intrinsic plasmonic eigenmodes of the metallic film by photothermal effects. We explain these optical changes with a simple model and demonstrate experimentally that laser writing can be used to controllably modify the optical properties of these semi-continuous films. These metal films offer an easy-to-fabricate and scalable platform for technological applications such as molecular sensing and ultra-dense data storage.

§ INTRODUCTION

The ability of metallic nanostructures to localize and enhance optical fields down to the nanoscale via collective electron excitations (plasmons) has been the subject of intense study in recent decades<cit.>. Many promising applications arise from the careful engineering of nanostructures to tune their optical properties. This allows sub-diffraction light focusing for spectroscopy and sensing applications like surface-enhanced Raman scattering (SERS)<cit.>, enhancing light-matter interaction with new 2D materials<cit.>, and quantum information processing technologies<cit.>. Another application recently gaining in popularity is the tuning of a surface's spectral reflectivity by plasmonic nanostructures to produce colour images<cit.> with ultra-dense information storage<cit.>. However, due to the nanometre size scales needed, these structures often require elaborate fabrication methods such as electron-beam lithography (EBL) or focused ion beam (FIB) milling<cit.>. EBL allows for precise and reproducible definition of nanostructures, but at the cost of time-consuming process steps to produce a mask pattern in a polymer resist<cit.>. Additionally, in most state-of-the-art lithography systems the spatial resolution is still limited to about 10 nm<cit.>.
FIB offers an alternative method for high-resolution, mask-less fabrication, but still requires long pattern writing times, and also comes with the potential problem of contaminating the structure materials with the milling ions<cit.>.

Self-assembled and self-similar metallic structures, where high levels of field enhancement are hosted by naturally occurring sub-nanometre sized gaps or protrusions, are thus of great interest, as they offer an alternative source of strong field localization and enhancement. Typically these structures also have fast and scalable (bottom-up) fabrication methods with few processing steps<cit.>. While these types of structures promise scalable and easily fabricated nanostructures, their specific optical properties are, however, limited within the scope of their assembly methods. One way to expand the range of achievable excitations is of course to influence the assembly process during fabrication<cit.>, but another way is to alter metallic and dielectric nanostructures post-assembly via controlled photothermal reshaping<cit.>. One such system, which we study here, is thin gold films near the percolation threshold subjected to morphological modifications by pulsed laser illumination.

The nanostructured morphology of percolation metal films arises from the Volmer–Weber process of metal growth on dielectric substrates<cit.>. During the deposition process the metal atoms have a mutually strong interaction, while interacting less with the substrate. This leads to the formation of isolated clusters that tend to grow in the substrate plane during deposition, eventually reaching a percolation threshold where they merge to form a connected system. Further metal deposition will then serve to close up any remaining gaps in the film morphology, and eventually the system will transition into a metal-on-metal growth process. Depending on the deposition parameters and substrate used, it is possible to routinely fabricate large-scale areas of such metal structures where the smallest feature sizes can be at the sub-nanometre scale<cit.>. These films also hold interesting non-linear optical properties, with prospects for white-light generation with modest pump powers<cit.>. However, the optical properties of these films remain to be fully investigated experimentally with dedicated spectroscopic methods.

The small feature sizes of these metal films present a significant challenge for experimental exploration with conventional near-field optics. For scattering scanning near-field optical microscopes (s-SNOMs) the spatial resolution limit of the setup will still be defined by the cantilever tip size, which even in the most advanced devices is still of the order of a few nanometres<cit.>. Despite this being an exceptionally high spatial resolution for photon-based characterization, the limited spatial resolution of the tip does not allow one to explore the optical properties of ultra-small plasmonic gaps with dimensions on the single-nanometre scale and below<cit.>. Furthermore, s-SNOM techniques are usually spectrally limited to a single energy or a small set of energies, which makes it difficult to fully characterize the plasmonic eigenmodes of these complex structures. Electron-based spectroscopic techniques, such as energy-filtered electron energy-loss spectroscopy in monochromated transmission electron microscopes (TEMs), offer an attractive alternative to access the optical properties of nanometre- and subnanometre-sized plasmonic structures<cit.>.
Indeed, the exceptional subnanometre spatial resolution and the broadband excitation spectrum of the electron beam<cit.> have recently been used to spatially map the intrinsic plasmonic eigenmodes in self-similar pristine silver films<cit.>. This method permits the unambiguous identification of plasmonic hot spots that are not related to the morphology of the films in a simple way, as predicted theoretically<cit.>.

Here we explore novel aspects of the optical reconfiguration of the plasmon excitations in gold films near the percolation threshold. After illumination with a fs-pulsed laser above a critical power, it is possible to induce permanent morphological modifications in the films. These changes allow for controlled changes to the film's optical properties. Using optical transmission spectroscopy, we demonstrate how it is possible to inscribe polarized and wavelength-selective field enhancement into these otherwise broadly resonant films with laser writing. We explain the morphology changes of the films under laser illumination by photothermal reshaping of the metal nanoparticles in resonance with the excitation wavelength. We show that this process depends intricately on the degree of film percolation, and on the illumination power, polarization, and wavelength of the laser. Our explanation includes three different processes: particle spherification for elongated nanoparticles, dimer decoupling for large plasmonic gaps, and dimer welding/particle fusion for small gaps. We use hyperspectral imaging with nanometre resolution to show statistically the effect of the morphological changes on the distribution of plasmonic modes in the near-field. By utilizing full-wave simulations of our explored morphologies, we highlight that the polarization dependence in the inscribed films originates from resonant elongated particles that have formed during the photothermal processes. Despite the complexity of the optical properties of these percolation films, we show that the far-field optical properties can be modified controllably through laser illumination.

§ RESULTS

§.§ Morphology changes and induced optical anisotropy.

We have fabricated 3 samples of thin gold films on 18 nm thin SiO_2 TEM membranes (see methods). The films have nominal thicknesses of 5, 6, and 7 nm. After fabrication, a series of areas on the gold films are illuminated by scanning a fs-pulsed laser at various powers across the films, Fig. <ref>.a (see methods). If the illumination is performed above a critical level of laser power, permanent morphological changes will be induced in the films. The degree and nature of this morphology reconfiguration can be controlled based on the scan parameters used, but principally depend on the laser power or wavelength used. Fig. <ref>.b shows scanning transmission electron microscope (STEM) dark-field images of the intrinsic film morphologies, and examples of the effect of laser illumination (a more detailed morphology study is available in supplementary Fig. 1). In Fig. <ref>.b we observe that the reconfiguration of the gold films is remarkably different for percolated (7 nm) and unpercolated (5 nm) samples. Indeed, high powers in the writing laser will generate more isolated nanoparticles in gold films with thicknesses below the percolation threshold.
This is in stark contrast to the percolated metal film, where the film is already an inter-connected network of clusters, and it remains a network of connected gold clusters even for much larger laser powers.

To understand how the morphology changes are linked to changes in the plasmonic resonances of the intrinsic film, we present a simple toy model of resonant elongated particles that have their aspect ratios altered while maintaining their initial volume (see methods). We can imagine that three different scenarios will occur during the photothermal reshaping: particle shortening/spherification, particle decoupling/gap widening, and particle fusion/welding. In Fig. <ref>.c we highlight how shortening a particle while adding the 'lost' volume to either its height or width will result in a blueshift of the particle's longitudinal mode. This change in aspect ratio corresponds to the general contraction of a metal cluster into a more spherical shape, due to the surface tension of the molten metal trying to minimize the particle's surface energy. In Fig. <ref>.d we show how this kind of particle contraction (lost volume in length added to height) of two coupled particles will again result in a blueshift of the system's longitudinal mode, due to the increasing gap distance between them. When the two particles are sufficiently contracted, their coupling will also be broken and they will start resonating like two individual particles. This kind of morphology change corresponds to how plasmonically coupled clusters that are too far apart for particle welding/fusion become decoupled. Finally, in Fig. <ref>.e we have the scenario of two metal clusters that are close enough to each other to fuse under the laser illumination. We here simulate the fusion process by progressively moving the two elongated particles closer to each other, and when they start to overlap (gap size of 0) the overlapping volume gets added to the resulting single particle's height. Both in terms of closing the gap distance and merging the two particles into one, a general redshift of the initial resonance is observed, with a strong shift at the merging point. After merging, the combined particle exhibits single-particle behaviour (see Fig. <ref>.c).

The effect of the laser illumination on the gold films and its morphological changes can be observed with a microscope at low magnification, as shown in Fig. <ref>.f and g. From Fig. <ref>.f and g we see a clear difference between the illuminated and the unilluminated regions of the gold films, from the distinctive colours that emerge from the morphological changes in the film. Interestingly, from Fig. <ref>.f and g, we also see that the laser writing has left behind a polarization dependence in the films' optical properties, correlated with the polarization of the laser used to perform the writing, as a redder hue becomes visible for the polarization aligned with the one used in the laser writing.

§.§ Optical far-field properties of optically reconfigured gold films.

To investigate the emergent red colour from laser illumination in greater detail, we perform optical spectroscopy on our samples with an inverted microscope connected to a state-of-the-art spectrometer (see methods). From the bright-field transmission spectra of Fig. <ref> (full data available in supplementary Fig.
2), when aligning the polarization of the light source in the transmission experiment parallel with the polarization of the laser used for the writing, we see a sharp decrease in transmission for photon energies in the range of 1.8–2.0 eV. When turning the polarizer to the direction perpendicular to the laser writing, this feature is strongly suppressed. In Fig. <ref>.c we show, for the 5 nm sample, that the position of this feature is also dependent on the wavelength used for the illumination, with longer wavelengths producing a feature deeper in the red part of the spectrum. For the 7 nm (and 6 nm) samples this feature is less pronounced at the lower powers used, but it still becomes apparent for sufficiently high powers. This can be understood by comparing the morphologies in Fig. <ref>.b, as the thicker gold films seem generally less perturbed by the laser illumination. We explain this by the fact that the 6 and 7 nm films appear to be above the percolation threshold, and as such higher levels of laser power are needed to fully separate the particles, since heat will be more efficiently conducted away from its injection point in a fully connected system<cit.>. Another observed trend is that the central position of the transmission feature tends to blueshift for increasing levels of laser power used for the writing. We attribute this to two effects: first, the individual particles become shortened at the higher powers of illumination, due to increasing spherification of the gold at higher temperatures; and secondly, the thickness of the gold particles likewise increases, as the volume of gold that was previously covering the flat surface now accumulates into thicker particles (assuming only minimal gold evaporation).

§.§ Statistical analysis of EELS intensity distributions.

We have shown that laser-induced reshaping of the metal films has a strong effect on their optical far-field properties. In order to investigate the plasmonic properties of the individual particle clusters and gaps, we recorded high-resolution EELS maps of the samples. Due to the highly random nature of the film morphologies, the specific detailed distribution of the electric fields associated with the plasmon resonances is difficult to reproduce, making qualitative comparisons between samples difficult (example EELS maps are available in Fig. 3.a). Secondly, comparisons between large sets of EELS maps for such random geometries can become very overwhelming to interpret. However, due to the self-similar and isotropic nature of the films, a quantitative statistical analysis of sufficiently large regions of the films can provide a reproducible and comparable probability distribution function (PDF) of the EELS intensity for a specific film morphology. In order to construct PDFs of our EELS intensities, we take advantage of the methods of previous theoretical and experimental works on self-similar and fractal structures<cit.>. In short, for each spectral image we can extract the resonance energy and EELS intensity of the measured plasmons. By then binning the found intensities we form a distribution of the intensities in the spectral image (i.e. a histogram). For each intensity value, this PDF then gives us the probability of finding this EELS intensity in the image (see methods for more details).
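As an illustration of this binning procedure, here is a minimal sketch of how such an intensity PDF could be assembled from the fitted peak lists (the array names are our assumptions; the bin count follows the methods section):

```python
import numpy as np

def eels_intensity_pdf(peak_energies, peak_intensities, e_lo, e_hi, n_bins=30):
    """Histogram of fitted EELS peak intensities within one energy window.

    peak_energies, peak_intensities: 1D arrays of fitted Gaussian centres
    and amplitudes, pooled from one spectral image (assumed non-empty).
    Returns the bin centres and the probability of each intensity bin.
    """
    sel = (peak_energies >= e_lo) & (peak_energies < e_hi)
    counts, edges = np.histogram(peak_intensities[sel], bins=n_bins)
    pdf = counts / counts.sum()              # normalize to probabilities
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, pdf
```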
The EELS intensity gives the probability of an electron losing a certain amount of energy along its trajectory through the sample, and it is proportional to the integral of the electric field induced in the sample along this trajectory by the electron<cit.>. As such, the EELS intensity is not itself a direct measure of the plasmonic electric field amplitude, but the two are strongly related<cit.>. We use this to justify the comparison of EELS intensities to previous measurements and calculations of the near-field intensities of semi-continuous metal films.

To extract the plasmon resonance energies and peak intensities from our background-corrected spectral images, we perform an iterative series of Gaussian fits on the individual spectra. First, a set of energy ranges of interest is determined by taking an average of the full spectral image, and within these energy ranges individual Gaussian functions are fitted to a smoothing spline constructed from the data. The parameters of these fits are then used as initial guesses to fit a sum of Gaussian functions to the full range of the individual spectra's datasets (see methods for more details). An example of these sequential fits can be seen in Fig. <ref>.b. From the extracted peak central energies we can construct histograms of how the resonances are distributed in energy, and from the found peak intensities we can construct distribution functions of the EELS intensities in terms of their spectral position (Fig. <ref>.c–f, and supplementary Figs. 3–5).

For all of the films we see that, prior to optical reconfiguration, the central energies of the detected plasmon resonances form a wide continuum with a slight peak in the distribution around 2.1–2.2 eV. The position of this peak in the distribution seems to redshift slightly for increasing film thickness. The constructed PDFs for the intrinsic films also show that the near-infrared and red parts of the spectrum (1.3–1.8 eV) contribute with higher relative EELS intensities, in agreement with prior experiments and the scaling theory for semi-continuous metal films, which predicts higher field enhancement for longer wavelengths<cit.>. As the samples get subjected to high laser powers, we see a dramatic redistribution of the resonance energies, as well as of the intensity distributions (results for the full range of powers on the three film samples are available in supplementary Figs. 3–5). The effect is especially pronounced for the 5 nm sample, Fig. <ref>.c and e, where increasing levels of illumination power gradually reshape the PDFs from the normal distribution expected for these kinds of films towards the scaling power-law of isolated dipoles<cit.>. We explain this from the morphology images in Fig. <ref>.b and our toy model in Fig. <ref>.c–e. At the high laser powers we see the inter-particle gaps increase significantly, and the resulting morphology consists mainly of isolated particles. As a result, the inter-particle coupling present in the intrinsic films has been lifted. This isolation, and the reshaping into thicker and more spherical particles, also explains why we generally see strong blueshifting of the resonance energies in Fig. <ref>.c. For the 7 nm sample we see a distinctly different morphology and resonance behaviour compared to the 5 nm sample. For all levels of laser power the 7 nm sample remains a connected structure forming large networked clusters, where the small gaps that previously dominated the morphology have either fused or opened fully.
The removal of these small features and their replacement by large connected clusters helps us to understand why the illuminated film seems to show a large increase in red and near-infrared modes in Fig. <ref>.d. From the fact that the structures never become truly isolated, we can understand why the EELS intensity distributions for the illuminated parts do not deviate as dramatically in shape from the intrinsic PDFs as in the 5 nm case. Interestingly, we are able to see a large increase in the relative EELS intensities from the ∼1.9 eV part of the spectrum in the illuminated parts of the 5 nm film, when compared with the intrinsic films. This increase in intensity also becomes larger for increasing powers of laser illumination, and it is worth noting that this energy corresponds to the reduction in transmission observed for the parallel polarization for the 5 nm sample in Fig. <ref>.a.

§.§ Polarization dependence.

In order to qualitatively investigate the polarization dependence of the extinction features seen in the transmission spectra of Fig. <ref>, we have constructed EELS maps for the 5 nm film samples by integrating the EEL spectra in the 1.8–2.0 eV energy range, as seen in Fig. <ref> (full energy ranges are available in supplementary Figs. 7–10). From these maps several elongated particles appear to host dipole-like plasmon resonances that are predominantly aligned with the direction of the polarization of the laser used to induce the optical reconfiguration. To verify whether these particles are dominated by a dipolar response in this energy range, we performed finite-element simulations of plane-wave excitations with two orthogonal polarizations on the same film morphologies (see methods). A comparison between the simulated field distributions and the measured EELS intensities can be seen in Fig. <ref>. Because the plane-wave simulations allow us to make comparisons to the two excitation polarizations individually, we can now see how the measured plasmon resonances from the EELS data are decoupled in terms of polarization. With this we show that the suspected elongated particles are in fact behaving like strongly polarized nanorods, with resonances in the same energy range as the features measured in the transmission experiments. A one-to-one comparison of EELS and plane waves is usually difficult due to the inherent differences of the excitation sources<cit.>. The electron beam in EELS is able to excite plasmon modes that are normally inaccessible with optical fields<cit.>, and is also able to excite all different polarizations simultaneously<cit.>. However, here we are only interested in identifying the bright modes around 1.9 eV that are optically active in our transmission measurements. We expect from this comparison to identify the polarization dependence of the plasmonic modes in the reshaped films (Fig. <ref>).

§ DISCUSSION

In summary, we have characterized the morphological and plasmonic restructuring of semi-continuous gold films of nominal thicknesses 5, 6, and 7 nm when subjected to fs-laser pulses of different powers (Fig. <ref> and supplementary Fig. 1). This kind of optical reconfiguration becomes apparent in the far-field properties of the films, as a resonant feature is seen in the transmission spectra from the laser-illuminated areas of the films. When the samples are illuminated with a polarized light source that is aligned with the polarization of the laser used for the optical reconfiguration, a strong decrease in transmission is observed.
Furthermore, the spectral position of this transmission feature can be controlled by varying the laser wavelength and power when performing the optical reconfiguration (Fig. <ref>). By measuring EELS maps from selected regions of the altered and intrinsic parts of the samples, we have constructed histograms of the central energies of the plasmon resonances present in the films, as well as PDFs of the EELS intensity distributions for different resonance energies (Fig. <ref> and supplementary Figs. 2–4). From these we conclude that, generally, the resonances at longer wavelengths have higher EELS intensities in the intrinsic films, while in the altered films we observe a redistribution of the resonances that contribute the largest EELS intensities. The intensities in the reconfigured films are also generally higher overall when compared to the intrinsic films. Finally, we have investigated the origin of the polarization dependence of the features observed in the transmission experiments, by plotting EELS intensity maps at energies related to the measured decrease in transmission (Fig. <ref>). We have performed finite-element simulations to reconstruct the electric field distributions in our sample morphologies under polarized far-field excitation. We compare these to the measured EELS intensities to confirm that the resonant particles formed after laser illumination are responsible for the polarization dependence of the measured transmission spectra (Fig. <ref>).

These types of metallic films have previously been demonstrated to function as SERS substrates<cit.>. We show here that it is possible to convert the broad ensemble of different plasmonic resonances found in these films into a narrower band of resonances, making the films more selectively resonant for a specific wavelength and polarization of light. This could potentially be used to further increase the enhancement factor from such films in SERS applications, by tuning and enhancing the resonances towards the wavelength and polarization of the laser used in the Raman experiment. New sensing applications could also harness the potential of these kinds of metal films.

Finally, we have shown that by using a different laser wavelength or power when performing the optical reconfiguration it is possible to tune the wavelength of the resulting resonant particles created in the film. This could be useful for plasmonic colour printing by laser illumination, as it would be possible to inscribe pixels of different colours into the films by use of different wavelengths and powers, constructing a colour image that would be clearly visible when viewing the film through a correctly aligned polarizer. Experiments have already been performed demonstrating such concepts<cit.>, but they rely on specifically fabricated substrate structures. The ease with which these metal films can be fabricated, even over very large-scale areas, could offer an alternative low-cost substrate structure for emerging plasmonic colour printing technologies, as well as applications in ultra-dense data storage media<cit.>.

§ METHODS

§.§ Fabrication.

Thin gold films of 5, 6, and 7 nm nominal thicknesses were deposited onto 18 nm thick SiO_2 TEM membranes from Ted Pella, Inc. using an electron-beam deposition system. The gold was deposited at a constant rate of 2 Å/s, with the total deposition time defining the thickness of the final film.
Chamber pressure was maintained at ∼10^-5 mbar and deposition was onto room-temperature substrates.

§.§ Optical reconfiguration.

The laser illumination is performed with a two-photon photoluminescence (TPL) setup, consisting of a scanning optical microscope in reflection geometry, built on a commercial microscope with a computer-controlled translation stage. The linearly polarized light beam from a mode-locked pulsed (pulse duration ∼200 fs, repetition rate ∼80 MHz) Ti-Sapphire laser (wavelength λ = 730–860 nm, δλ ∼10 nm, average power ∼300 mW) is used as an illumination source at the fundamental harmonic (FH) frequency. After passing an optical isolator (to suppress back-reflection), a half-wave plate, a polarizer, a red colour filter and a wavelength-selective beam splitter, the laser beam is focused on the sample surface at normal incidence with a Mitutoyo infinity-corrected long-working-distance objective (100×, NA = 0.70). The FH resolution at full-width-half-maximum is ∼0.75 μm. The half-wave plate and polarizer allow accurate adjustment of the incident power. The laser illumination was done with the following scan parameters: integration time (at one point) of 50 ms, speed of scanning (between the measurement points) of 20 μm s^-1, and scanning step size of 350 nm. The incident power was within the range 0.5–2.0 mW. Unless stated otherwise, the excitation wavelength was fixed at 740 nm. We should mention that with this laser power, morphology changes could only be induced in the pulsed regime of the laser.

§.§ Optical spectroscopy.

Bright-field transmission spectra were recorded from the intrinsic and laser-illuminated regions of the samples on the TEM membranes. A custom spectroscopy setup built around a Nikon Eclipse Ti-U inverted microscope was used. The system is fitted with a halogen white-light source with peak emission at 675 nm. The light is collected by a CFI S Plan Fluor ELWD objective from Nikon (60×, NA = 0.70), and the spectra are recorded using a Shamrock 303i spectrometer equipped with a Newton 970 EMCCD. A LPVISE200-A visible linear polarizer from Thorlabs is placed between the light source and the sample stage, allowing for polarized illumination. Spectra are collected for polarizations perpendicular and parallel to the optically induced changes in the gold films. The transmission spectra are corrected for the spectral profile of the halogen lamp by recording a reference spectrum through a glass slide similar to the ones the membranes are mounted on, for both polarizer positions. After dividing the intensities point by point by the reference, the final spectrum is obtained by averaging over the individual laser-modified areas (∼20 pixels on the CCD) and normalizing with the maximum value for comparison.

§.§ STEM and EELS measurements.

The EELS measurements were performed with a FEI Titan TEM equipped with a monochromator and a probe aberration corrector. The microscope was operated in STEM mode at an acceleration voltage of 120 kV, with a probe diameter of 0.5 nm and with zero-loss peak (ZLP) full-width-half-maximums of ∼0.2 eV. For spectral images, 500×500 nm^2 areas were scanned with 5–8 nm step sizes, using larger step sizes for the films illuminated at greater laser powers. Before imaging, the samples were cleaned in an O_2 plasma for 45 s.

From the ZLP, its central position is determined and the energy scale of the spectra is shifted to zero at this position. The spectra are then normalized by integrating the background signal found at the higher energies of the spectrum.
After normalization the ZLP is subtracted by performing a series of power-law fits, and the fit producing the minimum residual is picked. The full background-corrected spectral image is averaged together, and a sum of Gaussian functions is fitted to the now-emerging 'primary modes' of the image, i.e. the modes sufficiently common or intense that they contribute overwhelmingly to the total average of the spectral image. A smoothing spline is then constructed for each individual spectrum's data. These smoothing splines are then segmented within the full-width-half-maximum of the Gaussian functions fitted to the 'primary modes', and within each of these data segments a Gaussian function is fitted. The quality of these individual fits is then evaluated to determine whether or not a peak is present in this energy range of the spectrum. From each segment of the spectrum, we thus learn whether a peak is present, and if so, what its amplitude and central position are. Using these values as initial guesses, a sum of Gaussian functions is then fitted to the full range of the spectrum's dataset, and the plasmon resonance energies and EELS intensities are extracted from these fits.

§.§ Statistical analysis.

Using the central positions of the Gaussian fits described above, a histogram can be constructed of the identified central energies, with bin sizes of 0.01 eV, matching the energy resolution of the TEM's detector. The PDFs of discretized windows of the energy range are constructed from the corresponding amplitudes of the Gaussian fits with central energies in those windows. Each energy window is 0.10 eV wide, and to construct the PDF the fitted amplitudes are binned into 30 equally wide bins of EELS intensities. We have estimated a base-level intensity as the median of the 50 lowest identified amplitudes from each of the spectral images for each sample. All the identified intensities for each energy window are then normalized with respect to this base-level intensity, to be represented in terms of a relative EELS intensity.

§.§ Simulations.

Full-wave 3D simulations of the gold structures were carried out using the finite-element software package COMSOL Multiphysics (v5.1). The simulation domain was truncated with perfectly matched layers on all sides. The system was solved in a scattered-wave formulation, with the analytical solution for the three-layer system (air-glass-air) as the background field. A custom Python script was used, which generated the simulation geometry from the outlines that were extracted from binarized STEM dark-field images (see supplementary methods). The particles were represented as straight prisms in the 3D geometry. The relative thicknesses, t_r, of the gold films have been calculated by t_r = log(I_tot/I_ZLP), where I_ZLP is the integral of the ZLP in the EEL spectrum, and I_tot is the total integral of the EEL spectrum, ranging from 0–17 eV. Calculating this for each pixel in our spectral images, and performing a plane background correction to this, provided the relative height map for the samples. By averaging the height-map values inside the particle contours, the relative heights of the particles were obtained. The tallest particles were set to a height of 28 nm, as this provides the best correlation to the EELS measurements. An example of obtained particle contours and their relative heights is shown in supplementary Fig. 6.
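To illustrate the relative-thickness estimate just described, here is a minimal sketch of how t_r maps could be computed from a spectral image (the array layout and the ZLP integration window are our assumptions for the example):

```python
import numpy as np

def relative_thickness_map(spectra, energies, zlp_max=0.5, e_max=17.0):
    """t_r = log(I_tot / I_ZLP) per pixel of an EELS spectral image.

    spectra: (ny, nx, n_e) EELS counts; energies: (n_e,) energy-loss axis in eV.
    zlp_max: assumed upper integration limit for the zero-loss peak (eV).
    """
    zlp_mask = np.abs(energies) <= zlp_max
    tot_mask = (energies >= 0.0) & (energies <= e_max)
    i_zlp = spectra[..., zlp_mask].sum(axis=-1)
    i_tot = spectra[..., tot_mask].sum(axis=-1)
    t_r = np.log(i_tot / i_zlp)
    # A plane background correction (least-squares fit of a plane to t_r)
    # would follow here, as described in the text.
    return t_r
```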
Simulations for Fig. <ref>.c–e are based on a similar setup. One or two nanorods are placed on the glass membrane. Calculations for Fig. <ref>.c are done with a single nanorod that is composed of a rectangular prism (width w, height h, length l) with semi-cylindrical caps (radius w/2) at both ends. A series of calculations was done for various lengths of the particle. For each particle size an absorption spectrum was calculated and the corresponding resonance peak was found. The particle size was varied to model the melting process, but the particle volume (V = h·(wl + πw^2/4)) was kept constant, either by scaling the particle height (h) or the particle width (w).

For Fig. <ref>.d we simulated two nanorods with fixed centre positions. Consequently, as the particle length was reduced, the gap between the nanorods increased. The particle height was scaled to keep the volume of the particles constant as they decreased in length.

For Fig. <ref>.e the particle length was kept constant but the distance between the two particles was varied. For positive gap sizes the particle shape stayed constant, but for negative gap sizes (particle overlap) the particle height was increased to keep the volume of the merged particle equal to the volume of the two initial particles.

Acknowledgments. The authors gratefully acknowledge financial support from CONACyT Basic Scientific Research Grant 250719, and the Danish Council for Independent Research–Natural Sciences (Project 1323-00087). S. I. B. acknowledges the European Research Council, Grant 341054 (PLAQNAP). N. A. M. is a Villum Investigator supported by Villum Fonden, Grant 1401500. Center for Nano Optics was financially supported by the University of Southern Denmark (SDU 2020 funding), while Center for Nanostructured Graphene (CNG) was funded by the Danish National Research Foundation (CoE Project DNRF103). T. R. acknowledges support from the Archimedes Foundation (Kristjan Jaak scholarship) and Villum Fonden (DarkSILD project). C. F. would like to personally thank Søren Raza, Kåre Wedel Jacobsen, and Johan Rosenkrantz Maack for many valuable discussions during this work.

Author contributions. N. S. and C. F. performed the fabrication and EELS measurements, and C. F. performed the EELS and image data analysis. T. R. performed the numerical simulations. M. G. performed the transmission measurements and data analysis. S. N. and J. B. performed the laser illumination of the samples. A. L., S. X., S. I. B., N. A. M., and N. S. supervised the project. C. F. drafted the manuscript, and all authors contributed to its writing.

Conflict of interests. The authors declare no competing financial interests.

Correspondence. Correspondence and requests for materials should be addressed to C. F. (email: [email protected]) or N. S. (email: [email protected]).
http://arxiv.org/abs/1704.08550v1
{ "authors": [ "Christian Frydendahl", "Taavi Repän", "Mathias Geisler", "Sergey M. Novikov", "Jonas Beermann", "Andrei Lavrinenko", "Sanshui Xiao", "Sergey I. Bozhevolnyi", "N. Asger Mortensen", "Nicolas Stenger" ], "categories": [ "physics.optics", "cond-mat.other" ], "primary_category": "physics.optics", "published": "20170427131803", "title": "Optical reconfiguration and polarization control in semi-continuous gold films close to the percolation threshold" }
http://arxiv.org/abs/1704.08507v1
{ "authors": [ "Cesare Bracco", "Carlotta Giannelli", "Alessandra Sestini" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170427111053", "title": "Adaptive scattered data fitting by extension of local approximations to hierarchical splines" }
http://arxiv.org/abs/1704.08277v2
{ "authors": [ "Mohamed M. Anber", "Loïc Vincent-Genod" ], "categories": [ "hep-th", "hep-lat", "hep-ph" ], "primary_category": "hep-th", "published": "20170426181715", "title": "Classification of compactified $su(N_c)$ gauge theories with fermions in all representations" }
Department of Physics, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel Department of Physics, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel The Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel

The interplay of almost degenerate levels in quantum dots and molecular junctions, with possibly different couplings to the reservoirs, has led to many observable phenomena, such as the Fano effect, transmission phase slips and the SU(4) Kondo effect. Here we predict a dramatic repeated disappearance and reemergence of the SU(4) and anomalous SU(2) Kondo effects with increasing gate voltage. This phenomenon is attributed to the level occupation switching which has been previously invoked to explain the universal transmission phase slips in the conductance through a quantum dot. We use analytical arguments and numerical renormalization group calculations to explain the observations, and discuss their experimental relevance and dependence on the physical parameters.

Abrupt disappearance and reemergence of the SU(2) and SU(4) Kondo effects due to population inversion

Yigal Meir

December 30, 2023

=====================================================================================================

The coexistence of spin-degenerate levels with different couplings to the leads is ubiquitous in quantum dots (QDs), quantum wires and molecular junctions. It has been pointed out early on that such a coexistence may develop in deformed QDs <cit.>, with important consequences for the relation between the conductance and the transmission phase of consecutive Coulomb-blockade (CB) peaks. Later on it has been demonstrated <cit.> that such a coexistence is, in fact, a generic effect in interacting QDs. In fact, the interplay between levels with weak coupling to the leads and a strongly coupled one has been invoked <cit.> to explain the intriguing experimental observations <cit.> of sharp drops in the transmission phase through a QD between CB peaks. These drops have been attributed to "level occupation switching" (LOS) - the abrupt emptying of the strongly coupled level and the filling of a corresponding weakly coupled level, or vice versa, as the gate voltage is continuously varied. These studies have been further supported by a direct observation <cit.> of the Fano effect, resulting from the interference between a wide and a narrow level in a single quantum dot. Simultaneous transport through several molecular levels has also been demonstrated <cit.> in molecular junctions, and the interplay of weakly and strongly coupled levels has been predicted <cit.> to lead to observable effects in the CB peak structures.

In a seemingly different context, the coexistence of almost degenerate levels has been argued <cit.> to give rise to SU(4) Kondo physics <cit.>, which has indeed been observed in carbon nanotubes <cit.>, in atoms <cit.> and in single <cit.> and double <cit.> semiconductor quantum dots. In all these systems, the degenerate levels are not necessarily coupled equally to the leads <cit.>. However, in spite of the plethora of studies of the physics of LOS in QDs on one hand, and of SU(4) Kondo physics in such systems on the other hand, the interplay of these two effects has not been addressed so far. In this letter we predict dramatic abrupt suppression and reentrance of the Kondo effect due to LOS.
We present numerical renormalization group (NRG) calculations, backed up by analytical arguments, and show that, in the presence of two spin-degenerate levels with very different couplings to the leads, the enhanced conductance due to the Kondo effect is abruptly suppressed as the gate voltage is varied (Fig. 1), only to reemerge just as abruptly at higher gate voltages. This disappearance and reemergence may occur more than once. Below we elaborate on the physics behind this effect, on its dependence on temperature, on the ratio of the couplings of the two levels to the leads, on their energy difference and on other physical parameters.

The Hamiltonian that describes the two-level QD is given by
H_QD = ∑_iσ ϵ_i n̂_iσ + ∑_i U_i n̂_i↑ n̂_i↓ + U_12 n̂_1 n̂_2,
where i = 1, 2 denotes the level index, n̂_iσ = d^†_iσ d_iσ, n̂_i = ∑_σ n̂_iσ (d^†_iσ creates an electron on the dot in level i with spin σ), and spin degeneracy has been assumed (i.e. no magnetic field). We will first concentrate on the fourfold degenerate case, ϵ_i = ϵ and U_1 = U_2 = U_12 = U. Each one of the levels couples to a different linear combination of states in the leads, which, for simplicity, we assume to be orthogonal. The resulting Hamiltonian is then given by
H = H_QD + ∑_iσ, k∈L,R ϵ_ik c^†_iσk c_iσk + ∑_iσ, k∈L,R (V_ik d^†_iσ c_iσk + h.c.),
where c^†_iσk creates an electron with spin σ in the leads in the momentum state k that couples to level i in the dot. Again, for simplicity, the tunneling amplitude is chosen to be momentum (and spin) independent, but different between the two levels, V_ik = V_i. With this separation the calculation of the linear response current can proceed separately for each channel using the Meir-Wingreen formula <cit.>, in terms of the spectral function of each level. The spectral function and the expectation values are calculated using a density matrix numerical renormalization group (DM-NRG) procedure [We used the open-access Budapest Flexible DM-NRG code, http://www.phy.bme.hu/dmnrg/; O. Legeza, C. P. Moca, A. I. Toth, I. Weymann, G. Zarand, arXiv:0809.3143 (2008) (unpublished)]. Assuming equal couplings to the left and right leads, and equal density of states ρ in the two leads, each level only couples to a specific superposition of the left and right lead wavefunctions, and effectively one needs to solve a single-lead problem per level, characterized by the respective couplings of the two levels, Γ_i = πρV_i². We assume a constant ρ, with a symmetric band around the Fermi energy, with bandwidth D. In the following we set D to be the unit of energy.

Fig. <ref>a depicts the conductance as a function of the chemical potential (gate voltage), for different values of Γ_2/Γ_1. Each Γ_i defines an effective SU(2) Kondo temperature T_K^(i). When Γ_2 = Γ_1, one reproduces the standard SU(4)-symmetric Anderson model conductance plot: the conductance G rises from zero to G = 2e²/h (where e is the electron charge and h the Planck constant), then to G = 4e²/h, in agreement with the Friedel sum rule, G = e²/h ∑_iσ sin²(π n_iσ) (where n_iσ = <n̂_iσ>), which is accurate at such low temperatures. However, when Γ_2 is reduced, LOS starts to take place, resulting in several conductance dips near the mid point. For example, the curve for Γ_2 = 0.3 Γ_1 exhibits a small peak near the first switching event (at μ − ϵ ≃ 0.35), and a higher peak near the second switching event (at μ − ϵ ≃ 0.6), then decreases towards zero, only to abruptly rise again to its unitarity value (G = 4e²/h).
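The charging structure underlying the conductance curves just described can be made explicit in the atomic limit Γ_i → 0 of H_QD. The following minimal sketch (ours, not part of the DM-NRG calculation; the values of ϵ and U are illustrative assumptions) enumerates the 16 Fock states of the isolated two-level dot and selects the grand-canonical ground state as a function of μ:

# Sketch: charging staircase of the isolated dot H_QD in the atomic limit Gamma_i -> 0.
# We enumerate all 16 occupation states (n_1up, n_1dn, n_2up, n_2dn) and minimize
# E - mu*N. The values of eps and U below are illustrative assumptions.
import itertools
import numpy as np

eps, U = 0.0, 0.3   # fourfold degenerate case: U_1 = U_2 = U_12 = U

def ground_state_occupations(mu):
    best, best_n = np.inf, None
    for n1u, n1d, n2u, n2d in itertools.product((0, 1), repeat=4):
        n1, n2 = n1u + n1d, n2u + n2d
        E = eps*(n1 + n2) + U*(n1u*n1d + n2u*n2d) + U*n1*n2
        if E - mu*(n1 + n2) < best - 1e-12:
            best, best_n = E - mu*(n1 + n2), (n1, n2)
    return best_n

for mu in np.linspace(-0.15, 1.05, 9):
    print(f"mu = {mu:5.2f}   (n1, n2) = {ground_state_occupations(mu)}")

The total charge increases in four steps separated by U, consistent with the four CB peaks; the split of the charge between (n_1, n_2) at each step is degenerate in this limit and is resolved arbitrarily by the code, whereas in the full problem it is the tunneling-induced energy gain that selects the configuration and drives the LOS.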
The effect is even more dramatic for smaller Γ_2, where T_K^(2) < T. In this regime, the conductance rises and plateaus at its Kondo value, G = 2e²/h, only to drop sharply to almost zero at a specific value of the chemical potential, slightly above μ = ϵ + U/2. Then the conductance remains at zero, goes through a narrow peak of a peculiar shape (around μ = ϵ + U; see below), and eventually rises sharply again to around the Kondo value below the mid-point μ = ϵ + 3U/2. Since the model is symmetric around that point, the same behavior is reflected around μ = ϵ + 3U/2.

Fig. <ref>b depicts how this effect depends on temperature, for the case Γ_2 = 0. At high temperature, T ≫ T_K^(1), one reproduces the CB peak structure. Note that in spite of only one level being coupled to the leads, there are 4 CB peaks, all of similar width, indicating that in each case transport is through the strongly coupled level <cit.>. However, at lower temperatures, switching events lead to an enhancement of the conductance by the Kondo effect, but only in specific regions of the chemical potential, giving rise to the sharp drops in the conductance mentioned above.

This peculiar behavior of the conductance can be understood by combining the Friedel sum rule with the physics of LOS. Fig. 2a depicts the occupations of the two levels, for Γ_2 = 0, at the lowest temperature of Fig. 1b, and the resulting conductance, using the Friedel sum rule for the Γ_2 = 0 case: G = e²/h ∑_σ sin²(π n_1σ).

The physics of LOS is relatively well understood <cit.>. Consider, for example, the limit of Γ_2 = 0. When μ lies between the first two CB peaks, there is a competition between two configurations: the partially occupied wide level and the fully occupied narrow level. Due to tunneling (Γ_1), the energy of the former is reduced by an electron process (e-process), tunneling of the electron in that level to the leads, and by a hole process (h-process), tunneling of an electron from the leads into the dot, making it doubly occupied. On the other hand, the energy of the second configuration is reduced by lead electrons of either spin tunneling into the empty wide level, i.e. twice the h-process. As μ crosses the symmetry point μ = ϵ + U/2, the reduction in energy due to the h-process becomes larger than that of the e-process. As a result, it eventually becomes energetically favorable to occupy the narrow level instead of the wide level, and there is a LOS event. This is depicted in Fig. <ref>a, where we plot the occupations of the two levels as a function of the chemical potential. We see that, similarly to the occupation switching event described above, occurring at μ ∼ 0.33D for the parameters used, there are several more switching events for similar reasons. As the electrical current flows mainly through the strongly coupled level, a sudden switch in its occupation leads to an abrupt change in the conductance, in accordance with the Friedel sum rule. Indeed, given the occupations, the conductance calculated using the Friedel sum rule (Fig. <ref>a) agrees perfectly with the direct calculation of the conductance (Fig. <ref>). Similar arguments can be applied to the regime where transport occurs through both levels, e.g. the curve Γ_2 = 0.3 Γ_1 in Fig. <ref>, where both Kondo temperatures obey T_K^(i) ≫ T. In this case, when the occupations switch from around (n_1, n_2) = (1, 0) to around (0, 1), there is a switch from the Kondo effect due to level 1 to that due to level 2 (Fig. <ref>b), resulting in a small hump in the conductance, visible in Fig. <ref>a.
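The mechanism just described, an occupation jump fed into the Friedel sum rule, can be illustrated in a few lines. In the sketch below the occupation profile is a schematic stand-in of ours for the DM-NRG result, not actual data; the switching point and all other numbers are illustrative:

# Sketch: abrupt conductance drop from the Friedel sum rule,
# G = (e^2/h) * sum_sigma sin^2(pi * n_1sigma),
# fed with a schematic occupation profile containing an LOS jump.
import numpy as np

def n1_per_spin(mu, mu_switch=0.33):
    n = 0.5/(1.0 + np.exp(-(mu - 0.1)/0.02))    # smooth rise to the Kondo plateau n ~ 1/2
    return np.where(mu > mu_switch, 0.98, n)    # LOS: the wide level becomes (almost) full

mu = np.linspace(0.0, 0.5, 11)
G = 2.0*np.sin(np.pi*n1_per_spin(mu))**2        # two spin channels, in units of e^2/h
for m, g in zip(mu, G):
    print(f"mu = {m:4.2f}   G = {g:5.3f} e^2/h")

On the plateau, n_1σ ≈ 1/2 gives sin²(π/2) = 1 in each spin channel, i.e. G = 2e²/h, while n_1σ ≈ 1 after the switch gives G ≈ 0, reproducing the abrupt suppression.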
On the other hand, when the occupations switch from (2, 0) to (1, 1) there is an abrupt jump in the conductance from almost zero to the coexisting Kondo value of G = 4e²/h. The crossover from SU(4) physics to SU(2) physics with decreasing Γ_2/Γ_1 is manifested in the reduction of T_K from T_K^SU(4) to T_K^SU(2) (inset of Fig. <ref>a).

It is interesting to note the unusual shape of the conductance peak at μ = ϵ + U (μ ≃ 0.6 in Fig. <ref>), where the narrow level abruptly empties. Consider first the case Γ_2 = 0. As the chemical potential approaches the value μ = ϵ + U, the wide level gradually starts to fill; its occupation and, as a result, the total conductance rise as the tail of a Lorentzian of width Γ_1. At the switching event the occupation of the wide level jumps to almost 2, and the conductance starts decreasing, again as the tail of a Lorentzian of width Γ_1. Thus, the line shape of this peak, in the case of Γ_2 = 0, consists of a cusp formed by the intersection of the tails of two shifted Lorentzians. For a finite Γ_2, the LOS events will occur on this scale, and we expect an additional narrow Lorentzian of width Γ_2 on top of the line shape described above.

Unlike the perturbative calculation <cit.>, which predicts the LOS event exactly at the midpoint between the CB peaks, μ = ϵ + U/2, in the NRG calculation the switching occurs at a higher chemical potential, which shifts to even higher μ as Γ_1 increases, as can be seen in Fig. <ref>. In fact, for values of Γ_1 larger than ≃ U/10, the anomalous CB peak at μ = ϵ + U turns into a dip. For such large Γ_1 the LOS events are at μ = ϵ + U and μ = ϵ + 2U, where the narrow level becomes occupied by one and two electrons, respectively. Between these points the occupation of the wide level rises continuously from n_1 = 1/2 to n_1 = 3/2, and drops sharply back to n_1 = 1/2 at the switching point. Thus, in this regime, we find another atypical situation: three consecutive wide Kondo peaks, corresponding to the three possible occupation states of the narrow level.

The results we have shown so far were for the fully degenerate case, ϵ_1 = ϵ_2 and U_12 = U_1 = U_2. Fig. <ref> depicts the conductance as one varies Δϵ ≡ ϵ_2 − ϵ_1, or U_12/U (where U_1 = U_2 = U remain equal). As Δϵ increases from zero (Fig. <ref>a), the switching point between the first two CB peaks shifts to a higher chemical potential, until it reaches the second CB peak and disappears, producing a seemingly standard single-level conductance plot in the Kondo regime. As one expects, at this value of Δϵ, for μ ≳ ϵ + U, as the dot becomes doubly occupied (both electrons occupy the wide level), there should be no Kondo effect. However, as can be seen in Fig. <ref>a, there is reentrance into the Kondo regime at a higher chemical potential (e.g. μ = 1 for Δϵ = 0.0075), which again disappears and reemerges once more (e.g. at μ = 1.6 for the same Δϵ). These reentrances into the Kondo regime occur exactly where the LOS events occur: whenever the narrow level gets occupied by an additional electron, the energy of the strongly coupled level shifts up, and its occupation is reduced to below double occupation, leading to the reappearance of the Kondo effect <cit.>. When Δϵ is negative (not shown) one finds the mirror image of the positive-Δϵ chemical-potential dependence, as now the narrow level is preferentially filled.
Thus, even when breaking the energy degeneracy, one still finds the abrupt transitions and the reentrances that one observes in the degenerate case. Similarly, when U_12/U is reduced from unity the LOS still persists, though the first switching event shifts to a lower chemical potential. This makes the first plateau (where n_2 = 0) narrower and the regions where n_2 = 1 wider. For smaller U_12/U, when the second plateau becomes wide enough and the second n_2 = 0 region disappears entirely, the conductance exhibits Kondo peaks that are again abruptly suppressed and then reemerge as the narrow level becomes occupied, with each peak corresponding to a different value of n_2 = 0, 1, 2.

The observation of the various predictions made in this paper can be checked in different physical setups. One such system would be a single quantum dot, where the physics of level occupation switching has already been demonstrated by the universality in the transmission phase and its abrupt drop between Coulomb-blockade peaks <cit.>. In such a system, since both levels occupy the same dot, one expects U_1 ≃ U_2 ≃ U_12. Thus, if one can reach a regime where the Kondo temperatures associated with the two levels obey T_K^(1) ≫ T ≫ T_K^(2), then one should observe the physics described in this paper. Experimentally, by increasing the coupling of the quantum dot to the leads, the π/2 phase shift associated with the Kondo effect has indeed been observed <cit.>, indicating that T_K^(1) > T. However, in that regime no abrupt phase drops have been observed, indicating that the condition T > T_K^(2) has not been met. In order to fulfill this latter condition, one may either tune the temperature to that regime, or select a narrow level with a smaller Γ_2.

Another relevant example is transport through nanotube quantum dots <cit.>, where each orbital level is 4-fold degenerate. Again, in this setup one expects, for the same reason, that the Coulomb energies will be of the same order of magnitude. By an appropriate application of a magnetic field and gate voltage one can tune the system to have simultaneous transport through two different orbital states, with different couplings to the leads. If the Zeeman splitting is much larger than the coupling of the levels to the leads, then this system will display the orbital SU(2) Kondo effect, as has been observed in Ref. nygard2000. On the other hand, if the width of the strongly coupled level is larger than the Zeeman splitting, one may observe the abrupt disappearance and reemergence of the anomalous SU(4) physics detailed in this paper.

Another relevant physical system is the double quantum dot system that was utilized to observe the SU(4) Kondo effect <cit.>. In this system the two separate quantum dots play the role of the two levels in our theory. Experimentally, one can use gate voltages to tune the QD energies and couplings to the leads, i.e. the parameters Γ_i and ϵ_i. Thus, if one tunes to the already observed SU(4) fixed point, and then gradually reduces the ratio Γ_2/Γ_1, we predict a gradual transition from SU(4) Kondo to SU(2) Kondo behavior and the eventual emergence of the abrupt suppression and reemergence of the Kondo peak, as detailed in Fig. <ref>. One problem that may be relevant to the two-dot setup is that the inter-dot Coulomb energy U_12 is typically smaller than U_i, the intra-dot one. In principle, in order to achieve degeneracy, one may tune the difference in the energies of the two dots to compensate for the difference in the Coulomb energies.
This physics will be explored elsewhere.

To conclude, we have presented physical arguments and numerical-renormalization-group calculations that demonstrate a dramatic suppression and subsequent reemergence of the SU(4)/SU(2) Kondo effect in quantum dots that contain two spin-degenerate levels with very different couplings to the leads. Since this has been claimed to be a generic phenomenon in quantum dots and molecular junctions, we expect our results to have a wide range of applicability. In particular, the experiments which have already observed the SU(4) Kondo effect, either in carbon nanotube quantum dots or in semiconductor quantum dots, could be employed to study the physical regime discussed in this paper, and to critically check our predictions.

We thank P. Moca and G. Zarand for scientific discussions and help with the DM-NRG code. YM acknowledges support from ISF grant 292/15.
http://arxiv.org/abs/1704.08271v1
{ "authors": [ "Yaakov Kleeorin", "Yigal Meir" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170426180541", "title": "Abrupt disappearance and reemergence of the SU(2) and SU(4) Kondo effects due to population inversion" }
On the modelling of shallow turbidity flows

Valery Yu. Liapidevskii: Novosibirsk State University and Lavrentyev Institute of Hydrodynamics, Siberian Branch of RAS, 15 Av. Lavrentyev, 630090 Novosibirsk, Russia ([email protected], https://www.researchgate.net/profile/V_Liapidevskii/)
Denys Dutykh (corresponding author): LAMA, UMR 5127 CNRS, Université Savoie Mont Blanc, Campus Scientifique, F-73376 Le Bourget-du-Lac Cedex, France, and Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France ([email protected], http://www.denys-dutykh.com/)
Marguerite Gisclon: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France ([email protected], http://www.lama.univ-savoie.fr/~gisclon/)

In this study we investigate shallow turbidity density currents and underflows from a mechanical point of view. We propose a simple hyperbolic model for such flows. On one hand, our model is based on very basic conservation principles. On the other hand, the turbulent nature of the flow is also taken into account through the energy dissipation mechanism. Moreover, the mixing with the pure water, along with sediment entrainment and deposition processes, is considered, which makes the problem dynamically interesting. One of the main advantages of our model is that it requires the specification of only two modelling parameters: the rate of turbulent dissipation and the rate of the pure water entrainment. Consequently, the resulting model turns out to be very simple and self-consistent. This model is validated against several experimental data and several special classes of solutions (such as travelling, self-similar and steady) are constructed. Unsteady simulations show that some special solutions are realized as asymptotic long time states of dynamic trajectories.

Key words: turbidity currents; density flows; shallow water flows; conservation laws; finite volumes; travelling waves; self-similar solutions
MSC (2010): 76T10 (primary), 76T30, 65N08 (secondary)
PACS (2010): 47.35.Bb (primary), 47.55.Hd, 47.11.Df (secondary)

§ INTRODUCTION

Underwater turbidity currents are sediment-laden underflows that play an important rôle in the morphology of the continental shelves (more generally, of ocean bottoms) and in the global sediment cycle leading to the formation of hydrocarbon reservoirs. We refer to <cit.> for a self-contained and comprehensive account of the theory of gravity currents and intrusions. The presence and entrainment of sediments differentiate them from stratified flows due to temperature or salinity differences. The main physical mechanisms include the deposition, erosion and dispersion of important amounts of heavy sediment particles. Turbidity currents are not to be confused with debris flows, which represent fast-moving masses of poorly sorted heterogeneous material where interactions among the material pieces (≈ particles) are important. Moreover, debris mix little with the ambient fluid.
Debris flows have been a mainstream topic in the scientific literature due to the hazards they pose in mountain regions (and not only).

The driving force is the gravity acceleration acting on dispersed sediment particles along steep and moderate bottom slopes. The initial perturbation is amplified by this acceleration, which in turn destabilizes the flow into shear instabilities that result in turbulent mixing and the transfer of mass and momentum. This gravity force creates the horizontal pressure gradient due to the increase of hydrostatic pressure resulting from the addition of particles. The heavy sediment particles are suspended in the mixing layer by fluid turbulence. The processes studied here are responsible for the transfer of littoral sediments to deep ocean regions. One should not disregard the destructive potential of gravity currents onto underwater structures such as pipelines, cables, etc. Turbidity currents in submarine canyons can attain surprisingly high velocities, of the order of 8 ∼ 14 m/s <cit.>. These high velocities in the downstream direction result from the self-acceleration (and self-suspension) process triggered by an appropriate initial perturbation, when more and more sediments are entrained by the flow from the bed, thus increasing the rate of work performed by gravity <cit.>. This process is sometimes referred to as the "ignition" <cit.>, which reflects the energy imbalance property of such flows. One of the important scientific questions is to determine the conditions necessary to have an igniting flow. However, the self-acceleration stage cannot continue indefinitely. Most often the bed slope drops off (due to the bed morphology) or, simply, the sediment supply ceases. The mechanism of ignition was already described in Pantin (1979) <cit.>. However, the first laboratory demonstration of self-accelerated turbidity flows took 30 more years <cit.>.

Turbidity currents are a particular case of (continuously) stratified flows and they are fundamentally different from classical density underflows <cit.>. The main difference comes from the fact that the source of the density gradient, the suspended sediment, is not conservative. The suspended sediments are free to exchange with the core layer near the sea bed. The ambient still water is also entrained into this process. These exchanges are difficult to quantify and they constitute one of the main difficulties in the modeling of such flows <cit.>. In this respect turbidity currents are fundamentally non-conservative flows by their nature. Gravity flows may occur in the atmosphere[For instance, downslope windstorms over topography in Colorado (US) were observed and examined in <cit.>.] over topography, in sub-aerial environments (avalanches, pyroclastic flows) and in sub-aqueous environments (turbidity currents) over bathymetries. They may also result from anthropogenic activities, such as when a dense buoyant industrial effluent or pollutant is released into a lake, river or ocean. In the present study we shall consider mainly sub-aqueous flows due to the abundance of available experimental data. We refer to <cit.> for excellent general reviews on this topic.

Perhaps the first serious attempts to observe turbidity currents in natural environments were performed in the late 1960s at Scripps Canyon offshore of La Jolla, California. They were reported in <cit.>. However, the flows reported in that study were so violent that the instrumentation was lost during these density currents, making the detailed analysis extremely difficult <cit.>.
The exact moment of these underwater events is unpredictable, which makes them difficult to monitor in natural environments. Most of our physical knowledge of underwater turbidity currents comes from small scale laboratory experiments <cit.>. The experiments are bound to use common liquids for practical reasons. In general, it is not possible to respect all the scalings. To give an example, we can mention the issue with particle sizes and their settling velocity. Nevertheless, taking into account the difficulties in obtaining field data, laboratory experiments are the only source of quantitative data about turbidity currents. Mathematical modelling is needed to extrapolate these experimental results to the scales on which these processes occur in nature. Nonetheless, the experiments offer a great opportunity for the verification of numerical results.

The gravity current can be divided geometrically into the flow head, body and tail. The head is shaped as an ellipse and, generally, it is higher than the flow body. In the present study we are mainly interested in the modelling of the flow head, where the most intensive mixing processes take place and which, consequently, influences the whole flow dynamics. The most advanced point of the flow head is called the front or nose.

The main difficulties in understanding the dynamics of gravity turbidity currents come from their genuinely turbulent nature. Moreover, the phenomenon is nonlinear, heterogeneous and unsteady. The flow complexity increases when the flow entrains more and more sediments in suspension. The literature devoted to the mathematical modeling of density currents is abundant. First of all, we would like to mention the classical monographs on this subject <cit.>. The first and simplest models, intended to explain the classical lock-exchange configurations, were proposed in <cit.>. These models are referred to as integral, box or 0D models, since all quantities are averaged in space. The modern approaches to the mathematical modeling of such flows were initiated in <cit.>. A dense cloud 0D model for powder-snow avalanches including non-Boussinesq and sediment entrainment effects along the avalanche path was proposed in <cit.>. Powder-snow avalanches are large-scale, finite volume release turbidity currents (in the form of large scale suspension clouds) occurring on mountain slopes. These clouds sometimes reach 100 m in height and front velocities of the order of 100 m/s. Without sediments (snow in the case of avalanches) distributed over the incline, the density current first accelerates and then decelerates without reaching important velocities. With sediment entrainment, the current can be maintained in the accelerating self-sustaining state for sufficiently long intervals of time to reach the velocities indicated above. In <cit.> a fair correlation of the avalanche velocity with the snow cover was demonstrated. The measurements of an avalanche front velocity in the Sion valley, Switzerland, demonstrate a constant increase of the front velocity with the traveled distance <cit.> (during the accelerating phase, of course). Thus, we come to the conclusion that the inclusion of the sediment entrainment effect is of capital importance to predict the correct density current front velocity.

Some recent studies devoted to sediment transport within depth-averaged models include <cit.>. This list is far from being exhaustive. The shallow water approach assumes that vertical accelerations are negligible, so that the pressure is essentially hydrostatic.
The sediment concentration is a passive tracer with exchanges among the different layers. The flow is fully turbulent, while purely viscous effects are generally negligible. Moreover, the energy required to keep the sediments in the suspension cloud is a negligible portion of the total turbulent energy production <cit.>. Thus, the base model has to be Reynolds-averaged first <cit.>, before applying the long wave approximation. Several authors made an effort to take turbulence modeling into account in shallow water type models <cit.>. Our approach to this issue will be detailed below. Nowadays, multi-layer approaches to density stratified flows become more and more popular <cit.>. Finally, some researchers chose a more CFD[Computational Fluid Dynamics (CFD).]-like approach to the simulation of density flows, eventually incorporating advanced turbulence modeling <cit.>. Perhaps the first Direct Numerical Simulation (DNS) of a gravity current dates back to around the year 2000 <cit.>. These simulations have the advantage of being depth-resolving and, thus, of providing very complete information about the flow structure in two or even three dimensions. However, due to the high computational complexity, only idealized academic configurations can be considered within reasonable CPU time at the current state of technology. Recently proposed three-dimensional (3D) turbidity-current models can be found in <cit.>. Moreover, the 3D DNS computations are often limited in the bulk Reynolds number.

In the present study we adopt a simplified (1.5D) approach along the lines of <cit.>, based on the Eulerian description and depth-averaged formulations. A Lagrangian simplified BANG1D model was proposed in <cit.>. A simple 1.5D model was proposed in <cit.>. The authors parametrized their model by making the entrainment velocity depend on the dimensionless Richardson number. In the present study we close the model in an alternative way.

Very similar physical processes take place in powder-snow avalanches, where the snow particle suspension flows down the mountain and the snow plays the rôle of the sediments in underflows <cit.>. Consequently, very similar mathematical models appear in these two fields and, to make the bibliography review more complete, we mention some recent results in powder-snow avalanche modeling <cit.>.

The goal here is to propose a simple multi-layer (just two or three layers) shallow water-type model which takes into account the mixing processes between the layers. We assume that the sediment particles are well mixed across the height of each layer, so that their volume fraction can be effectively approximated by depth-averaged quantities. This model preferably has to be simple enough to be studied even by analytical methods. The main scientific question, which is currently poorly understood, is the influence of the flow stratification on the global flow patterns. For instance, the formation of the current's head (or the front region) and its steady-state velocity have to be carefully explained <cit.>.

The present study is organized as follows. In Section <ref> we state the physical and mathematical problem under consideration. In Section <ref> we study the proposed model analytically and in Section <ref> we validate its predictions by comparing them against experimental data. Some unsteady simulations and their relation to analytical (self-similar) solutions are presented in Section <ref> as well.
Finally, the main conclusions and perspectives of this study are outlined in Section <ref>.

§ MATHEMATICAL MODEL

Consider an incompressible liquid which fills a two-dimensional fluid domain Ω. A Cartesian coordinate system O x y is chosen in the classical way, such that the axis O y points vertically upward and the abscissa O x is positive along the rightward horizontal direction. The fluid is bounded below by a solid non-erodible bottom y = d(x). Above, the fluid can be assumed unbounded for the sake of simplicity, since our attention will be focused on the processes taking place in the region close to the bottom.

The fluid is inhomogeneous and the flow can be conventionally divided into three parts. On the solid bottom there is a heavy fluid layer with density ρ_0, composed mainly of sedimentary deposits. Its thickness is ζ(x, t). Above it lies a middle layer, whose density will be denoted by ρ(x, t), composed of a mixture of sediments and still water. Its thickness is h(x, t). Finally, the whole domain above these two layers is filled with the still ambient water of density ρ_a > 0 at rest (u_a ≡ 0). This situation is schematically depicted in Figure <ref>. We also include into consideration the situation where a thin motionless layer of sediments with constant thickness ζ_s and density ρ_s > 0 covers the slope. This is the case in many practical situations, and these sediment deposits may contribute to the flow head dynamics during the propagation along the incline. One may imagine also that a certain mass of a sediment suspension or dense fluid is released into the flow domain at x = 0 with the mass flux ρ_0·ζ(0, t)·w(0, t).

After applying the standard long wave scaling (or, equivalently, the depth-averaging), one can derive the following system of equations (<cit.>):
ζ_t + [ζw]_x = χ⁻,
h_t + [hu]_x = χ⁺,
w_t + ww_x + [b_0ζ + bh]_x = −b_0d_x − u_∗²/ζ,
b_t + ub_x = −(b_0χ⁻ + bχ⁺)/h,
u_t + uu_x + b[ζ + h]_x + (h/2)b_x = −(wχ⁻ + uχ⁺)/h − bd_x,
q_t + uq_x = (2hq)^(−1)·[(2wu − w² + b_0h − 2bh)χ⁻ + (u² − q² − bh)χ⁺ − 𝒟].
The sense of the variables is explained in Table <ref> and also in Figure <ref>. For the dissipative term 𝒟, we assume the following closure relation:
𝒟 = κq³, κ > 0.
In the sequel we are going to make the following choice of the entrainment rates χ± (unless the contrary is stated explicitly):
χ⁻ ≡ 0, χ⁺ ≡ σq.
Moreover, we assume that the gravity force balances exactly the turbulent friction at the solid bottom,
u_∗²/ζ ≡ −b_0d_x,
which implies that the kinematic Equation (<ref>) is completely decoupled from the rest of the system (<ref>) – (<ref>). The information about ζ is transported along the characteristics of this equation. So, if ζ is constant initially and this constant value is maintained at the channel inflow, ζ will remain so under the system dynamics. We adopt this assumption as well, in order to focus our attention on the mixing layer dynamics. Below we will consider only the middle mixing layer.
Under these conditions, the equilibrium model becomes:
h_t + [hu]_x = σq,
b_t + ub_x = −σqb/h,
u_t + uu_x + bh_x + (h/2)b_x = −σqu/h − bd_x,
q_t + uq_x = σ·(u² − q² − hb − δq²)/(2h),
where the constant δ is defined as δ ≡ κ/σ > 0.

The system (<ref>) – (<ref>) can be equivalently recast in the conservative form, which has the advantage of being valid for discontinuous solutions as well <cit.>:
h_t + [hu]_x = σq,
(hb)_t + [hbu]_x = 0,
(hu)_t + [hu² + ½bh²]_x = −hbd_x,
(h(u² + q² + hb))_t + [(u² + q² + 2hb)hu]_x = −2hbud_x − κq³.
Here, κ ∈ ℝ⁺ is a positive constant measuring the rate of turbulent dissipation <cit.>.

An important property of this model is the absence of the bottom friction term. This physical effect was taken into account while excluding the bottom layer of thickness ζ(x, t), which gives us the mathematical reason for the absence of a friction term in the model (<ref>) – (<ref>). Similarly, we can bring a physical argument to support this fact: the horizontal velocity takes its maximum value on the boundary between the bottom sediment and mixing layers; consequently, the Reynolds stress τ_∗ vanishes there. In fact, we can derive an additional balance law by combining together Equations (<ref>) and (<ref>):
(hq)_t + [hqu]_x = (σ/2)·(u² + (1 − δ)q² − hb).
The last equation is not independent of Equations (<ref>) – (<ref>). Consequently, it does not bring new information about the equilibrium sedimentation-free model (<ref>) – (<ref>). Nevertheless, we provide it here for the sake of completeness of the exposition. The proposed model has the advantage of being simple, almost physically self-consistent[Only two constants σ and δ need to be prescribed to close the system.] and of possessing a hyperbolic structure. It was derived for the first time in <cit.>. In order to obtain a well-posed problem the system (<ref>) – (<ref>) has to be completed by corresponding boundary and initial conditions. In the following sections, the just proposed equilibrium system (<ref>) – (<ref>) will be studied in more detail by analytical and numerical means.

§ ANALYTICAL STUDY OF THE MODEL

In this Section we discuss the steady state solutions and investigate the qualitative behaviour of two important special classes of solutions: travelling waves and similarity solutions.

§.§ Mixing layer formation

Mixing layers are formed between two fluid layers having different densities. The initial stages of mixing processes play a very important rôle in natural and laboratory environments. The physical mechanism of mixing layer formation is given by various interfacial instabilities (Rayleigh–Taylor or Kelvin–Helmholtz). The most important mechanism is, of course, the Kelvin–Helmholtz instability <cit.>, because of the presence of velocity shear <cit.>. In supercritical flows the mixing is considerably intensified. In this way, mixing layers are most pronounced in transcritical flows over underwater obstacles (bathymetric features), since a jet is formed on the downstream (lee) side of the obstacle, making the flow supercritical. The process of mixing layer formation over an inclined bottom was studied in <cit.> in the framework of a three-layer model. In the present study we apply a similar approach to the two-layer[We remind that the upper layer of the still water is assumed to be motionless in the present study.] System (<ref>) – (<ref>).
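Before constructing particular solutions, the hyperbolic structure of the equilibrium model above can be checked directly. A minimal sketch of ours (the state values are arbitrary assumed numbers) assembles the quasilinear matrix of the smooth-flow equations in the variables (h, u, b, q) and computes its characteristic speeds:

# Sketch: characteristic speeds of the equilibrium model. The quasilinear matrix
# below collects the coefficients of the x-derivatives of (h, u, b, q) in the
# non-conservative form; its eigenvalues are u - sqrt(bh), u, u, u + sqrt(bh).
import numpy as np

h, u, b, q = 0.2, 0.8, 1.4, 0.1   # an arbitrary smooth state (assumed values)
A = np.array([[u,   h,   0.0,   0.0],
              [b,   u,   0.5*h, 0.0],
              [0.0, 0.0, u,     0.0],
              [0.0, 0.0, 0.0,   u  ]])
print(np.sort(np.linalg.eigvals(A).real))
print(u - np.sqrt(b*h), u, u, u + np.sqrt(b*h))

The two "sonic" speeds u ± √(bh) and the double contact speed u confirm the hyperbolicity and the gas-dynamics analogy that is exploited by the numerical scheme later on.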
Below we derive a steady solution, which provides the boundary conditions for the unsteady propagation of the flow head (see Figure <ref>).

Stationary solutions to System (<ref>) – (<ref>) satisfy the following system of differential equations:
wζ_x + ζw_x = χ⁻,
uh_x + hu_x = χ⁺,
ww_x + b_0ζ_x + bh_x + hb_x = −b_0d_x − τ_∗/ζ,
uu_x + b(ζ_x + h_x) + (h/2)b_x = −bd_x − (wχ⁻ + uχ⁺)/h,
hub_x = −b_0χ⁻ − bχ⁺,
2qhu·q_x = (2wu − w² + b_0h − 2bh)χ⁻ + (u² − q² − bh)χ⁺ − κq³.
The last system can be seen as a quasilinear system with respect to the spatial derivatives of the unknown variables (h_x, u_x, b_x, ζ_x, w_x, q_x). The determinant Δ(x) of this system can be easily computed:
Δ(x) = (u² − bh)·(w² − b_0ζ) − b²ζh.
By assuming that Δ(x) ≠ 0, we obtain:
h_x = (a_1·bh + a_2·(w² − b_0ζ))/Δ(x),
u_x = (χ⁺ − uh_x)/h,
b_x = −(b_0χ⁻ + bχ⁺)/(hu),
ζ_x = −(1/b)·((h/2)b_x + (wχ⁻ + uχ⁺)/h + uu_x) − h_x − d_x,
w_x = (χ⁻ − wζ_x)/ζ,
q_x = ((2wu − w² + b_0h − 2bh)χ⁻ + (u² − q² − bh)χ⁺ − κq³)/(2qhu),
where the coefficients a_1,2 are defined as
a_1 ≡ χ⁻w + b_0ζd_x + τ_∗, a_2 ≡ χ⁺u + bhd_x + wχ⁻ + uχ⁺.
The last system of equations can be seen as an Initial Value Problem (IVP) in the spatial variable x, if all the spatial derivatives in the right-hand sides are replaced by their expressions (we do not perform this substitution in order to keep the shorthand notation). We illustrate below the behaviour of solutions to System (<ref>) – (<ref>) on the example of the mixing layer formation problem over an inclined bottom. The real data used to build this solution were taken from <cit.>.

§.§.§ Problem statement and solution

In this Section we formulate the IVP inspired by the experimental study <cit.>. Consider a density current over a flat plane inclined at an angle ϕ with respect to the horizontal direction. Without any loss of generality, we postulate that the mixing layer starts to form at some location x = x_0, where we set the initial conditions for the steady System (<ref>) – (<ref>). Moreover, we adopt the following closure:
χ⁺ = 2σq, χ⁻ = −σq.
We assume that at this point we have a supercritical flow with velocity w_0, height ζ_0 and density ρ_0. The upper layer of still water with density ρ_a is motionless. Thus, we have
w_0² > b_0ζ_0,
and b_0 is defined as b_0 ≡ (ρ_0 − ρ_a)g/ρ_a.

The initial height of the mixing layer is h_0 = 0 by our assumption on the point x_0. The initial asymptotic behaviour of the mixing layer as x → x_0 can be determined from the condition that the right-hand sides in (<ref>) – (<ref>) remain bounded <cit.>:
b_L = b_0, u_L = w_0, q_L = w_0/√(4 + 2δ).
Under the condition ζ > 0, we shall have b(x) ≡ b_L = b_0. This can be seen after substituting the closure relations for χ± from Equation (<ref>) into Equation (<ref>). One readily obtains that
b_x = σq·(b_0 − 2b)/(hu).
From the condition b_L = b_0, it follows that b(x) ≡ b_0.

Taking into account the above asymptotics, we can construct the solution using standard numerical means until the point x_1 where the mixing layer touches the bottom, the point where ζ(x_1) ≡ 0. The solution to system (<ref>) – (<ref>) taking into account the aforementioned asymptotics is depicted in Figure <ref>.
We used the following set of physical parameters in our computations:
σ = 0.15, δ = 6, ϕ = 10.8°, x_0 = 8 cm, ζ_0 = 7 cm, w_0 = 5 cm/s, b_0 = 1.4 cm/s².
The flow geometry along with the initial conditions were taken from <cit.>. On the upper panel of Figure <ref> we show the comparison with the experimental data provided by Armi & Pawlak (2000) <cit.>, who employed Laser Induced Fluorescence (LIF) and Digital Particle Image Velocimetry (DPIV), together with a Rhodamine dye concentration, for flow visualization. The color code corresponds to the density gradient (blue is lower, red is higher). Their results show that the instabilities evolve in an asymmetrical manner. In Figure <ref> we show with dashed lines the numerical prediction given by the functions y = ζ(x) (lower line) and y = ζ(x) + h(x) (upper line). The reasonable agreement between the experimental data and our numerical solution in the initial part of the mixing layer validates the approximate model (at least for steady solutions). We would like to mention that in the experiment (as well as in nature) there is a slight backward flow above the mixing layer, with typical velocities of the order of 0.5 ∼ 1 cm/s. In our model this effect is not taken into account. It could be done by including the third (upper) layer into consideration <cit.>. On the left and right sub-plots in the upper part of Figure <ref>, we show the experimental distribution of the velocity (solid line) and of the density (dashed line) at the beginning and at the end of the considered fluid domain.

The theoretical lower bound of the mixing layer coincides fairly well with the experimental estimation of the high density gradient area, as can be seen in Figure <ref> (upper panel). On the other hand, the theoretical upper bound goes significantly higher than the coloured region measured experimentally. This little discrepancy comes from the fact that the experimental data report the high density gradient, while our model predicts the upper bound of the large vortices appearing during the nonlinear stages of the Kelvin–Helmholtz (KH) instability development <cit.>. This situation is described in more detail in <cit.>. The experimental evidence for the KH mechanism is shown in <cit.>. In particular, it is shown there that the growth of the mixing layer into the still water is
h_x ≡ dh/dx = 2σ = 0.3.
At the same time, the effective thickness of the mixing layer grows, according to the computed velocity profile, as
dh_eff/dx ≈ 0.18.
This value corresponds much better to numerous experimental findings <cit.>. Consequently, we may conclude that in the considered case of the stratified fluid flow down the incline, the effective height[We remind here the definition of the effective height of the mixing layer: it is the fluid body bounded from above and below by the virtual surfaces where w = 0.95 × w_0 and w = 0.1 × w_0, correspondingly. The asymmetry in this definition comes from the nonlinear form of the vertical velocity profile.] of the mixing layer will be smaller than the height of the computed turbulent layer.

In Figure <ref> (lower panel) we show with dotted lines the (theoretically predicted) distribution of the velocities u(x) and w(x) in the mixing layer. A slope rupture in these curves is visible. It happens because at x_c ≈ 26 cm the determinant (<ref>) vanishes,
Δ(x_c) ≡ 0.
Physically speaking, it means that the flow remains everywhere supercritical. However, for small changes of the flow parameters, it is possible that hydraulic jumps will appear, along with subsequent local subcritical zones.
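For reproducibility, the construction just described can be sketched with a standard ODE integrator. The code below (ours) integrates the steady mixing-layer system as an IVP in x under the closures χ⁺ = 2σq, χ⁻ = −σq, b ≡ b_0, with the additional simplifying assumption τ_∗ = 0; it mirrors the right-hand sides written above and may require care near points where Δ(x) vanishes:

# Sketch: IVP integration of the steady mixing-layer equations (units: cm, s).
# Assumptions of this sketch: chi+ = 2*sigma*q, chi- = -sigma*q, b = b0, tau_* = 0.
import numpy as np
from scipy.integrate import solve_ivp

sigma, delta = 0.15, 6.0
kappa = sigma*delta                              # since delta = kappa/sigma
d_x = -np.tan(np.deg2rad(10.8))                  # bottom slope, d_x < 0 downhill
b0 = 1.4

def rhs(x, y):
    zeta, w, h, u, q = y
    chi_p, chi_m = 2.0*sigma*q, -sigma*q
    Delta = (u*u - b0*h)*(w*w - b0*zeta) - b0*b0*zeta*h
    a1 = chi_m*w + b0*zeta*d_x                   # tau_* = 0 assumed
    a2 = chi_p*u + b0*h*d_x + w*chi_m + u*chi_p
    h_x = (a1*b0*h + a2*(w*w - b0*zeta))/Delta
    u_x = (chi_p - u*h_x)/h
    zeta_x = -((w*chi_m + u*chi_p)/h + u*u_x)/b0 - h_x - d_x   # b_x = 0 here
    w_x = (chi_m - w*zeta_x)/zeta
    q_x = ((2*w*u - w*w + b0*h - 2*b0*h)*chi_m
           + (u*u - q*q - b0*h)*chi_p - kappa*q**3)/(2*q*h*u)
    return [zeta_x, w_x, h_x, u_x, q_x]

w0, zeta0 = 5.0, 7.0
y0 = [zeta0, w0, 1e-3, w0, w0/np.sqrt(4.0 + 2.0*delta)]       # asymptotics at x -> x0
touch_down = lambda x, y: y[0]                   # stop when zeta -> 0
touch_down.terminal = True
sol = solve_ivp(rhs, (8.0, 60.0), y0, events=touch_down, max_step=0.05)
print("mixing layer reaches the bottom near x =", round(sol.t[-1], 1), "cm")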
We note also that internal hydraulic jumps on the downstream side of an underwater obstacle are an important feature of stratified flows (both in the ocean and in the atmosphere). The inclusion of the mixing layer formation before the hydraulic jump is of capital importance in such situations.

§.§.§ Sediment layer

When the mixing layer approaches the lower boundary with the sediment deposits, the mass entrainment from the lowest layer decelerates and stops completely. In the framework of our model, this effect can be realized through the transition from χ⁻ = −σq to χ⁻ = 0 (and, correspondingly, from χ⁺ = 2σq to χ⁺ = σq). Physically it means that the maximum of the flow velocity is achieved somewhere near the boundary between the mixing layer and the bottom layer of constant density ρ_0. The transition to χ⁻ = 0 takes place in the neighbourhood of the right boundary of the flow represented in Figure <ref>.

§.§.§ Intermediate conclusions

The mixing layer structure over a downhill determines the flow structure further down. In particular, the total mass flux (relative to the fluid density ρ_a) B_0 = b_0·ζ_0·w_0 is divided into two parts. Namely, this splitting takes place at the point of transition of the mixing layer into the turbulent jet and the undiluted bottom layer, at x = x_1.

The turbulent jet receives the buoyancy flux B_j = b_1·h_1·u_1, while the bottom layer takes B_b = b_0·ζ_1·w_1. We notice also that for the slope angle ϕ = 10.8° and the experimental facility considered in Figure <ref>, the dominant part of the mass flow goes into the turbulent jet, B_j ≫ B_b.

If we assume that the flow velocity at x = x_1 in the bottom layer stabilizes[Physically this happens when the gravity force projection along the slope is balanced by the friction force with the rigid bottom.] due to the small thickness of the layer, then in the bottom layer we have w ≡ w_b, ζ ≡ ζ_b, and the density in the layer does not change anymore. Hence, the bottom layer becomes completely passive in the flow dynamics. At the same time, however, it influences the flow head propagation velocity for moderate bottom slopes[For large inclines the physical mechanisms are slightly different.]. This point will be demonstrated below.

§.§ Steady flows

From the numerical and experimental points of view, a significant number of studies has been devoted to the transient gravity current problem, with special focus on the density front formation and evolution <cit.>. The steady flow configuration has received less attention <cit.>. However, when one has a dynamical system at hand, it is natural to begin its study by looking for equilibrium points <cit.>. Thus, the study of a mathematical model is not complete if we do not discuss this class of solutions. Moreover, stationary solutions may be realized and observed on certain time scales in laboratory experiments, if certain conditions are met and maintained for a sufficiently long time. For instance, the buoyant flow entering the inclined channel has to be maintained at a constant rate during the whole experiment. If permanent boundary conditions are maintained not only in the mixing layer, but in the bottom layer as well, then the lower sediment layer will play a passive rôle in the flow:
ζ_b = const, w_b = const, ρ_0 = const.
In other words, the friction and mixing between these two layers do not take place.
It is not difficult to see that fluid flows of the form
u ≡ u_j, m ≡ bh ≡ m_j, q ≡ q_j
satisfy Equations (<ref>) – (<ref>) provided that the mixing layer thickness depends linearly on the coordinate x along the channel,
h(x) = h_0 + ς·(x − x_0).
Here, quantities with the subscript j denote values in the steady jet. The variable m will sometimes be referred to as the mass; however, physically it represents the excess of the fluid column weight with respect to the still water level due to the presence of heavy sediment suspensions. To satisfy the system (<ref>) – (<ref>), the following identities have to hold:
ς·u_j = σ·q_j,
ς·(u_j² + ½·m_j) = α·m_j,
ς·(u_j² + q_j² + 2m_j)·u_j = 2α·m_j·u_j − κ·q_j³,
where α ≡ −d_x ≥ 0. An algebraic consequence of the relations above can be easily derived:
u_j² − m_j − (1 + δ)·q_j² = 0.
The boundary conditions specify also the buoyancy influx B_j into the channel:
B_j ≡ m_j·u_j = U_j³ ⟹ U_j = B_j^(1/3).
All the relations above can be combined into a single equation for the Froude number F_j ≡ u_j/U_j using simple algebraic transformations (see also <ref>):
√((1 − F_j^(−3))/(1 + δ))·(1 + 2F_j³) = 2α/σ, σ ≠ 0.
There exists a unique solution to this equation for any positive right-hand side. Moreover, one can show that F_j > 1, which corresponds to a supercritical flow with u_j² > m_j. Then, once we have determined F_j, the remaining quantities u_j, m_j, q_j and ς follow from Equations (<ref>) – (<ref>).

To make a conclusion: in turbidity gravity flows in an inclined channel, where the (depth-integrated) horizontal velocity maximum is achieved on the boundary between the sediment and turbulent mixing layers, the model (<ref>) – (<ref>) predicts a stationary supercritical flow with pure fluid entrainment from the upper layer.

§.§ Travelling waves

One of the main questions in the modelling of turbidity flows is to determine the density front velocity. The proposed base model (<ref>) – (<ref>) is sufficiently simple to address this question analytically.

§.§.§ General considerations

In this Section we describe the class of travelling wave solutions to System (<ref>) – (<ref>) in its generality. Since we are mainly interested in smooth solutions, we will use the non-conservative form (<ref>) – (<ref>) for the sake of convenience. Also, we assume that the bottom slope α is constant, which is necessary for the existence of solutions of permanent shape.

The travelling wave ansatz takes the following form:
h(x, t) = h(ξ), m(x, t) = m(ξ), u(x, t) = u(ξ), q(x, t) = q(ξ),
where ξ ≡ x − c·t and c is a positive[If the wave travels in the rightward direction, as we assume without any loss of generality.] constant, which has the physical sense of the travelling wave speed (to be determined later). After substituting this ansatz into Equations (<ref>) – (<ref>), we obtain a system of Ordinary Differential Equations (ODEs):
(u − c)·h′ + h·u′ = σq,
(u − c)·m′ + m·u′ = 0,
(u − c)·u′ + ½·m′ + (m/(2h))·h′ = (αm − σqu)/h,
(u − c)·q′ = σ·(u² − (1 + δ)·q² − m)/(2h),
where the prime ′ denotes differentiation with respect to ξ. This system of ODEs describes a transcritical gravity flow in a coordinate system which moves with velocity c. More precisely, the flow is of supercritical type in the avalanche core and it switches to the subcritical regime when we cross the wave front.
By analogy with the detonation theory <cit.>, the front velocity c (under certain conditions) is given by the Chapman–Jouguet principle, which states that a transcritical front propagates with the minimal admissible velocity <cit.>. After some computations the ODE system (<ref>) – (<ref>) can be reduced to a single differential equation with an implicit dependence on ξ:
dq/du = (σ/2)·A(q, u)·B(u)/((u − c)²·C(q, u)),
where
A(q, u) ≡ u² − (1 + δ)·q² − m, B(u) ≡ (u − c)² − m, C(q, u) ≡ αm − σq·(u + m/(2(u − c))),
and the expression for m(ξ) in terms of u(ξ) is obtained by integrating Equation (<ref>):
m(ξ) = −M(c)/(u(ξ) − c).
The integration "constant" M(c) can be determined from the Rankine–Hugoniot conditions written at the wave front <cit.>.

§.§.§ Stability of equilibria

In this Section we study the existence and stability of the equilibrium points of Equation (<ref>). One of the difficulties is that the right-hand side depends on the free parameter δ, whose variation has to be taken into account. The wave celerity c > u has to be specified as well.

As the first step, we rewrite Equation (<ref>) as a dynamical system in the plane (q, u) with ξ being the evolution variable:
q′ = σ·A(q, u)·B(u),
u′ = 2(u − c)²·C(q, u).
The equilibrium points might be of two kinds:
* on the intersection of the curves Γ_A ≡ {A(q, u) = 0} and Γ_C ≡ {C(q, u) = 0},
* on the intersection of {B(u) = 0} and {C(q, u) = 0}.
The linear stability of the equilibria is determined by the eigenvalues of the following Jacobian matrix:
J(q, u; δ, c) ≡ [J_11, J_12; J_21, J_22].
The elements of the Jacobian matrix can be computed by direct differentiation:
J_11 = −2σ(1 + δ)·q·B(u),
J_12 = σ·[(2u − M/(u − c)²)·B(u) + (2(u − c) − M/(u − c)²)·A(q, u)],
J_21 = −σ(u − c)²·(2u − M/(u − c)²),
J_22 = 4(u − c)·C(q, u) + 2(αM − σq·B(u)).
The last expressions can be simplified at the equilibria by taking into account the fact that A = C ≡ 0 at the equilibria of the first kind and B = C ≡ 0 at the equilibria of the second kind.

The eigenvalues λ_1,2(δ) of the Jacobian matrix J as functions of the parameter δ for both types of equilibria are shown in Figure <ref>. The physical parameters used in this numerical computation are σ = 0.15, α = tan ϕ = 1.0, c = 2.787 and M = 0.1·c. From this illustration it is now clear that, at least for these values of the parameters, the equilibria of the first kind are unstable spiral points, while the equilibria of the second kind are (linearly) stable spirals <cit.>.

§.§.§ A particular class of travelling waves

Similarly to the construction of the steady solutions presented in the previous Section <ref>, we are looking for travelling wave solutions to system (<ref>) – (<ref>) of the form:
h(t, x) = h(ξ), u(t, x) = u(ξ), m(t, x) = m(ξ),
where ξ ≡ x − c·t is the combined independent variable introduced earlier. In other words, we consider the frame of reference where the travelling wave is steady. Notice that the model (<ref>) – (<ref>) does not possess the Galilean invariance property, because of the velocity variables present in the right-hand sides. This effect comes from the assumption that the ambient fluid remains at rest, which privileges this particular frame of reference. The constant c > 0 is the unknown wave celerity to be determined during the solution procedure.

Consider travelling waves of the following particular form:
u(ξ) = u_f, m(ξ) = m_f, q(ξ) = q_f, h(ξ) = h_f − ς·ξ,
with ς > 0 and ξ < 0.
The travelling wave ansatz presented above satisfies the following relations:
ς·(c − u_f) = σ·q_f,
ς·((c − u_f)·u_f − ½·m_f) = α·m_f,
ς·((c − u_f)·(u_f² + q_f² + m_f) − m_f·u_f) = 2α·m_f·u_f − κ·q_f³.
We use also an additional hypothesis that the flow in the coordinate frame moving with the travelling wave is critical,
(u_f − c)² = m_f.
The last condition can be equivalently recast as
F_f ≡ (c − u_f)/√(m_f) ≡ 1,
where F_f is the Froude number with respect to the wave front <cit.>. This condition ensures the existence of a self-sustained regime of wave propagation, independent of the small perturbations which might occur behind the wave front. It is completely analogous to the so-called Chapman–Jouguet condition for the propagation of a self-sustained detonation wave in gas dynamics <cit.>.

The wave celerity c can be eliminated from Equations (<ref>) – (<ref>) by introducing the new variables
u* ≡ u_f/c, q* ≡ q_f/c, m* ≡ m_f/c².
As a result, we come to the following closed system of equations:
ς·(1 − u*) = σ·q*,
ς·(1 − u*)·u* − ½·ς·m* = α·m*,
ς·((1 − u*)·((u*)² + (q*)² + m*) − m*·u*) = 2α·m*·u* − κ·(q*)³,
(1 − u*)² = m*.
By assuming that σ·q* ≠ 0, from the last equations we can derive a simple relation:
(u*)² − m* − (1 + δ)·(q*)² = 0,
and by using the scaled version (<ref>) of relation (<ref>), we obtain that
2u* − 1 = (1 + δ)·(q*)²,
or, formally,
q* = √((2u* − 1)/(1 + δ)),
provided that u* > ½, so that the value q* ∈ ℝ⁺. After dividing (<ref>) by (<ref>) we have:
((1 − u*)·u* − ½·m*)/(1 − u*) ≡ ½·(3u* − 1) = α·√(1 + δ)·(1 − u*)²/(σ·√(2u* − 1)).
In other words, we have the following equation for u*:
3u* − 1 = β·(1 − u*)²/√(2u* − 1), with β ≡ 2α·√(1 + δ)/σ.
Under the same condition on u*, we can transform Equation (<ref>) into the following algebraic equation:
(3u* − 1)²·(2u* − 1) = β²·(1 − u*)⁴.
It can be shown (see <ref>) that there exists a unique positive root of this equation, which belongs to the interval u* ∈ (½, 1). All the remaining quantities such as ς, m* and q* are determined after finding u*. The velocity c can be found from the mass conservation equation written at the wave front:
m·(c − u) + b_0·ζ_b·(c − w_b) = c·m_s, w_b > c.
If m_s ≡ 0, the last equation takes a particularly simple form:
(c − u_f)³/U_b³ + F_b^(−2/3)·(c/U_b) − 1 = 0,
where we introduced the notation
U_b ≡ B_b^(1/3), F_b ≡ w_b/√(m_b) = √(α/c_w).
We remind that the positive constant c_w controls the magnitude of the bottom friction effects. We thus arrive at the following cubic polynomial equation for the quantity C ≡ c/U_b:
P(C) ≡ (1 − u*)³·C³ + F_b^(−2/3)·C − 1 = 0.
It is not difficult to see that the polynomial P(C) is a monotonically increasing function of C provided that u* ≤ 1 and, thus, there exists a unique positive root (since P(0) = −1 < 0). The dependence of the coefficient C = C_e(F_b) = C_e(ϕ) computed in this way on the channel slope angle ϕ is shown in Figure <ref> with the dashed line (for a fixed value of c_w = 0.004 and with σ = 0.15, δ = 4). Note that in the experimental study <cit.> the notation C ≡ c/U_0 was used, while we consider another definition, C ≡ c/U_b, where U_0 ≡ B_0^(1/3). We can see that this prediction does not compare very well with the experimental data <cit.> for large slope angles ϕ. This drawback will be corrected below. The agreement for small angles of inclination is achieved since B_j ≪ B_b and, consequently, U_0 − U_b ≪ U_b.
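Our illustrative implementation of this procedure (a sketch, with σ, δ and c_w as quoted above) first brackets the root u* ∈ (½, 1) and then the unique positive root of the cubic P(C):

# Sketch: head-velocity coefficient C_e(phi) of the Chapman-Jouguet travelling wave.
import numpy as np
from scipy.optimize import brentq

sigma, delta, c_w = 0.15, 4.0, 0.004

def C_e(phi_deg):
    alpha = np.tan(np.deg2rad(phi_deg))
    beta = 2.0*alpha*np.sqrt(1.0 + delta)/sigma
    f = lambda u: (3*u - 1)**2*(2*u - 1) - beta**2*(1 - u)**4
    u_star = brentq(f, 0.5 + 1e-12, 1.0 - 1e-12)          # unique root in (1/2, 1)
    Fr_b = np.sqrt(alpha/c_w)
    P = lambda C: (1 - u_star)**3*C**3 + Fr_b**(-2.0/3.0)*C - 1.0
    return brentq(P, 0.0, 10.0)                           # P(0) = -1 < 0, P monotone

for phi in (5.0, 15.0, 30.0, 45.0):
    print(f"phi = {phi:4.1f} deg   C_e = {C_e(phi):5.3f}")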
Therefore, the total mass flux B_0 can be replaced by the bottom flux B_b to find the density current head velocity. It is also worth mentioning that the big scatter in the experimental data depicted in Figure <ref> can also be partly explained by the differences between the total B_0 and bottom B_b mass fluxes, which depend on the precise inflow conditions.

§.§.§ Further considerations

The stability of the constructed travelling wave solution with the linear growth of the profile (due to the perpetual entrainment of sediments and still water into the mixing layer) has to be studied separately. The constructed solution belongs to the important class of stratified flows with fluid particles (in some parts of the flow) moving faster than the wave front. It is known that for surface waves this leads to inevitable wave breaking <cit.>. The authors are not aware of any mathematical stability studies of such stratified flows.

In the modelling of density currents, where heavy particles (of density ρ_0) are entrained into the flow, the "boundary" conditions imposed on the wave front are of capital importance. In the previous Section these conditions were determined by the flow structure, and we saw that this leads to a poor prediction of the wave celerity compared with the experimental data <cit.>. Here we propose another set of "boundary" conditions:
m·(c − u) + b_0·ζ_b·(c − w_b) = 0,
h·((u − c)·u + ½·m) = 0,
(c − u)² = m,
which are composed of the mass conservation equation, the momentum conservation equation and the flow criticality condition relative to the wave front. It is not difficult to see that for u < c it follows that u = c/3, and for the dimensionless combination C = c/U_b we obtain the following cubic equation:
(8/27)·C³ + F_b^(−2/3)·C − 1 = 0.
All the intermediate computations are explained in <ref>. For large values of the Froude number, the solution is C ≈ 1.5. This estimation is in good agreement with the empirical conjecture on the density current flow head velocity based on the experiments reported in <cit.>:
c ≈ 1.5 × B_0^(1/3).
In Figure <ref> the solid line shows the dependence of the flow head celerity C = C_b(ϕ) based on Equation (<ref>). The overall good agreement of the lower solid curve with the experimental data <cit.> certifies the model quality. It is worth noticing that the proportionality coefficient in the experimental studies was determined based on the velocity U_0 = B_0^(1/3) and not on the velocity U_b. As a result, for more accurate comparisons with the considered models, one has to determine accurately which part of the mass flow B_0 entered into the boundary layer, since U_b = B_b^(1/3) = (μ·B_0)^(1/3), with μ ≤ 1.

§.§ Similarity solutions

In addition to the special solutions considered in the two previous Sections, System (<ref>) – (<ref>) admits also the following class of self-similar solutions:
h(x, t) = t^(γ+1)·ĥ(ξ), u(x, t) = t^γ·û(ξ), m(x, t) = t^(2γ)·m̂(ξ), q(x, t) = t^γ·q̂(ξ),
where ξ ≡ x/t^(γ+1) and γ ∈ ℝ. For flows with a sustained mass flux B_b ≡ const, or with a sustained sediment mass m_s ≡ const, it follows from the conditions on the front that γ ≡ 0. For a sustained mass flux in the turbulent mixing layer, B_j ≡ const, the same value γ ≡ 0 follows from the boundary conditions. Henceforth, for all the just mentioned cases, we look for solutions of the form:
h(x, t) = t·ĥ(ξ), u(x, t) = û(ξ), m(x, t) = m̂(ξ), q(x, t) = q̂(ξ),
with ξ ≡ x/t.
Such solutions play a very important rôle in understanding System (<ref>) – (<ref>), since dynamic solutions tend to self-similar ones at large times, provided that constant mass fluxes are maintained. Another important class of self-similar solutions (<ref>) is realized when ℱ_0 ≡ 0 and m_s ≡ 0. This corresponds to the evolution of a finite mass of a heavier liquid, which propagates under the layer of a lighter fluid. In this case, the mass conservation law ∫_0^{+∞} m(t, x) dx = const yields self-similar solutions of the form (<ref>): h(x, t) = t^{2/3} ĥ(ξ) , u(x, t) = t^{-1/3} û(ξ) , m(x, t) = t^{-2/3} m̂(ξ) , q(x, t) = t^{-1/3} q̂(ξ) , where ξ ≐ x/t^{2/3}. This self-similar solution indicates <cit.> that the wave front position x_f behaves asymptotically in time as: x_f ∝ t^{2/3} . From the last estimate, differentiating with respect to time, the following asymptotic behaviour of the front velocity U_f can readily be deduced: U_f ∝ x_f^{-1/2} . The last decay law was validated experimentally in <cit.>. In the present work this asymptotic behaviour will be used to validate the numerical simulations. This approach to the description of turbidity currents is referred to as the “thermal theory” <cit.>. In turbidity flows, in order to construct self-similar solutions for density currents along a slope (as well as for travelling waves), it is of capital importance to prescribe adequate boundary conditions. During the construction of travelling waves above, we saw that the wave celerity depends on the relations imposed at the wave front. When the solution is given by ansatz (<ref>), the flow head is uniquely determined. However, the applicability of such similarity solutions to flows in the absence of sediments (m_s ≡ 0, ℱ_b ≡ 0) has to be studied separately. The situation changes when we consider the problem of sediment entrainment into the flow (m_s > 0, ℱ_b ≡ 0). In the framework of the conservative system (<ref>) – (<ref>) this problem can be interpreted as a mixed Initial–Boundary Value Problem (IBVP). Namely, at the initial moment of time we know the distribution of heavy sediments in the sediment layer, h_s(x) and m_s(x). The sediments might be at rest (u_s(x) ≡ 0) or moving with a prescribed velocity u_s(x) and turbulent kinetic energy q_s(x). The perturbations entraining the sediments into the flow enter the fluid domain through the left boundary (without any loss of generality). In Figure <ref> we depict a sketch of the fluid domain for illustration. In this situation there is no need to isolate the flow head and to write additional relations at the wave front, since the main flow characteristics are obtained in the process of solving the IBVP. However, the main question remains unanswered: which self-similar regime (<ref>) will emerge as the long-time limit of the unsteady solution? System (<ref>) – (<ref>) is of hyperbolic (and hydrodynamic) type, and it corresponds to the flow of a barotropic gas with chemical reactions in compressible fluid dynamics. We have already mentioned above the analogy between density currents and detonation theory. During normal detonation, the fluid flow satisfying the Chapman–Jouguet conditions downstream of the flow head is not always realized (especially in the presence of accompanying chemical reactions). Under certain conditions the detonation wave may propagate with a velocity exceeding that of the perturbations behind the wave front <cit.>. The main conclusion that we can draw from this analogy is that the realizability of self-similar solutions has to be studied separately.
Such an analytical study might turn out to be very complex. Below we show by numerical means that propagation of the flow head with a speed higher than expected is possible. We also mention that similarity solutions for gravity currents were constructed in <cit.> and <cit.>. § MODEL VALIDATION AND UNSTEADY SIMULATIONS Strictly speaking, we have already validated steady solutions of System (<ref>) – (<ref>) by making comparisons with the experiments from <cit.>. We showed that this system is able to predict, qualitatively and quantitatively, the development of the mixing layer over a slope. §.§ Problem formulation In this Section we continue the validation by considering unsteady solutions. Moreover, we shall consider the applicability of an even simpler one-layer model: h_t + [h u]_x = χ^+ , u_t + u u_x + b h_x + h b_x = -b d_x - b_0 ζ_x - (ψ χ^- + u χ^+)/h , b_t + u b_x = -(b_0 χ^- + b χ^+)/h , q_t + u q_x = (2 q h)^{-1} [ (2 ψ u - ψ^2 + b_0 h - 2 b h) χ^- + (u^2 - q^2 - b h) χ^+ - κ q^3 ] . We apply it to simulate the sediment entrainment process by density currents over moderate (finite) slopes. Numerical solutions will be compared with the experimental data reported in <cit.> as well as with the exact special solutions of travelling wave type. In these experiments the dense fluid consisted either of saltwater or of a suspension of sawdust particles, whose irregular shape resembles that of suspended snowflakes. The spatial growth of the cloud was determined from side-view images recorded with a video camera. A 5 square grid was drawn on the side glass to facilitate determination of the front position. Moreover, we shall show below that under certain initial conditions the numerical solution tends asymptotically to the self-similar solutions described earlier. We chose the experimental data of Rastello & Hopfinger (2004) <cit.> for the following reasons: * This experiment corresponds well to the scope and purpose of our numerical model. * The model (<ref>) – (<ref>) is suitable for the simulation of gravity currents even over the moderate and large slopes used in the experimental study <cit.> (see Table <ref> for the values of the slope parameter). When using other models, one has to check that the influence of the sediment bottom layer is taken into account in order to represent the front dynamics correctly. * The experiment was conducted for a sufficiently long time. In this way we are able to check the validity of our model during both the acceleration and deceleration stages of the flow. Finally, we were even able to check the asymptotic behaviour of the flow, which is typical for buoyant flows over a slope. We have already mentioned that the equilibrium system (<ref>) – (<ref>) possesses a hyperbolic structure similar to the equations of gas dynamics, with two “sonic” and two contact characteristics <cit.>. There is a wide choice of numerical discretizations for such systems. However, taking into account that we deal with a system of conservation (balance) laws, it is natural to opt for finite volume schemes <cit.>. Hence, in order to solve the equilibrium system (<ref>) – (<ref>) numerically, we use the classical and robust finite volume discretization with the widely used Godunov scheme <cit.>. The explicit Euler scheme is used for the time discretization. A sketch of the numerical experiment is given in Figure <ref>. Our goal is to reproduce in silico some of the laboratory experiments reported in <cit.>.
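To make the discretization concrete, here is a minimal finite-volume sketch in the spirit of the first-order scheme described above. It is our own illustration, not the authors' code: for brevity it advances a scalar conservation law u_t + f(u)_x = 0 with a Rusanov (local Lax–Friedrichs) flux, a common robust stand-in for the exact Godunov flux, rather than the full four-equation system.

```python
import numpy as np

def fv_step(u, dx, dt, f, df):
    """One explicit-Euler finite-volume step with a Rusanov numerical flux.

    u  : array of cell averages;  f : flux function;  df : its derivative.
    Boundaries are imitated by constant extrapolation of the edge cells.
    """
    ul = np.concatenate(([u[0]], u))     # left states at cell interfaces
    ur = np.concatenate((u, [u[-1]]))    # right states at cell interfaces
    a = np.maximum(np.abs(df(ul)), np.abs(df(ur)))   # local wave-speed bound
    flux = 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)
    return u - dt / dx * (flux[1:] - flux[:-1])

# Example: Burgers' equation, f(u) = u^2/2; a shock forms from smooth data
# and is captured automatically, as for the density-current front.
N, L = 400, 10.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = 1.0 + 0.5 * np.sin(2 * np.pi * x / L)
f, df = (lambda v: 0.5 * v**2), (lambda v: v)

t, T = 0.0, 3.0
while t < T:
    dt = min(0.4 * dx / np.max(np.abs(df(u))), T - t)   # CFL-limited step
    u = fv_step(u, dx, dt, f, df)
    t += dt
print("steepest gradient (shock) near x =", x[np.argmax(-np.diff(u))])
```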
The values of all physical parameters are given in Table <ref>. The initial and boundary conditions are rather standard as well. Initially, at t = 0, the distribution of evolutionary quantities is given onthe computational domain [ 0, ℓ ]:h (x, 0) = h_ 0 (x) ,u (x, 0) = u_ 0 (x) ,m (x, 0) = m_ 0 (x) ,q (x, 0) = q_ 0 (x) .On the right boundary we set wall boundary conditions for simplicity[In any case, the simulation stops before the mass reaches the right boundary. So, the influence of the right boundary condition on presented numerical results is completely negligible.]. On the left extremity of the computational domain the boundary conditions depend on the flow regime in the vicinity of the left boundary. If the flow there is supercritical, we impose Cauchy's data. Otherwise, we impose a wall boundary condition as well. The case we study below is depicted in Figure <ref> and it will be described in more details below. §.§ Experimental set-up In order to reproduce in silico the experiments of Rastello & Hopfinger (2004) <cit.> we use the following configuration of the numerical tank. The channel length ℓ is equal to 200. As we already mentioned, on boundaries we impose wall boundary conditions, in other words the channel is closed as in experiments. A heavier fluid of buoyancy b_ ℓ ∈ {1, 19} ^ 2 fills an initially closed container of the length of 20 and of variable height h_ ℓ, which changed from one experiment to another. This configuration corresponds to the classical lock-exchange experiment. The heavy fluid “mass[We employ the term mass in the sense of the relative weight of the dense fluid.]” m_ ℓ = b_ ℓ· h_ ℓ. At the distance of 50 from this recipient the slope is covered by initially motionless sediment layer of the height ζ_ s = 0.2 and 2. The “mass” of sediments is m_ s = b_ s·ζ_ s. The length of sediments layer is 100. During the propagation of the heavy fluid head all these sediments were entrained into the flow. In some experiments the sediment layer was absent, m_ s ≡ 0. In this case the flow simply propagates over the rigid inclined bottom. In order to avoid the degeneration of certain equations, in numerical experiments we cover the whole slope with a micro-layer of sediments with mass m_ s^ ∘ > 0, such thatm_ s^ ∘ ≪ m_ s m_ s^ ∘ ≪ m_ ℓ .The values of all physical parameters are given in Table <ref>. The influence of the model parameters on predicted values was found to be rather weak. Consequently, in all numerical simulations reported below we used the following values of these parameters:σ = 0.15 , δ = 0 ,c_ w = 0 .The results of the critical comparisons with laboratory data are discussed below. §.§ Numerical results In Figure <ref> we report the results of numerical simulations showing the spatial profiles of four quantities h, u, m and q. These simulations were performed without the sediments layer, m_ s ≡ 0. The panels correspond to: Left panel Slope angle ϕ = 32^∘, the snapshot is taken at T = 10Right panel Slope angle ϕ = 45^∘, the snapshot is taken at T = 20.All quantities reported in Figure <ref> are given in dimensional variables. The position of the density current front is clearly visible in both cases despite the fact that the IBVP was solved in the domain [ 0, ℓ ]×[ 0, T ] using simple shock-capturing methods (no special treatment was necessary to detect the front). 
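The front position can indeed be extracted from the computed profiles without any special machinery. The following post-processing sketch is ours, with illustrative array names and an arbitrary detection threshold; it locates the front from the "mass" profile and tests the decay law quoted earlier.

```python
import numpy as np

def front_position(x, m, threshold=1e-6):
    """Rightmost cell where the 'mass' m exceeds a small threshold."""
    idx = np.where(m > threshold)[0]
    return x[idx[-1]] if idx.size else x[0]

def head_velocity(x, t, m_of_t):
    """Front trajectory x_f(t) and velocity U_f from solver snapshots.

    m_of_t : sequence of m-profiles on the grid x at the times t.
    """
    xf = np.array([front_position(x, m) for m in m_of_t])
    return xf, np.gradient(xf, t)

# A log-log fit of U_f against x_f then tests the asymptotics
# U_f ~ x_f^(-1/2), e.g.:
#   slope, _ = np.polyfit(np.log(xf), np.log(Uf), 1)   # expect slope ~ -0.5
```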
This property of the employed numerical scheme will be used below to analyze the asymptotic behaviour of unsteady solutions. In Figure <ref> we show the dependence of the flow head velocity U_f on the distance x_f traveled by the front. The parameters of these numerical/laboratory Experiments 1 & 2 are given in Table <ref>. Note that the heavy fluid density in Experiment 1 is much higher. This observation explains why the head velocity in Experiment 1 is higher than in Experiment 2, even though the slope is steeper in Experiment 2. The symbols (◯ and ▽) correspond to laboratory measurements of the head velocity and are taken from <cit.>. Solid and dash-dotted lines (with ◯ and ▽) indicate the experiments with (m_s > 0) and without (m_s = 0) sediments, respectively. The presence of sediments ahead of the head front yields an initial acceleration of the flow. This acceleration phase is followed by deceleration, since the sediments are distributed over a limited distance in our experiments. Very good agreement with the numerical predictions can be observed in Figure <ref>. In Figure <ref> we show the long-time behaviour of the front velocity from Experiment 2 reported in Figure <ref>, with parameters from Table <ref>. In order to perform this simulation we increased the computational domain[This operation is easy in numerical experiments, contrary to laboratory experiments, where the channel size is fixed.] from 200 to 500. Sediments were absent along the slope in this computation (m_s ≡ 0). Excellent agreement with the theoretically established asymptotics U_f ∼ 1/√(x_f) can be observed in Figure <ref>. It validates both the numerics and the underlying theoretical argument. § DISCUSSION Above we have proposed and tested a new model for density currents. We outline below the main conclusions and perspectives of this study. §.§ Conclusions In the present study we considered the problem of modelling a density current propagating down a slope in the presence of sediment deposits on the rigid bottom. In order to describe this flow in mathematical terms, we divided it into three zones (from the bottom vertically upwards): the layer of sediments (i), the mixing layer (ii) and the still-water layer (iii). To simplify the mathematical description, we assumed that the upper layer (iii) of still water is motionless. A depth-averaged description was adopted in each of the two remaining layers (i) and (ii). As a result, we arrived at a two-layer shallow water system including turbulence modelling, which can be recast in the conservative form (<ref>) – (<ref>) of quasi-linear balance laws <cit.>. The initially proposed system contains only six evolution equations to describe the complex density current. However, it can be further simplified if we assume that the flow in the bottom layer, occupied by dense liquid or suspension with uplifted sediments, is in an equilibrium state: the gravity force is exactly balanced by the friction forces at the rigid bottom. In this way, we can remove the two equations corresponding to the bottom layer (i). The sediment equilibrium model is given by System (<ref>) – (<ref>). This concludes the modelling part of our study. The proposed equilibrium model was then studied using analytical means. Namely, we were interested in some special, but very important, classes of solutions: steady states, travelling waves and similarity flows. The linear stability of travelling waves was studied.
The initial conditions which yield asymptotically (in time) certain special solutions (such as self-similar and travelling waves) were discussed. The velocity of travelling waves is of capital importance for understanding the propagation of turbidity flows. Namely, the travelling wave speed gives an estimate of the flow head propagation along the slope. This prediction of our travelling wave analysis was checked against the experimental data from numerous previous studies <cit.>. The mixing layer formation was discussed in the framework of steady solutions, and our theoretical prediction was compared against experimental data from <cit.>. Moreover, unsteady solutions predicted by our model were compared with the density flow experiments performed in the LEGI (UMR 5519) Laboratory by Rastello & Hopfinger (2004) <cit.>. The model predictions are in good agreement with their measurements of the front velocity. Finally, the asymptotic behaviour of the flow head velocity as a function of the front position was measured in our numerical computations, and excellent agreement was found with existing theoretical estimates. This concludes the validation of the model and also of several particular solutions derived in this study. It is important to underline that the proposed model describes three different flow regimes: * Exchange flow, * Thermals, * Underwater avalanches. In the first case, the permanent (or transient) inflow of a heavy fluid mass accelerates downslope. In natural environments this situation can arise when a heavy fluid overflows a bottom obstacle or a hill, which creates a deposit of the heavy fluid mass behind the obstacle. Our mathematical model showed that the initial accelerating phase in the development of the supercritical flow is of capital importance, since it yields the formation of the so-called mixing layer. In particular, the density current head velocity depends directly on the portion of the mass entering the bottom layer. It is this sea-bed underflow which supplies the head directly, almost without mixing. This fact may also explain the relatively large scatter of observed front velocities in laboratory experiments (see Figure <ref>). In order to describe the evolution of thermals over a downslope, one can use the simplified model (<ref>) – (<ref>), which is a direct generalization of the usual nonlinear shallow water equations, taking into account the entrainment of the ambient quiescent fluid into the flow. It is important to notice also that the entrainment speed in our model is determined automatically, together with the other flow parameters. Figure <ref> showed that this model can describe thermal flows during the acceleration and deceleration phases simultaneously. Density currents over a sloping bottom featuring sediment entrainment into the flow have numerous applications in geophysics, such as snow and underwater avalanches and seasonal mass exchanges in deep lakes. These flows are at the centre of the various mathematical modelling efforts mentioned in the Introduction. Unfortunately, contrary to the previous two situations (lock-exchange and thermal flows), laboratory experiments and real-world measurements remain scarce. It is in this field of applications that the rôle of simple mathematical models increases substantially. Such models should allow one to predict theoretically at least the front velocity and, if possible, several other parameters of the self-sustained current.
In the particular situation where we neglect the sedimentation velocity, the one-layer model (<ref>) – (<ref>) is suitable for the description of underwater avalanches. Moreover, our computations showed that the flow quickly reaches a self-similar regime, which will be studied in future publications. However, in contrast to exchange flows, the condition (<ref>) of flow criticality behind the wave front (analogous to the Chapman–Jouguet condition in detonation) is not fulfilled in the presence of sediment entrainment. Another flow regime is realized in practice: the flow turns out to be substantially supercritical in the frame of reference moving with the front. This situation is usually referred to as “overdriven detonation”, when the main flow parameters, such as the detonation pressure and propagation velocity, exceed the corresponding Chapman–Jouguet values <cit.>. A further study is necessary to understand the front velocity selection mechanism for such flows. In this way we come naturally to some other perspectives of our study. §.§ Perspectives In the present study we have employed only basic, legacy, first-order Godunov finite volume schemes <cit.>. Their principal advantages are ease of implementation and robustness of the numerical results. However, accuracy may also be important in applications where quantitative prediction is critical. Consequently, high-order, well-balanced finite volume schemes should be developed to solve numerically the density current models proposed in our study. This technology is relatively well mastered nowadays <cit.>. The system of equations presented earlier in this article is a simplified equilibrium model based on a number of physical assumptions. Of course, more complete models of higher physical fidelity should also be developed in the future, along the lines outlined in the monograph <cit.>. For instance, in this manuscript we considered density currents with uniform cross-sections, i.e., 2D flows. In future works 3D effects have to be included. Moreover, in all the examples considered earlier, uniform bed slopes were used. In part, this comes from the experimental set-ups used in previous investigations. However, in upcoming works the interaction between the fluid flow and the bed morphology has to be investigated to advance our understanding of these processes. §.§ Acknowledgments This research was supported by the project Aval (AAP Montagne 2015, University Savoie Mont Blanc) and the Russian Science Foundation (project 15-11-20013). V. Liapidevskii acknowledges the CNRS/ASR Cooperation Program under the project №23975 and the hospitality of the University Savoie Mont Blanc during his numerous visits. The authors are grateful to Professors L. Armi & G. Pawlak for their help in preparing this manuscript. § AN EXACT SOLUTION In this Appendix we present the construction of a particular steady solution. Using the information presented in Sections <ref> and <ref>, we construct the profile of a particular exact solution to Equations (<ref>) – (<ref>). We assume that in the bottom layer ζ ≡ ζ_b and w ≡ w_b. Moreover, we suppose that the total mass inflow ℱ_0 = b_0 ζ_0 w_0 is prescribed on the left boundary. This incident mass flux ℱ_0 splits into two streams: ℱ_0 = ℱ_j + ℱ_b , ℱ_j ≐ ½ b_0 h_L u_L , ℱ_b ≐ b_0 ζ_b w_b . We assume also that the ratio μ ≐ ℱ_j/ℱ_0 ∈ (0, 1] is fixed.
Thus, ℱ_b can also easily be deduced. The flow at any fixed moment of time t = t_m > 0 is composed of two parts: * Stationary flow: u(x, t) ≡ u_j, m(x, t) ≡ m_j, q(x, t) ≡ q_j and h(x, t) = h_j + ς_j x, where x ∈ [0, L_j], L_j = u_j t_m. * Travelling wave: this part depends on the variable ξ = x - c t, with u(ξ) ≡ u_f, m(ξ) ≡ m_f, q(ξ) ≡ q_f and h(ξ) = h_f - ς_f (x - L_t), where ς_f > 0, L_t - L_f ≤ x ≤ L_t, and L_f ≐ u_f t_m, L_t ≐ c t_m. The problem consists in finding all solution parameters. The solution algorithm is summarized below. §.§ Steady flow Let a ≐ u_j/U_j be the ratio of the two speeds, with U_j ≐ ℱ_j^{1/3}, and let z ≐ a^3. Then one has to find the unique root z > 1 of the following equation: (1 - 1/z)(1 + 2z)^2 = 4 α^2 (1 + δ)/σ^2 ≡ β^2 , which is to be compared with Equation (<ref>). It is equivalent to finding the root of the cubic polynomial 𝒫_1(z) = 4 z^3 - (3 + β^2) z - 1 . Let z^⋆ be the required root. Then we find a = (z^⋆)^{1/3} and u_j = a U_j. We know that ℱ_j = U_j^3 = m_j u_j. Hence, m_j = U_j^2/a. The remaining variables are: q_j = √((u_j^2 - m_j)/(1 + δ)) , ς_j = σ q_j/u_j , b_L = b_0/2 , h_L = m_j/b_L . Finally, the steady part of the turbulent jet layer depth is equal to h(x, t) = h_L + ς_j x , 0 ≤ x ≤ L_j = u_j t_m . §.§ Travelling waves For the travelling wave part we have ℱ_b = U_b^3 and Fr_b ≐ w_b/√(b_0 ζ_b). Let u^⋆ ≐ u_f/c be the dimensionless flow velocity. It can be found by solving the following algebraic equation, which possesses a real root in the interval u^⋆ ∈ (½, 1): 𝒫_2(u^⋆) = β^2 (1 - u^⋆)^4 - (2 u^⋆ - 1)(3 u^⋆ - 1)^2 = 0 , where the constant β was defined earlier in (<ref>). Up to the overall factor β^2, the polynomial 𝒫_2(u^⋆) can be expanded as 𝒫_2(u^⋆)/β^2 = (u^⋆)^4 - (4 + 18 β^{-2})(u^⋆)^3 + (6 + 21 β^{-2})(u^⋆)^2 - (4 + 8 β^{-2}) u^⋆ + 1 + β^{-2} . Let us assume that we have found the required root u^⋆ ∈ (½, 1). Then we solve another polynomial equation to determine C = c/U_b: 𝒫_3(C) = (1 - u^⋆)^3 C^3 + Fr_b^{-2/3} C - 1 = 0 . The last equation admits a unique positive root C^⋆ = C^⋆(Fr_b) = C_e(ϕ) as well. This dependence is shown in Figure <ref> with the dashed line. Then we consider the flow in the bottom layer. We have the following relations: ℱ_b = b_0 ζ_b w_b ≡ U_b^3 , w_b = Fr_b √(b_0 ζ_b) ≡ Fr_b √(m_b) , m_b = Fr_b^{-2/3} ℱ_b^{2/3} , ζ_b = m_b/b_0 . We note that w_b can also be written as w_b = Fr_b^{2/3} U_b. Now we come back to the travelling wave: c = C^⋆ U_b , u_f = u^⋆ c , and from the relation (c - u_f) b_0 h_f = (w_b - c) m_b we can find h_f = (w_b - c) m_b / ((c - u_f) b_0) , provided that (c - u_f) b_0 ≠ 0. Finally, we find the two last elements of the solution: q_f = c √((2 u^⋆ - 1)/(1 + δ)) , ς_f = σ q_f/(c - u_f) . The travelling wave profile is then given by h(ξ) = h_f - ς_f (x - L_t) , L_t = c t_m , L_f = u_f t_m . This profile is located in the segment x ∈ [L_t - L_f, L_t]. The solution exists if u_j ≤ u_f. We underline that in the region between the stationary and travelling wave parts the flow is, in general, unsteady. Thus, the indicated boundaries L_t - L_f and L_t are only approximations to reality. Two such exact solutions, for two different values of the parameter μ = 0.05 (left panel) and μ = 0.654 (right panel), are depicted in Figure <ref>.
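The algorithm above is straightforward to implement. The following sketch is our own illustration (the inflow ℱ_0, split ratio μ, buoyancy b_0 and slope are arbitrary example values, not taken from the paper's tables); it computes the steady-jet and travelling-wave parameters in the order just described.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative inputs:
sigma, delta, c_w = 0.15, 0.0, 0.004
alpha = np.tan(np.deg2rad(32.0))        # slope
F0, mu, b0 = 1.0, 0.654, 1.0            # inflow flux, split ratio, buoyancy
beta = 2 * alpha * np.sqrt(1 + delta) / sigma
Fr_b = np.sqrt(alpha / c_w)

# --- Steady jet: root z* > 1 of P1(z) = 4 z^3 - (3 + beta^2) z - 1 ---
Fj, Fb = mu * F0, (1 - mu) * F0
Uj = Fj ** (1 / 3)
z_star = brentq(lambda z: 4 * z**3 - (3 + beta**2) * z - 1, 1.0, 1e6)
a = z_star ** (1 / 3)
u_j, m_j = a * Uj, Uj**2 / a
q_j = np.sqrt((u_j**2 - m_j) / (1 + delta))
slope_j = sigma * q_j / u_j             # jet slope ς_j

# --- Travelling wave: u* in (1/2, 1) from P2, then C* from P3 ---
P2 = lambda u: beta**2 * (1 - u)**4 - (2*u - 1) * (3*u - 1)**2
u_star = brentq(P2, 0.5 + 1e-12, 1 - 1e-12)
Ub = Fb ** (1 / 3)
P3 = lambda C: (1 - u_star)**3 * C**3 + Fr_b**(-2.0/3.0) * C - 1
C_star = brentq(P3, 0.0, 1e3)

c = C_star * Ub
u_f = u_star * c
m_b = Fr_b**(-2.0/3.0) * Fb**(2.0/3.0)
w_b = Fr_b**(2.0/3.0) * Ub
h_f = (w_b - c) * m_b / ((c - u_f) * b0)
q_f = c * np.sqrt((2*u_star - 1) / (1 + delta))
slope_f = sigma * q_f / (c - u_f)       # front slope ς_f
print(f"u_j={u_j:.3f}, u_f={u_f:.3f}, c={c:.3f}, h_f={h_f:.3f}")
```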
Figure <ref> shows that at least two different configurations can be realized in practice: * Two-wave (left panel (a)): the main portion of the heavy fluid enters the boundary layer (μ = 0.05). * One-wave (right panel (b)): the flow in the buoyant jet reaches the head front (μ = 0.654). In experiments, in order to highlight the processes happening in the flow, the heavy fluid of density ρ_0 is generally coloured. Other experimental visualization techniques are also available, but dyeing the fluid is the most widely used one. By ρ_a we denote the density of the light ambient fluid. Under such experimental conditions, the visible interface between the two fluids is located where the following condition is satisfied: ρ_v - ρ_a ≥ ϖ (ρ_0 - ρ_a) , where ρ_v is the visible density and ϖ is a constant whose approximate value lies in the interval [0.01, 0.1]. For the sake of illustration, we depict these visible interfaces in Figure <ref> with solid lines, while the exact analytical solutions are shown with dashed lines. In preparing the visible interface we assumed that the fluid density varies linearly inside the turbulent layer from ρ_a to ρ_0. In other words, the virtual visible interface y = y_v is determined by the relation ρ_v - ρ_a = (ρ_0 - ρ_a)(1 - y_v/h) = ϖ (ρ_0 - ρ_a) . For convenience, the last condition can be equivalently reformulated in terms of the buoyancy variable: y_v(x) = h(x)(1 - ϖ b_0/b(x)) if b(x) ≥ ϖ b_0, and y_v(x) = 0 otherwise. § DERIVATION OF THE SPEED-FROUDE RELATION From the “boundary” conditions (<ref>), (<ref>) we deduce the following system of two equations with respect to the unknowns u and c: (u - c) u + ½ m = 0 , (c - u)^2 = m . By solving this system we obtain the following solution (u ≠ c): u = ⅓ c , m = (4/9) c^2 . Furthermore, by definition we have Fr_b = w_b/√(m_b) . Thus, U_b^3 = b_0 ζ_b w_b = m_b w_b = Fr_b m_b^{3/2} . Hence, m_b^{1/2} = Fr_b^{-1/3} U_b . Taking into account these results, the “boundary” condition (<ref>) becomes: m (c - u) + U_b^3 (c/w_b - 1) = 0 . Or, equivalently, m (c - u) U_b^{-3} + (c Fr_b^{-1} m_b^{-1/2} - 1) = 0 . By rearranging the terms in the last equation we obtain: (8/27)(c^3/U_b^3) + Fr_b^{-2/3} (c/U_b) - 1 = 0 . By introducing the dimensionless velocity C ≐ c/U_b we obtain the required Equation (<ref>). § NOMENCLATURE In the main text above we used the following notations (this list is not exhaustive):
≡ equal identically
∝ proportional to
≐ the left-hand side is defined by the right-hand side
≲ smaller than the approximate upper bound
≳ greater than the approximate lower bound (we underline that this bound is given approximately)
ϕ angle of the bottom slope
α local bottom slope, α = tan ϕ
h_j total depth of the layer j
ρ_j fluid density in the layer j
ζ total depth of the bottom layer
b_j buoyancy of the layer j
u_j depth-averaged horizontal velocity of fluid particles in the layer j
v vertical velocity component
w horizontal velocity component in the bottom layer
w transversal velocity component (in 3D)
p fluid pressure
m_j ≐ b_j h_j “mass” contained in the fluid column
ℱ_j ≐ m_j u_j mass flux in the layer j
q depth-averaged turbulent mean square velocity in the mixing layer
ξ characteristic speed (eigenvalues of the Jacobian matrix of the hyperbolic fluxes)
λ_{1,2} eigenvalues in the stability studies
c dimensional velocity of the travelling wave
C dimensionless wave velocity
Fr the dimensionless Froude number
ℓ, L length scales
t time variable
x “horizontal” coordinate along the bottom slope
y “vertical” coordinate normal to the bottom
d(x) the bathymetry (depth) function
g gravity acceleration
g̃ reduced gravity acceleration
χ entrainment rate
u_∗ friction velocity at the solid bottom
[email protected]
Department of Physics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada
Department of Physics, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
[email protected]
Department of Physics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada

When is keeping a memory of observations worthwhile? We use hidden Markov models to look at phase transitions that emerge when comparing state estimates in systems with discrete states and noisy observations. We infer the underlying state of the hidden Markov models from the observations in two ways: through naive observations, which take into account only the current observation, and through Bayesian filtering, which takes the history of observations into account. Defining a discord order parameter to distinguish between the different state estimates, we explore hidden Markov models with various numbers of states and symbols and varying transition-matrix symmetry. All behave similarly. We calculate analytically the critical point where keeping a memory of observations starts to pay off. A mapping between hidden Markov models and Ising models gives added insight into the associated phase transitions.

When memory pays: Discord in hidden Markov models
John Bechhoefer
December 30, 2023

§ INTRODUCTION Problems requiring statistical inference <cit.> are all around us, in fields as varied as neuroscience <cit.>, signal processing <cit.>, and artificial intelligence (machine learning) <cit.>. A common problem is state estimation, where the goal is to learn the underlying state of a dynamical system from noisy observations <cit.>. In most cases, the ability to infer states improves smoothly as the signal-to-noise ratio of observations is varied. However, there can also be phase transitions in the ability to infer the most likely value of a state, as the signal-to-noise ratio of observations is varied <cit.>. Formally, phase transitions in inference can occur because problems of inference and statistical physics share common features, such as the existence of a free-energy-like function and the requirement or desire that this function be minimized. Yet the extent to which these elements lead to common outcomes, such as phase transitions, is not yet clear. In this paper, we investigate the generality of these links in the context of a specific setting: the comparison of state estimates based on current observations with those based on both current and past observations. A simple setting for exploring such problems is given by hidden Markov models (HMMs). They are widely used, from speech recognition <cit.>, to economics <cit.>, and biology <cit.>. HMMs describe the evolution of a Markovian variable and the emission of correlated, noisy symbols. Taking the current emitted symbol at face value gives us a naive state estimate. However, in these correlated systems there is additional information in the history of emitted symbols, which we can use to find a more refined state estimate. Comparing the state estimates then reveals in which cases the additional information from keeping a memory of observations makes a difference. When the observed symbols as a function of time are Markovian, such as HMMs with no noise, there is no advantage to retaining past information.
However, for more general systems, the situation is not clear. Intuitively, if the noise is low (and the entire state vector is observed), then there should be no advantage. But if the noise is high, then averaging over many observations may help, as long as the system does not change state in the meantime. The surprise is that the transition from a situation where there is no advantage to keeping a memory to one where there is can have the character of a phase transition. Such transitions have been observed in the specific case of two-state, two-symbol HMMs <cit.>. In this article, we ask how general this behavior is in HMMs: Do we observe these phase transitions[These transitions have the character of a phase transition in the sense that there is a free-energy-like function that is non-analytic at certain points. They may not have all the characteristics of a thermodynamic phase transition.] in more complicated models? How sensitive is the behavior of the phase transitions to the details of the model? And can we understand their origin? In Section <ref>, we introduce the theoretical background of the systems we study. Then, in Sections <ref>–<ref>, we introduce and characterize phase transitions in various generalizations of HMMs. In the appendices, we detail the calculation of a phase transition in n-state, n-symbol HMMs, and in 2-state, 2-symbol models with broken symmetry. In an attempt to gain insight into the origins of the observed phase transitions, we also show how to map a two-state, two-symbol HMM onto an Ising model. § STATE ESTIMATION IN HMMS HMMs can be fully described by two probability matrices and an initial state. The evolution of the hidden state x_t is governed by an n-state Markov chain, described by an n × n transition matrix 𝐀 with elements A_ij = P(x_t+1=i|x_t=j). The observation of an emitted symbol y_t is described by an m × n observation matrix 𝐁, with elements B_ij = P(y_t=i|x_t=j). The matrix dimensions m and n refer to, respectively, the number of symbols and the number of states. A graphical representation of the dependence structure is shown in Fig. <ref>. The observations depend only on the current state of the system. Note that the observations as a function of time, described by P(y_t+1|y_t), generally do not have Markovian dynamics. We will refer to an n-state, m-symbol HMM as an n × m HMM. We assume that we have perfect knowledge of our model parameters, and we will focus on comparing state-estimation methods that do or do not keep a memory. In particular, we will compare the naive observation y_t of the HMM to the state estimate x̂_t^ f found through Bayesian filtering. The Bayesian filtering equations recursively calculate the probability for the system to be in a state x_t given the whole history of observations y^t <cit.>, where y^t = { y_1, y_2, …, y_t } is used as a shorthand for all past and current information. The probability is calculated in two steps: the prediction step P(x_t+1|y^t), and the update step P(x_t+1|y^t+1).
The steps can be worked out using marginalization, the definition of conditional probability, the Markov property, and Bayes' theorem.The transition matrix and the previous filter estimate are needed to predict the next state, and the observation matrix together with the prediction are needed to update the probability.Together, they give the Bayesian filtering equations <cit.>, P(x_t+1|y^t) = ∑_x_t P(x_t+1|x_t) P(x_t|y^t) P(x_t+1|y^t+1) = 1/Z_t+1 P(y_t+1|x_t+1) P(x_t+1|y^t), with the normalization factorZ_t+1 = P(y_t+1 | y^t) = ∑_x_t+1 P(y_t+1 | x_t+1) P(x_t+1|y^t). The Bayesian formulation results in a probability density function for the state x_t.When the observations y_t are noisy, we cannot be completely sure that our observations and state estimates are correct.Long sequences of the same observation increase our belief that system is indeed in the observed state, according to Bayesian filtering.However, even after an infinitely long sequence of the same observation, there is always a chance that the system actually transitioned into another state during the last time step and that we are therefore observing an “incorrect" symbol (the symbol does not match the state):The probability to be in state x_t = i given the history of observations y^t is bounded by a maximum confidence level p^*_i that depends on the model's parameters and is defined as the probability to be in a given state after a long sequence of the same observation:p^*_i = lim_t →∞ P(x_t = i | y^t = i^t) ,where by y^t = i^t we mean { y_1 = i, y_2 = i, …, y_t = i }. It is important that the sequence of identical observations is long enough that making an additional identical observation does not change the probability. In Fig. <ref> a fragment of the evolution of a HMM, the underlying (unknown) state and observed symbols, is shown together with the Bayesian filtering probability calculated over the time series. For long sequences of identical observations, we see that the filtering probability levels off. Generally, each state will have a different maximum confidence level. We will return to the maximum confidence level in later calculations and discussions. Many applications, such as feedback control, depend on single-value estimates x̂_t rather than on probability distributions.Although statistics such as the mean and median are reasonable candidates for the“typical" value of a distribution (minimizing mean-square and absolute errors <cit.>), it is more convenient here to use the mode, or maximum, which is termed, in this context, the state estimate.For state estimates based on all past and current information, we define the filter estimatex̂_t^ f≡*arg max_x_t ( P(x_t|y^t) ). For the HMM shown in Fig. <ref>, Eq. (<ref>) implies that whenever the filter probability is above 0.5, the filter estimates the system to be in state 1; similarly, it is in state -1 when the probability < 0.5.Analogously, we define the naive state estimate to be based entirely on the current observation, with no use made of past observations, x̂_t^ o≡*arg max_x_t ( P(x_t|y_t) ). 
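The filtering recursion of Eqs. (<ref>)–(<ref>) takes only a few lines of code. The sketch below is our own illustration, with the array conventions A[i, j] = P(x_{t+1} = i | x_t = j) and B[i, j] = P(y_t = i | x_t = j) matching the matrices defined above.

```python
import numpy as np

def bayes_filter(y_seq, A, B, p0):
    """Recursive Bayesian filter for an HMM.

    A[i, j] = P(x_{t+1}=i | x_t=j), B[i, j] = P(y_t=i | x_t=j),
    p0 = prior over states.  Returns the posteriors P(x_t | y^t).
    """
    p, posts = p0.copy(), []
    for y in y_seq:
        p = A @ p          # prediction step: P(x_{t+1} | y^t)
        p = B[y] * p       # update step: multiply by P(y_{t+1} | x_{t+1})
        p /= p.sum()       # normalization Z_{t+1} = P(y_{t+1} | y^t)
        posts.append(p.copy())
    return np.array(posts)

# State estimates: the filter estimate is the mode (arg max) of the
# posterior; for n = m with matched labels, the naive estimate is y_t itself:
#   x_filter = posts.argmax(axis=1)
#   x_naive  = y_seq
```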
In the special case where there is a one-to-one correspondence between symbols y and elements of the internal state x, the quantity x̂_t^ o reduces to y_t, the symbol emitted at time t.More generally, the number of internal state components may be smaller than the number of observations, making the interpretation of the estimates more subtle.As we will see, the combination of defining a probability distribution for the state variable and then selecting its maximum leads to the possibility of phase transitions.When one uses other ways to characterize the state than the mode, e.g. the mean, one may not find the analytical discontinuities that we study.However, the arg max captures an interesting complexity of the probability density function that would be lost in taking the mean.This is illustrated in Fig. <ref>, where there is a transition in the arg max, at (b).By contrast, taking the mean of the distribution ignores the bimodal nature of the distributions and shows no transition.This argument also applies to observables, such as work, that are functions of filter estimates. To know when keeping a history of observations pays off, we need to determine under what conditions the two state estimates will differ.We quantify how similar two sequences of state estimates by defining a discord order parameter, D = 1 - 1/N∑_t=1^N d( x̂_t^ o, x̂_t^ f ), where the function d depends on the naive and filter state estimates: d( x̂_t^ o, x̂_t^ f ) = 1, x̂_t^ o = x̂_t^ f -1, x̂_t^ o≠x̂_t^ f . The discord parameter is zero when the state estimates agree at all times. In such a case, there is no value in keeping a memory of observations: the extra information contained in the past observations has not changed the best estimate from that calculated using only the present observation. Similarly, when D=2 the state estimates disagree at all times, the state estimates are perfectly anti-correlated. At intermediate values of D, keeping a memory can be beneficial. An HMM with a non-zero discord is illustrated in Fig. <ref>:an arrow indicates a point where the state estimate differs from the estimate based on the current observation.We are interested in the transition from zero to non-zero discord, where the state estimates start to differ, and where keeping a history of observations starts to pay off.The lowest observation probability that leads to a non-zero discord is the critical observation probability.We have just seen that after a long sequence of identical observations the probability to be in some state x_t reaches a maximum value. Thus, the first place where state estimates will differ is when a single discordant observation after a long string of identical observations does not change our belief of the state of the system (i.e., where the filter estimate no longer follows the naive estimate exactly).Mathematically, the threshold where the discord goes from zero to being non-zero for an n × n HMM is given by lim_t →∞ P(x_t+1 = i|y_t+1 = j, y^t = i^t)= lim_t →∞ P(x_t+1 = j|y_t+1 = j, y^t = i^t). for all states i,j ∈{ 1, 2, …, n },andj ≠ i.From Ref. <cit.>, the transition threshold for a symmetric 2 × 2 HMM with transition probability a and error rate b is b_ c = 12( 1-√(1-4a)) ( a ≤14),and b_ c= 1/2 for larger a values. In Sec. <ref>, we generalize this result by dropping the symmetry requirement. As found in <cit.>, the transitions are sometimes discontinuous and sometimes just have a discontinuity in their derivative. 
As far as we know, the distinction has not been explored.So far, we have only considered the extreme cases of no memory and infinitely long memory. What about a finite memory?In Fig. <ref>, we see that the filter reacts to new observations with a characteristic timescale.Indeed, since the filter dynamics for a system with n internal states is itself a dynamical system with n-1 states (minus one because of probability normalization), we expect filters to have n-1 time scales.This statement holds no matter how big or small the memory of the filter.As a numerical exploration confirms, there is geometric (exponential) relaxation with time scales that are easy to evaluate numerically, if difficult algebraically.Thus, an “infinite" filter memory need only be somewhat longer than the slowest time scale, and “no memory" need only be faster than the fastest time.A brief study of filters with intermediate time scales suggests that their behavior typically interpolates between the two limits. § SYMMETRY BREAKING IN TWO-STATE, TWO-SYMBOL HMMS In 2 × 2 HMMs where the symmetry in the transition and observation probabilities is broken, the probability matrices each have two independent parameters. We parametrize the transition-matrix probabilities as𝐀 = [ 1-a̅ + 1/2Δ a a̅ + 1/2Δ a; a̅ - 1/2Δ a 1-a̅ - 1/2Δ a ] , which depends on the mean transition probability a̅ = 1/2(A_21 + A_12) and the difference in transition probabilities Δ a = A_21 - A_12.When the difference between the transition probabilities is zero (Δ a = 0), the transition matrix is symmetric.The observation matrix is parameterized similarly, with a̅→b̅ and Δ a →Δ b.The matrix 𝐁 depends on the mean observation probability b̅ = 1/2(B_21 + B_12) and the difference in observation probabilities Δ b = B_21 - B_12.All matrix elements must be in the range [0,1] to ensure proper normalization.We restrict the off-diagonal elements (probability to transition to a different state or probability to make a wrong observation) to be < 0.5, to preclude anticorrelations.We use the set {1,-1} to label both states and the corresponding symbols for 2 × 2 HMMs.The discord parameter is calculated by generating a realization of an HMM using the transition and observation matrices, following Eq. (<ref>) and averaging over the entire chain.A plot of the discord for a 2 × 2 HMM with asymmetric transition probabilities is shown in Fig. <ref>. All points shown are averaged over 30,000 time steps. The expression defining the critical observation probability in Eq. (<ref>) simplifies greatly for 2 × 2 HMMs: lim_t →∞ P(x_t+1 = 1|y_t+1 = -1, y^t = 1^t) = 0.5. We write this in terms of the model's parameters and solve for the critical observation probability b̅ = b̅_ c. This corresponds to the lowest points b̅ in Fig. <ref> that are non-zero for a given a̅. The complete analytical calculations for the critical observation probability can be found in App. <ref>. Figure <ref> shows the analytical and simulated critical observation probabilities as a function of the mean transition probability a̅, for a system with symmetric observation probabilities and several different transition asymmetries. The curve labeled Δ a = 0.01 corresponds to the transitions in Fig. <ref>. The solutions agree with simulations, which are shown as circles in the same diagram. The discord becomes non-zero at lower mean transition probabilities for larger asymmetries. The phase transitions differ in location from those of the symmetric 2 × 2 HMMs, but they still exist. 
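The threshold can also be located numerically, directly from its definition, without the closed-form algebra of App. <ref>: iterate the filter update under a constant observation until the maximum-confidence fixed point p* is reached, then test whether a single discordant observation flips the arg max. The sketch below is ours; for brevity it is written for the symmetric case (Δa = Δb = 0), where the result can be compared with Eq. (<ref>), but the asymmetric matrices parametrized above can be substituted directly.

```python
import numpy as np

def flips_on_discord(a, b, tol=1e-12):
    """True if one discordant symbol still flips the filter estimate."""
    A = np.array([[1-a, a], [a, 1-a]])
    B = np.array([[1-b, b], [b, 1-b]])
    p = np.array([0.5, 0.5])
    while True:                          # fixed point p* under y = 0 forever
        p_new = B[0] * (A @ p)
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            break
        p = p_new
    q = B[1] * (A @ p)                   # one discordant observation y = 1
    q /= q.sum()
    return q[1] > 0.5                    # does the estimate follow it?

# Bisect for the critical error rate b_c at fixed transition probability a:
a, lo, hi = 0.1, 1e-6, 0.5
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if flips_on_discord(a, mid) else (lo, mid)
print("b_c ~", hi, "  analytic:", 0.5 * (1 - np.sqrt(1 - 4*a)))
```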
We find similar results in systems with symmetric transition matrices and asymmetric observation matrices, and in systems with both asymmetric transition and observation matrices <cit.>. Another approach to understanding these results is offered in App. <ref>, where we show that we can map 2 × 2 HMMs onto one-dimensional Ising models with disordered fields and zero-temperature phase transitions.In Fig. <ref>, we observe some additional jumps and kinks at error rates ( b̅ > b̅_ c) that can be interpreted as “higher-order transitions” in the discord.We have labeled two of such transitions by b̅_1 and b̅_2 in Fig. <ref>.The first of these is due to the asymmetry of this HMM.The threshold b̅_ c results from the observation sequence y^t = {1, … , 1, -1 }, whereas the slightly higher b̅_1 results from the sequence y^t = {-1, … , -1, 1 }, which gives a condition that is different when Δ a ≠ 0.The second of these transitions, b̅_2, marks the threshold where two discordant observations are needed to change the filter state estimate.That is, the observation sequence is y^t = {1, …, 1, -1, -1 }.For still higher values of b̅, there will be transitions where one needs more than two sequential discordant observations to alter the filter value.Further transitions can occur for finite arbitrary sequences, too.Higher-order transitions, however, are increasingly weak and harder to detect numerically. § MORE STATES AND SYMBOLS We have seen that phase transitions in the discord order parameter occur in both symmetric and asymmetric time-homogeneous 2 × 2 HMMs.In this section, we will study systems with more states and more symbols.To keep the number of parameters manageable, we will consider symmetric HMMs, and we will consider only two classes of states:an observation is either correct and the symbol is the “same” as the underlying state, or an observation is incorrect and the system emits an “other” symbol.We consider a straightforward generalization of symmetric 2 × 2 HMMs to symmetric n × n HMM, and a model that describes a particle diffusing on a lattice with constant background noise.We investigate whether the transitions that we have encountered so far exist in these systems, too.§.§ Symmetric n HMMs Let us now label the states and symbols 1, 2, …, n.We first consider a system with transition matrix 𝐀 = [ 1-a a/n-1 … a/n-1; a/n-1 1-a ⋱ ⋮; ⋮ ⋱ ⋱ a/n-1; a/n-1 … a/n-1 1-a ] , and an observation matrix 𝐁, which has the same form except that a → b.This system depends on only two parameters for a given number of states n: the transition probability, a, and the observation error probability, b. This transition matrix describes a system that has a probability 1-a to stay in the same state and equal probabilities to transition to any other state, a/n-1.The observation matrix describes a measurement with uniform background noise; there is a certain probability of observing the correct symbol 1-b and equal probabilities of observing any other symbol, b/n-1.Just as before, we calculate the discord parameter for these systems and study the transition to non-zero discord by finding the critical observation probability. The problem simplifies from the case discussed above, where 𝐀 is a general transition matrix. In App. <ref>, we write it out explicitly, and find two solutions: b_ c^(1) = 1/2(n-1)( (n-1) + (n-2) a - .. √((n-2)^2 a^2 - 2 n (n-1) a + (n-1)^2)), b_ c^(2) = n-1/n . For n=2, Eq. (<ref>) reduces to Eq. (<ref>). The threshold values of b are plotted in Fig. 
<ref> for various numbers of states and symbols n.The branches of the solutions that are increasing with increasing a are given by b_ c^(1), and the constant branches are given by b_ c^(2). The analytical and simulated values agree quite well, especially for smaller n.The n = 10 curve deviates from the simulations slightly at higher a.The area under the curves indicates the parameter regime where D=0, where the state estimates with and without memory agree.Above the critical error probability, the two state estimates differ.There are no discontinuities as b_ c→n-1/n; the curves are simply very steep.§.§ Diffusing particle We now consider an HMM that describes a particle diffusing on a lattice with constant background noise and periodic boundary conditions. The symmetric n × n transition matrix is given by 𝐀 = [ 1-a a/2 0 … a/2; a/2 1-a a/2 ⋱ 0; 0 a/2 1-a ⋱ ⋮; ⋮ ⋱ ⋱ ⋱ a/2; a/2 0 … a/2 1-a ] . The observation matrix is the same as the one in the previous section. Physically, the particle stays in the same place with probability 1-a or it diffuses one site to the left or right with probability a/2.The discord parameter as a function of the observation probability is plotted in Fig. <ref>. For visualization purposes, both the discord and the observation probability are scaled by a factor of n/(2n-2), where n is the number of lattice sites. The scaling is such that (n/(2n-2))D = 1 at (n/(2n-2))b = 0.5 for any integer n > 1.The transition to non-zero discord is smooth in this case; however, at higher b and D, a non-analytic jump is seen. § MORE SYMBOLS THAN STATES Finally, we consider an HMM with more symbols than states.In particular, consider an HMM with only two states, 1 and -1, and an even number of symbols n.We will also consider the n →∞ limit.The transitions and errors are once again taken to be symmetric, with 𝐀 = ( [ 1-a a; a 1-a ]), but the observation errors are now determined by a Gaussian distribution around the states.In particular, the elements of the observation matrix are determined by integrals over the Gaussian distribution of the desired state.For state 1, the observation probability for symbol i is b_i1 = P(y_t = i|x_t = 1) = 1/√(2 π)σ ∫_ℓ_i-1^ℓ_idxexp( -(x - 1)^2/2 σ^2) = 1/2[ ( ℓ_i - 1/√(2)σ) - ( ℓ_i-1 - 1/√(2)σ) ]. The boundaries ℓ_i of the integral are determined such that the probability of observing each symbol is equal when considering the sum of Gaussian distributions around each state. i/n = 1/√(8 π)σ ∫_- ∞^ℓ_idxexp( -(x - 1)^2/2 σ^2) + exp( -(x + 1)^2/2 σ^2)= 1/4[ 2 + ( ℓ_i - 1/√(2)σ) + ( ℓ_i + 1/√(2)σ) ]. The symmetry of the problem reduces the number of equations we need to solve: We know that ℓ_0 = -∞, ℓ_n = ∞, ℓ_n/2 = 0, and ℓ_(n/2)-j = - ℓ_(n/2)+j for integers j between 1 and n/2-1 (all for even n).Also, symmetry dictates that the probability of observing a symbol i given the state is -1, b_i(-1), equals b_(n-i+1)1.For n →∞ (infinitely many symbols), we use the probability density function directly rather than integrating over an interval.In systems with a finite number of symbols, we observe non-analytic behavior as the discord becomes non-zero.These phase transitions move to lower observation probabilities for a larger number of symbols.In systems with an infinite number of symbols, the discontinuities are not present.To confirm this observation, we study the critical error probability as a function of the number of symbols; see Fig. <ref>.The critical error probability is shown for a transition probability a = 0.30 as a function of 1/n. 
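(As an aside, the observation-matrix construction above is straightforward to reproduce numerically: solve Eq. (<ref>) for each boundary ℓ_i by root finding, then evaluate Eq. (<ref>) with the error function. The sketch below is ours, for even n; the function names are illustrative.)

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

def gaussian_obs_matrix(n, sigma):
    """B[i, j] for two states at x = +-1 and n equiprobable symbols."""
    F = lambda l: 0.25 * (2 + erf((l - 1) / (np.sqrt(2) * sigma))
                            + erf((l + 1) / (np.sqrt(2) * sigma)))
    # Interior boundaries l_1 .. l_{n-1} solve F(l_i) = i/n:
    ell = np.concatenate((
        [-np.inf],
        [brentq(lambda l, i=i: F(l) - i / n, -50.0, 50.0)
         for i in range(1, n)],
        [np.inf]))
    # Column for state +1; the state -1 column follows from the symmetry
    # b_{i,(-1)} = b_{(n-i+1),1}, i.e. it is the +1 column reversed:
    b_plus = 0.5 * (erf((ell[1:] - 1) / (np.sqrt(2) * sigma))
                    - erf((ell[:-1] - 1) / (np.sqrt(2) * sigma)))
    return np.column_stack((b_plus, b_plus[::-1]))

B = gaussian_obs_matrix(6, sigma=1.0)
print(B.sum(axis=0))   # each column sums to 1
```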
The behavior is similar for other transition probabilities and suggests that the critical observation probability goes to zero asymptotically.The (discontinuous) transitions disappear only in the limit of infinitely many symbols.In the n × n symmetric HMMs, we saw similar behavior: The critical error probability decreases as a function of the number of symbols (and states).However, if we look at the limit of n →∞ of the critical error probability of these HMMs (Fig. <ref>), the critical error probability does not go to zero. § CONCLUSION In this paper, we have investigated when keeping a memory of observations pays off in hidden Markov models.We used HMMs to look at a relatively simple system with discrete states and noisy observations.We inferred the underlying state of the HMM from the observations in two ways: through naive observations, and a state estimate found through Bayesian filtering (and decision making). We then compared the state estimates by calculating the discord, D, between the two.We were particularly interested in investigating a phase transition at the point where D becomes non-zero.Such transitions have been observed in symmetric 2 × 2 HMMs; here, we have seen that such behavior applies to more general models.We looked at asymmetric 2 × 2 HMMs, some symmetric n × n HMMs, and symmetric 2 × n HMMs.The general features of the discord stayed the same in all these systems: it starts at D=0 for b=0; it becomes non-zero at some critical error probability; and it increases for increasing error probability.In all these systems, we found a non-analytic behavior in the discord as a function of observation error probability (phase transition), except in the 2 × n case in the limit of infinitely many symbols, n →∞.Throughout this paper, we have defined the usefulness of memory in a rather narrow way:we ask when inferences using a memory are different or better than those that do not.But memory can have many more uses.In thermodynamics, Maxwell's demon and Szilard's engine showed that information that is acquired can be converted to work <cit.>.Bauer et al. analyzed a periodic, two-state Maxwell demon with noisy state measurements and showed that there are transitions very much analogous to the ones considered here between phases where measurements are judged reliable, or not <cit.>.When reliable, there is no advantage to keeping a memory.In biology, one can consider cells in noisy environments and ask whether keeping a memory of observations of this environment is worthwhile.For example, Sivak and Thomson showed, in a simple model, that for very low and very high ratios of signal to noise in the environment, memoryless algorithms lead to optimal regulatory strategies <cit.>.However, for intermediate levels of noise, strategies that retain a memory perform better.In contrast to the situations considered in this paper, they found no evidence of any phase transition.There were smooth crossovers between regimes. In another setting, Rivoire and Leibler have explored how information retention by populations of organisms can improve the ability of the population to adapt to fluctuating environment <cit.>.Again, in this setting, no phase transitions were encountered. Also, Hartich et al. showed that the performance of a sensor, characterized by its “sensory capacity,” increases with the addition of a memory but report nophase transition <cit.>. 
Thus, in this paper, we have explored a class of models where phase transitions occur generically as a function of signal-to-noise ratios.Yet, in many other applications, such transitions are not observed.Clearly, a better understanding is needed to clarify which settings will show phase transitions and which ones continuous crossovers between different regimes.§ CRITICAL ERROR PROBABILITY OF ASYMMETRIC 2 HMMSIn Eq. (<ref>), we defined the critical error probability b̅_ c as the lowest b̅ for given a̅, Δ a, and Δ b that results in a non-zero discord parameter.In this appendix, we calculate this threshold analytically.We start by writing out the left-hand-side of Eq. (<ref>) explicitly: lim_t →∞ P(x_t+1 = 1|y_t+1 = -1, y^t = 1^t) = lim_t →∞ P(y_t+1 = -1|x_t+1 = 1) ∑_x_t P(x_t+1 = 1|x_t) P(x_t|y^t = 1^t)/∑_x_t+1 P(y_t+1 = -1|x_t+1)∑_x_t P(x_t+1|x_t)P(x_t|y^t = 1^t) . We recognize several terms as part of the transition and observation matrices. The term P(x_t|y^t=1^t) relates to the maximum confidence level.We need to find the maximum confidence level in terms of the observation and transition probabilities. p_1^*= lim_t →∞ P(x_t = 1|y^t = 1^t) = lim_t →∞ P(y_t = 1|x_t = 1) ∑_x_t-1 P(x_t = 1|x_t-1) P(x_t-1|y^t-1 = 1^t-1) /∑_x_t P(y_t = 1|x_t) ∑_x_t-1 P(x_t|x_t-1) P(x_t-1|y^t-1 = 1^t-1) . We use normalization to write lim_t →∞ P(x_t-1 = -1|y^t-1 = 1^t-1) = 1 - p^*_1, and we have an expression only in terms of the maximum confidence level, transition probability and the observation probability. We then solve for p^*_1: p_1^*= 1/4(2a̅ - 1)(2b̅ - 1)( 2 + a̅(8b̅ - 2(3 + Δ b)) + 2b̅(Δ a - 2) - Δ a + X ),withX= √(4a̅^2 (1 + Δ b)^2 - 4a̅ (2b̅ - 1)(4b̅ - 2 - Δ a (1 + Δ b)) + (2b̅ - 1) ( 2b̅(4 + Δ a^2) - Δ a (4 + Δ a + 4 Δ b) - 4 ) ). Now we plug this expression, together with the transition and observation probabilities (Eq. (<ref>)) into Eq. (<ref>):(Δ b - 2b̅) ( (2b̅ - 1)(2 + Δ a) + 2a̅ (1 + Δ b)- X )/2(2b̅ -1) ( Δ a(1 - 2b̅) + 2 (Δ b - 1) -2a̅(1 + Δ b) + X ) = 1/2. Lastly, we solve for b̅ = b̅_ c. We find three solutions, of which only two lie in our region of interest, 0 ≤a̅,b̅≤ 0.5. Since the resulting expressions are complicated, we show the full solution only for the special case where Δ b = 0: b̅_ c^(1) = 1/6( 3 - Δ a + 3 - 12 a̅ + Δ a^2/ Y+ Y ) ,b̅_ c^(2) = 1/384( -64(Δ a - 3) - 32 ( 1 + i √(3)) (3 - 12 a̅ + Δ a^2)/ Y+32 i ( i + √(3)) Y ) , Y= ( 18 (a̅-1) Δ a - Δ a^3 + 3 √(3)√((11-4 a̅ (a̅+4)) Δ a^2 + (4 a̅-1)^3 + Δ a^4))^1/3. These solutions are plotted in Fig. <ref>.Note that the expression for b̅_ c^(2) is real for relevant branches.That is, for some values of a̅ and Δ a the expression is complex; however, all the branches we plot have a zero imaginary part. When we set Δ a = 0, b_c^(1) reduces to 1/2( 1 - √(1 - 4a̅)) for a̅≤ 1/4, and 1/2 for a̅≥ 1/4. These are the familiar solutions for symmetric 2 × 2 HMMs as found in <cit.> and App. 
§ MAPPING TO ISING MODELS

One can map a symmetric 2 × 2 HMM onto a one-dimensional random-field Ising model <cit.>. Here, we generalize this mapping so that it applies to a general (asymmetric) 2 × 2 HMM. We start by defining a mapping from the transition and observation probabilities to the spin-spin and spin-field coupling constants:

P(x_{t+1}|x_t) = exp( J(x_t) x_{t+1} x_t ) / ( 2 cosh(J(x_t)) ),
J(x_t) = J_+ = (1/2) log( (1 - a̅ + Δa/2) / (a̅ - Δa/2) ), if x_t = 1;
J(x_t) = J_- = (1/2) log( (1 - a̅ - Δa/2) / (a̅ + Δa/2) ), if x_t = -1;

P(y_t|x_t) = exp( h(x_t) y_t x_t ) / ( 2 cosh(h(x_t)) ),
h(x_t) = h_+ = (1/2) log( (1 - b̅ + Δb/2) / (b̅ - Δb/2) ), if x_t = 1;
h(x_t) = h_- = (1/2) log( (1 - b̅ - Δb/2) / (b̅ + Δb/2) ), if x_t = -1.

We define the Hamiltonian ℋ ≡ -log(P(x^N, y^N)), which, using the product rule of probability and the Markov property of the state dynamics, is

ℋ = -Σ_{t=1}^N log(P(y_t|x_t)) - Σ_{s=1}^{N-1} log(P(x_{s+1}|x_s))
  = -Σ_{t=1}^N [ h(x_t) y_t x_t - log(2 cosh(h(x_t))) ] - Σ_{s=1}^{N-1} [ J(x_s) x_{s+1} x_s - log(2 cosh(J(x_s))) ].

Next, we rewrite h(x_t) and J(x_t) in a convenient way: h(x_t) = h̅ + Δh x_t, with h̅ = (h_+ + h_-)/2 and Δh = (h_+ - h_-)/2, and the same for J(x_t) with h → J. When Δa is zero, we have ΔJ = 0 and J̅ = J, where J is the coupling constant found in the case of symmetric 2 × 2 HMMs <cit.>. The same happens with the h-terms when Δb = 0. The terms consisting of a logarithm of a hyperbolic cosine can also be rewritten as a mean value plus a deviation from that mean. The constant terms can be neglected, since they lead only to a shift in the energy. Similarly, terms that depend only on a single factor y_t can also be neglected. Higher-order terms that depend on a product of these factors still contribute. The full Hamiltonian is now given by

ℋ = -Σ_{t=1}^N [ h̅ y_t x_t - (1/2) x_t log( cosh(h̅ + Δh)/cosh(h̅ - Δh) ) ] - Σ_{s=1}^{N-1} [ J̅ x_{s+1} x_s + ΔJ x_{s+1} - (1/2) x_s log( cosh(J̅ + ΔJ)/cosh(J̅ - ΔJ) ) ].

For large N, we can neglect boundary terms. Then, rearranging the Hamiltonian so that one term represents the nearest-neighbor interactions and the others the local external fields, we find

ℋ = -Σ_t J̅ x_{t+1} x_t - [ h̅ y_t - (1/2) log( cosh(h̅ + Δh)/cosh(h̅ - Δh) ) - (1/2) log( cosh(J̅ + ΔJ)/cosh(J̅ - ΔJ) ) + ΔJ ] x_t
  = -Σ_t J̅ x_{t+1} x_t - h̅ y_t x_t + C(J̅, ΔJ, h̅, Δh) x_t.

The external field consists of a fluctuating term that depends on y_t and a constant term that depends on the transition and observation parameters. From Eq. (<ref>), it is clear that this Hamiltonian remains the Hamiltonian of the familiar Ising model. There is a constant spin-spin coupling term, the strength of which is determined by the transition probabilities a̅ and Δa. Then there is the fluctuating term of the local external fields; its magnitude is constant and determined by the observation probabilities, but its direction is assigned randomly through y_t. Finally, there is a constant term in the external fields that depends on both the transition and observation probabilities.
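The mapping from (a̅, Δa, b̅, Δb) to the couplings is easy to tabulate numerically. The following sketch (illustrative only) computes J̅, ΔJ, h̅, Δh and verifies that the Boltzmann form reproduces the transition probabilities:

```python
import numpy as np

def ising_couplings(abar, da, bbar, db):
    """Spin-spin and spin-field couplings for the (asymmetric)
    2x2 HMM -> 1d random-field Ising mapping defined above."""
    Jp = 0.5 * np.log((1 - abar + da / 2) / (abar - da / 2))
    Jm = 0.5 * np.log((1 - abar - da / 2) / (abar + da / 2))
    hp = 0.5 * np.log((1 - bbar + db / 2) / (bbar - db / 2))
    hm = 0.5 * np.log((1 - bbar - db / 2) / (bbar + db / 2))
    return {"Jbar": (Jp + Jm) / 2, "dJ": (Jp - Jm) / 2,
            "hbar": (hp + hm) / 2, "dh": (hp - hm) / 2}

# consistency check: the Boltzmann weight reproduces P(x'|x)
abar, da = 0.2, 0.1
Jp = 0.5 * np.log((1 - abar + da / 2) / (abar - da / 2))
stay = np.exp(Jp) / (2 * np.cosh(Jp))   # x = x' = +1
print(stay, 1 - (abar - da / 2))        # both print 0.85
```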
Above, we have seen that the filtering problem for an HMM can be mapped onto an Ising model. How useful is such a mapping? It does add intuitive language. The observations y_t play the role of a local spin at each site. From Eq. (<ref>), we see that a lower error rate (small b̅) corresponds to strong coupling between the local field and the local spin, which corresponds to the state x_t. When the noise is so strong that an observation says nothing about the underlying state (b̅ = 1/2), then the coupling h = 0. Likewise, deviations of a̅ from 1/2 determine the spin-spin coupling constant J. These results, however, were previously derived for symmetric 2 × 2 HMMs <cit.>. Here, we add the insight that generalizing to asymmetric dynamics (matrix 𝐀) or observation errors (matrix 𝐁) leads to the same qualitative scenario. The mapping remains a simple Ising model; only the coefficients are modified. It would be interesting to know whether such mappings work for more order parameters, where the corresponding spin problem is presumably a Potts model <cit.>.

Although the Ising mapping gives some qualitative insights, it has limitations. In a closely related problem, estimating the full path x^t of states on the basis of observations y^t, the desired filter estimate corresponds to the ground state of the corresponding Ising model <cit.>. Here, by contrast, the filter estimate corresponds, in Ising language, to estimating the most likely value of the last (edge) spin of a 1d chain, without caring about the spin of any other site (a strange quantity!). Thus, in mapping the filter state-estimation problem to an Ising chain, we transform a familiar question concerning a strange system into a strange question about a familiar system. Whether such a swap leads to analytical progress, beyond its value in forming a qualitative picture, is not at present clear.

§ CALCULATION OF THE CRITICAL OBSERVATION PROBABILITY FOR n × n HMMS

Here, we compute the critical observation probability for symmetric n × n HMMs analytically. We need only consider one state/symbol i and one j, thanks to the symmetry of the problem. For example, use i = 1 and j = 2 in Eq. (<ref>), which leads to

lim_{t→∞} P(x_{t+1}=1 | y_{t+1}=2, y^t=1^t) = lim_{t→∞} P(x_{t+1}=2 | y_{t+1}=2, y^t=1^t).

Similar to the calculation in App. <ref>, we start with the calculation of the maximum confidence level, p^*. Since all states of a symmetric n × n HMM are equivalent, the maximum confidence levels are all the same. We calculate the maximum confidence for an arbitrary state i:

p^* = lim_{t→∞} P(x_t=i | y^t=i^t) = lim_{t→∞} (1/Z_t) P(y_t=i|x_t=i) Σ_{x_{t-1}} P(x_t=i|x_{t-1}) P(x_{t-1}|y^{t-1}=i^{t-1}).

The first two terms in the numerator are known from the transition and observation matrices of the HMM. The last term is p^* if x_{t-1} = i. For x_{t-1} ≠ i, we can calculate it by demanding a normalized probability:

lim_{t→∞} Σ_{x_{t-1}} P(x_{t-1}|y^{t-1}=i^{t-1}) = 1
⟹ p^* + (n-1) lim_{t→∞} P(x_{t-1}=j|y^{t-1}=i^{t-1}) = 1
⟹ lim_{t→∞} P(x_{t-1}=j|y^{t-1}=i^{t-1}) = (1 - p^*)/(n - 1).

Plugging all of the terms into Eq. (<ref>) and solving for p^* in terms of the model parameters leaves us with two solutions. Restricting interest to the solution that takes on positive values for 0 ≤ a, b ≤ 1 and integer n > 1, we find

p^* = [ (a-1)(b-1)n² + a + (b-2)n + 1 + √( ( (n-1)(bn - n + 1) - a((b-1)n² + 1) )² - 4a(b-1)(n-1)(an - n + 1)(bn - n + 1) ) ] / [ 2(an - n + 1)(bn - n + 1) ].

With these preliminary expressions, we can calculate the critical error probability. From Eq. (<ref>), the left-hand side is

P(x_{t+1}=1|y_{t+1}=2, y^t=1^t) = P(y_{t+1}=2|x_{t+1}=1, y^t=1^t) P(x_{t+1}=1|y^t=1^t) / P(y_{t+1}=2|y^t=1^t) = (1/Z_{t+1}) P(y_{t+1}=2|x_{t+1}=1) P(x_{t+1}=1|y^t=1^t).
The right-hand side is expanded in the same way. Writing out the individual terms of the equation leads to

lim_{t→∞} P(x_{t+1}=1|y^t=1^t) = lim_{t→∞} Σ_{x_t} P(x_{t+1}=1|x_t) P(x_t|y^t=1^t) = (1-a) p^* + (n-1) · [a/(n-1)] · (1-p^*)/(n-1) = (1-a) p^* + a (1-p^*)/(n-1),

and

Z_{t+1} = P(y_{t+1}=2|y^t=1^t) = Σ_{x_{t+1}} P(y_{t+1}=2|x_{t+1}) P(x_{t+1}|y^t=1^t).

Plugging all these terms into Eq. (<ref>) and substituting p^* from Eq. (<ref>), we solve for the critical error probability b_c as a function of a and n and find Eq. (<ref>).
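The same critical error probability can be obtained without algebra by iterating the filter directly. The sketch below is an illustrative numerical check, using the symmetric n × n transition and observation matrices defined above; it bisects on the condition that a single observation y = 2, after a long run of y = 1, leaves states 1 and 2 equally likely.

```python
def b_crit_nxn(a, n, iters=60):
    """Critical observation error b for the symmetric n x n HMM,
    located numerically by bisection."""
    def gap(b):
        # fixed point p* = P(x = 1 | y = 1, 1, ...)
        p = 1.0 / n
        for _ in range(200_000):
            pred1 = (1 - a) * p + a * (1 - p) / (n - 1)
            post = (1 - b) * pred1 / ((1 - b) * pred1
                                      + (b / (n - 1)) * (1 - pred1))
            if abs(post - p) < 1e-12:
                break
            p = post
        # predicted probabilities of states 1 and 2, then observe y = 2
        pred1 = (1 - a) * p + a * (1 - p) / (n - 1)
        pred2 = (1 - pred1) / (n - 1)      # by symmetry of states != 1
        w1 = (b / (n - 1)) * pred1         # state 1, wrong symbol seen
        w2 = (1 - b) * pred2               # state 2, correct symbol seen
        return w1 - w2                     # crosses 0 at b = b_c
    lo, hi = 1e-9, (n - 1) / n             # b = (n-1)/n is pure noise
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(b_crit_nxn(a=0.05, n=2))   # reduces to (1 - sqrt(1-4a))/2 ~ 0.0528
print(b_crit_nxn(a=0.05, n=4))
```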
We thank Malcolm Kennett for suggestions concerning the Ising-model map and Raphaël Chétrite for comments on the manuscript. This work was funded by NSERC (Canada).

§ REFERENCES

[1] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms (Cambridge Univ. Press, 2003).
[2] B. Efron and T. Hastie, Computer Age Statistical Inference: Algorithms, Evidence, and Data Science (Cambridge University Press, 2016).
[3] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982).
[4] A. C. C. Coolen, R. Kühn, and P. Sollich, Theory of Neural Information Processing Systems (Oxford Univ. Press, 2005).
[5] H. L. Van Trees and K. L. Bell, Detection Estimation and Modulation Theory, Part I: Detection, Estimation, and Filtering Theory, 2nd ed. (Wiley, 2013).
[6] K. P. Murphy, Machine Learning: A Probabilistic Perspective (MIT Press, 2012).
[7] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
[8] J. Bechhoefer, Hidden Markov models for stochastic thermodynamics, New J. Phys. 17 (2015).
[9] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE 77, 257–286 (1989).
[10] X. D. Huang, Y. Ariki, and M. A. Jack, Hidden Markov Models for Speech Recognition (Edinburgh University Press, 1990).
[11] J. D. Hamilton, A new approach to the economic analysis of nonstationary time series and the business cycle, Econometrica 57, 357–384 (1989).
[12] R. S. Mamon and R. J. Elliott, Hidden Markov Models in Finance, Vol. 104 (Springer Science & Business Media, 2007).
[13] R. Durbin, S. R. Eddy, A. Krogh, and G. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids (Cambridge University Press, 1998).
[14] A. Krogh, M. Brown, I. S. Mian, K. Sjölander, and D. Haussler, Hidden Markov models in computational biology: Applications to protein modeling, J. Mol. Biol. 235, 1501–1531 (1994).
[15] M. Bauer, A. C. Barato, and U. Seifert, Optimized finite-time information machine, J. Stat. Mech., P09010 (2014).
[16] Note: These transitions have the character of a phase transition in the sense that there is a free-energy-like function that is non-analytic at certain points. It may not have all the characteristics of a thermodynamic phase transition.
[17] S. Särkkä, Bayesian Filtering and Smoothing (Cambridge Univ. Press, 2013).
[18] W. von der Linden, V. Dose, and U. von Toussaint, Bayesian Probability Theory: Applications in the Physical Sciences (Cambridge Univ. Press, 2014).
[19] E. Lathouwers, When memory pays: Discord in hidden Markov models, Master's thesis, Simon Fraser University and Utrecht University (2016).
[20] J. M. R. Parrondo, J. M. Horowitz, and T. Sagawa, Thermodynamics of information, Nature Phys. 11, 131–139 (2015).
[21] D. A. Sivak and M. Thomson, Environmental statistics and optimal regulation, PLoS Comp. Biol. 10, e1003826 (2014).
[22] O. Rivoire and S. Leibler, The value of information for populations in varying environments, J. Stat. Phys. 142, 1124–1166 (2011).
[23] O. Zuk, I. Kanter, and E. Domany, The entropy of a binary hidden Markov process, J. Stat. Phys. 121, 343–360 (2005).
[24] A. Allahverdyan and A. Galstyan, On maximum a posteriori estimation of hidden Markov processes, in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 1–9 (2009).
[25] F. Y. Wu, The Potts model, Rev. Mod. Phys. 54, 235–268 (1982).
[26] D. Hartich, A. C. Barato, and U. Seifert, Sensory capacity: An information theoretical measure of the performance of a sensor, Phys. Rev. E 93, 022116 (2016).
http://arxiv.org/abs/1704.08719v2
{ "authors": [ "Emma Lathouwers", "John Bechhoefer" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170427190239", "title": "When memory pays: Discord in hidden Markov models" }
Time-domain Brillouin scattering assisted by diffraction gratings

Osamu Matsuda ([email protected]), Division of Applied Physics, Faculty of Engineering, Hokkaido University, 060-8628 Sapporo, Japan
Thomas Pezeril, Institut Molécules et Matériaux du Mans, UMR CNRS 6283, Université du Maine, 72085 Le Mans, France
Ievgeniia Chaban, Institut Molécules et Matériaux du Mans, UMR CNRS 6283, Université du Maine, 72085 Le Mans, France
Kentaro Fujita, Division of Applied Physics, Faculty of Engineering, Hokkaido University, 060-8628 Sapporo, Japan
Vitalyi Gusev ([email protected]), Laboratoire d'Acoustique de l'Université du Maine, UMR CNRS 6613, Université du Maine, Av. O. Messiaen, 72085 Le Mans, France

Absorption of ultrashort laser pulses in a metallic grating deposited on a transparent sample launches coherent compression/dilatation acoustic pulses in the directions of the different orders of acoustic diffraction. Their propagation is detected by delayed laser pulses, which are also diffracted by the metallic grating, through the measurement of the transient intensity change of the first-order diffracted light. The obtained data contain multiple frequency components, which are interpreted by considering all possible angles for the Brillouin scattering of light achieved through the multiplexing of the propagation directions of light and coherent sound by the metallic grating. The emitted acoustic field can be equivalently presented as a superposition of plane inhomogeneous acoustic waves, which constitute an acoustic diffraction grating for the probe light. Thus, the obtained results can also be interpreted as a consequence of probe-light diffraction by both the metallic and the acoustic gratings. The realized scheme of time-domain Brillouin scattering, with a metallic grating operating in reflection mode, provides access to acoustic frequencies from the minimal to the maximal possible within a single experimental configuration for the directions of probe light incidence and scattered light detection. This is achieved by monitoring the backward and forward Brillouin scattering processes in parallel. Applications include measurements of acoustic dispersion, simultaneous determination of the sound velocity and the optical refractive index, and the evaluation of samples with a single direction of possible optical access.

Key words: picosecond laser ultrasonics, time-resolved Brillouin scattering, inhomogeneous plane acoustic wave, acousto-optics

PACS: 43.35.Sx, 78.20.hc, 78.35.+c, 81.70.Cv, 43.38.Zp

§ INTRODUCTION

Picosecond acoustic interferometry (PAI) is a powerful opto-acousto-optic technique for nondestructive and non-contact testing of transparent materials at the nanoscale.<cit.> First, using an ultrashort pump laser pulse, a propagating picosecond coherent acoustic pulse (CAP) is launched into the material. Second, partial scattering of a continuously delayed ultrashort probe laser pulse by the launched CAP is used to monitor the propagation of this nanometer-scale acoustic perturbation through the material. Weak light pulses scattered by the CAP interfere at the photodetector with probe light pulses of significantly higher amplitude reflected from various interfaces of the sample, such as the interfaces of the tested material with air and with the optoacoustic transducer. The signal of transient optical reflectivity is proportional, in leading order, to the product of the amplitudes of these two scattered light fields. Thus, heterodyning of a weak field against a strong one is achieved.
The detected signal of time-resolved optical reflectivity in this so-called pump-probe scheme contains a sinusoidal oscillating component whose physical origin is the Brillouin scattering (BS) of the probe light by the CAP. The frequency of this oscillation depends on the angle between the propagation directions of the probe light and the coherent acoustic waves. It is precisely equal to the shift in the frequency of the scattered light that would be caused by thermal phonons propagating in the same direction as the CAP, which could be resolved using optical spectrometers in classic frequency-domain BS (FDBS) experiments.<cit.> That is why PAI is also often called time-domain Brillouin scattering (TDBS).

An important limitation of TDBS in comparison with FDBS is that a significantly narrower part of the acoustic spectrum is accessible by TDBS. In FDBS, by varying the angle between the direction in which the probe light is incident and the direction in which the scattered light is detected, the direction of the thermal phonons selected for testing can be chosen.<cit.> Because thermal phonons are available in all directions, this approach is very flexible, providing the opportunity to significantly vary the angle between the wave vectors of the probe light and the phonons. Thermal phonons of the highest frequency are accessible in the so-called backscattering configuration,<cit.> when the probe light is scattered by counter-propagating or co-propagating phonons (annihilation or creation of phonons, respectively). Thermal phonons of the lowest frequency are detectable in the forward scattering configuration, when the probe light and the phonons propagate along nearly orthogonal directions, as in the so-called platelet configuration/geometry.<cit.>

In TDBS the situation is very different. In common TDBS experiments the lateral dimensions of the optoacoustic generators are controlled by the size of the pump laser focus and typically significantly exceed the spatial lengths of the CAPs emitted by them into the materials. Thus, the diffraction length of the emitted CAP in typical experiments significantly exceeds its attenuation length, while the direction of the CAP is quasi-perpendicular to the sample surface illuminated by the pump laser pulse, and is fixed. Then the only possibility to vary the frequency of the tested phonon is to change the direction of the probe light propagation relative to the fixed direction of CAP propagation. However, most TDBS applications are for the diagnostics of either thin coatings/multilayers deposited on bulk samples, or thin plates/membranes, with optical access to the launched CAP only through the surfaces normal to the CAP propagation direction. The maximum angle of the probe light transmitted through the air/material interface relative to the direction of CAP propagation is theoretically θ_i^max = arcsin(1/n), where n is the refractive index of the material at the probe wavelength λ; this limit corresponds to light in air skimming along the surface. Thus, for n > √2, this angle is smaller than 45°. In the resulting backscattering-type configuration the frequency of the phonon tested by TDBS is<cit.>

f_B = 2vn√(1 - sin²(θ_i))/λ = 2vn√(1 - sin²(θ)/n²)/λ,

where v is the longitudinal sound velocity of the material and θ is the angle of incidence in air. Thus, for n > √2, even the lowest frequency theoretically accessible by TDBS, f_B^min, is close to the maximum one: f_B^min > f_B^max/√2 ≃ 0.71 f_B^max. For a more convenient but still large angle of incidence, θ = 60°, the estimate is f_B^min > √(2/3) f_B^max ≃ 0.82 f_B^max.
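To put numbers on this tunability argument, the following short script (an illustration of the expression above, using the fused-silica parameters n = 1.47 and v = 5968 m/s quoted later in the paper and a nominal 400 nm probe wavelength) evaluates f_B versus the external angle of incidence:

```python
import numpy as np

# Brillouin frequency vs. external angle of incidence, Eq. above
n, v, lam = 1.47, 5968.0, 400e-9   # fused silica, 400 nm probe

def f_B(theta_deg):
    s = np.sin(np.radians(theta_deg))
    return 2 * v * n * np.sqrt(1 - (s / n) ** 2) / lam

f_max = f_B(0.0)                 # normal incidence, backscattering
print(f_max / 1e9)               # ~43.9 GHz with these inputs
print(f_B(60.0) / f_max)         # ~0.81: less than 20% tuning range
print(f_B(90.0) / f_max)         # grazing-incidence limit, ~0.73
```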
Thus, the tunability of the Brillouin frequency expressed in Eq. (<ref>) is limited by refraction at the air/sample interface. This indicates the limitations of TDBS in revealing and identifying the frequency dispersion of material properties such as, for example, the sound velocity and attenuation. Another limitation of the typical TDBS scheme is that, in accordance with Eq. (<ref>), when fixing the external angle of probe incidence and measuring the Brillouin frequency (BF), f_B, we get information on a combination of two material parameters (v and n). To determine them independently, the TDBS measurements should be conducted at no fewer than two different angles θ,<cit.> while in FDBS there exists an experimental forward scattering configuration, called the platelet configuration,<cit.> that provides the opportunity to measure the sound velocity in a single measurement without determining the optical refractive index.

To get access in TDBS experiments to larger angles of BS, including those of forward-type scattering, and, as a consequence, to broaden the spectrum of the detectable/accessible phonons and to measure the sound velocity within a single mutual orientation of the optical excitation and detection, we propose to use gratings consisting of periodically arranged, pump-light-absorbing parallel bars. The bars could be metallic, for example, as in our experiments (Fig. <ref>). When a grating with period p is applied for the generation of the CAPs, instead of the metallic thin film typically used for this purpose, the coherent acoustic waves are emitted not only normally to the plane of the grating but also in all directions corresponding to the possible diffraction of the acoustic wave by this grating, thus multiplexing the propagation directions of the CAPs in the sample. Additionally, the grating can diffract both the transmitted and the reflected probe light, thus multiplexing the propagation directions of the probe laser pulses inside the sample. Both these factors should potentially increase the maximum angles between the propagation directions of the coherent sound and of the probe light, and give TDBS access to forward-type photon scattering processes. Moreover, the application of the grating should provide an opportunity to monitor the same acoustic mode, for example longitudinal, simultaneously at different frequencies.

§ EARLIER EXPERIMENTS WITH METALLIC GRATING DIFFRACTING PROBE LIGHT IN TRANSMISSION MODE

After realizing these theoretical predictions experimentally in the scheme presented in Fig. <ref>, we found that a part of them could have been confirmed just through a dedicated analysis of the experimental results published much earlier in Ref. Lin:jap1993. In this publication the pump-probe optical scheme was for the first time successfully applied to reveal the elastic motions of a metallic grating deposited on the sample surface (gold rods on fused silica, the same combination of materials as in our experiments). The principal difference in comparison with our optical experiments is that the pumping and probing of the sample were conducted from the grating side.
The experimental setup was similar to Fig. <ref>, but with the sample placed upside down: both pump and probe light come from the front (grating) side of the sample. The title of this publication, "Study of vibrational modes of gold nanostructures by picosecond ultrasonics", and its abstract both emphasize the successful identification of the low-lying frequencies in the transient reflectivity spectrum with the normal modes of the nanorods coupled to the substrate. Because of this fact, and the years that have passed, it is easy to forget that in Ref. Lin:jap1993 three additional high-lying frequencies (modes I–III in Fig. 10 of Ref. Lin:jap1993) had been detected and reproduced by the solutions of the theoretically formulated problem, although the origin of not all of them had been understood even qualitatively. The origin of mode I was identified with the backscattering-type process described in Eq. (<ref>), without any influence of the grating on it. The origin of mode II was related to the influence of the grating on the probe light field only, i.e., without accounting for the difference between the directivity patterns of the CAPs emitted by a metallic grating and by a metallic thin film. Finally, the origin of mode III was not understood. Based on our proposal that the grating directs both the probe light and the generated acoustic waves into different orders of diffraction, the interpretation of the physical origin of modes II and III is straightforward.

If, in the coordinate system presented in Fig. <ref>, the wave vector of the probe photon incident from air on the sample is given by k_i = (k_x, 0, k_z) = (k_x, 0, √(k² - k_x²)), where k = |k|, then in transmission from air into the sample the probe field is diffracted by the grating in multiple directions defined by the wave vectors k_i = (k_x + m_i q, 0, √(k_i² - (k_x + m_i q)²)). Here q = 2π/p is the grating wave number, k_i = nk is the wave number of the probe photon in the sample, and m_i = 0, ±1, ±2, … indicates the order of the diffraction peak. The probe light field backscattered by the phonons has propagation directions described by k_s = (k_x + m_s q, 0, -√(k_s² - (k_x + m_s q)²)), m_s = 0, ±1, ±2, …. Only the light propagating along these directions, when transmitted from the sample into the air, can be diffracted by the grating into the detection direction, given in Ref. Lin:jap1993 by k = (k_x, 0, -√(k² - k_x²)). Note that the indices i and s are introduced for the photon incident on the phonon and scattered by the phonon, respectively. These photons are propagative in the limited number of diffraction orders defined by k_{i,s} > |k_x + m_{i,s}q| and evanescent in the rest. The wave vector of the acoustic phonon participating in the BS is given by the law of momentum conservation,<cit.> k_B = ±(k_s - k_i), where the plus sign corresponds to absorption of the acoustic phonon (anti-Stokes) and the minus sign to its emission (Stokes). Then the modulus of the wave vector of the coherent acoustic phonon which has participated in the BS is

k_B = 2π f_B/v = { [(m_s - m_i)q]² + [ d_s √(k_i² - (k_x + m_s q)²) - d_i √(k_i² - (k_x + m_i q)²) ]² }^{1/2},

where the difference in the frequencies of the incident and scattered photons is neglected, k_s = k_i, as usual, while the parameters d_{s,i} = ±1 are introduced by us to account for the variety of possible directions of the incident and scattered light in the general case. In the experiments in Ref. Lin:jap1993, d_i = -d_s = 1 corresponds to monitoring of the backscattered light only, because the forward scattered light does not return to the detection region.
The proposed Eq. (<ref>) provides the frequency of mode I, as in Eq. (<ref>), when m_s = m_i = 0. The proposed Eq. (<ref>) reproduces Eq. (14) from Ref. Lin:jap1993 when m_s = m_i = m and thus reveals the physical sense of the parameter m in Ref. Lin:jap1993. The derived condition also confirms the suggestion in Ref. Lin:jap1993 that in mode II the light is scattered by a plane acoustic front propagating in the direction normal to the surface. In fact, the projection of the phonon wave vector along the surface (along the x axis) is proportional to the difference between the projections of the photon wave vectors, (m_s - m_i)q, and thus for the revealed m_s = m_i = m it is equal to zero. The phonon scattering light in mode II propagates normally to the surface. Moreover, the experimentally observed mode II corresponds to |m| = 1, sgn(m) = -sgn(k_x). So the origin of mode II is the scattering by the CAP of the light directed by the metallic grating into the diffraction order whose direction is the closest to the CAP propagation direction. Finally, Eq. (<ref>) attributes the origin of mode III to the following two processes, degenerate in frequency. In the first one, the metallic grating directs the incident light as in mode II, i.e., into the diffraction order closest to the surface normal, but then the coherent acoustic wave scatters the light towards the front surface of the sample in such a direction that it can be detected without additional diffraction (|m_i| = 1, sgn(m_i) = -sgn(k_x), m_s = 0). In the second one, the acoustic wave backscatters the non-diffracted (zeroth order) probe light into the diffraction order of light closest to the surface normal, from which it is returned to the detection direction by the optical grating in transmission from the sample into the air (m_i = 0, |m_s| = 1, sgn(m_s) = -sgn(k_x)). In both these processes the acoustic waves are not just reflecting the incident light like mirrors but are modifying the direction of the scattered light relative to that predicted by Snell's law. One can say that the acoustic waves are diffracting the incident light and are functioning as diffraction gratings with wave number q.

It is natural in the following to use, for the acoustic field generated in the sample by pump laser pulses incident on the metallic grating, the term acoustic grating, because this field is periodic along the x axis. Note that this terminology has already been used, for example, in the description of the acoustic waves generated by laser-induced gratings, i.e., by the intensity interference patterns that can be created by two light beams incident on the sample surface at angles θ and -θ.<cit.> It was demonstrated that the acoustic field emitted by laser gratings can be decomposed into so-called plane inhomogeneous acoustic modes,<cit.> i.e., acoustic waves with plane phase fronts parallel to the laser-intensity grating but modulated in amplitude with the pattern of the laser-intensity grating. Thus these modes can naturally be called acoustic gratings. In the experiments in Ref. Lin:jap1993 and in our experiments presented below in Section III (Fig. <ref>), the acoustic gratings are generated by metallic gratings. Their ability not just to reflect/transmit but to diffract the incident light is due to their periodic amplitude modulation. Using the suggested terminology, all the high-lying frequency modes detected in Ref. Lin:jap1993 can be explained by processes involving only two diffractions of the probe light by the gratings, i.e., either two diffractions by the metallic grating (mode II) or one diffraction by the metallic grating plus one diffraction by the acoustic grating (mode III).
Although in those experiments only backscattering-type processes were observed, and only those involving the first diffraction orders, they are in favor of our proposal, formulated in the Introduction, that in experiments with gratings multiple frequencies, corresponding to BS processes with different angles between the sound and light propagation directions, can be detected simultaneously.

§ EXPERIMENTS WITH METALLIC GRATING DIFFRACTING PROBE LIGHT IN REFLECTION MODE

In our TDBS experiments, conducted from the opposite side of the sample (Fig. <ref>) than in Ref. Lin:jap1993, we have additionally detected the BFs corresponding to forward scattering processes, to processes involving three diffractions of the probe light (two by the metallic grating and one by the acoustic grating), and also to processes involving light from diffraction orders higher than the first. The gold gratings are made by the electron-beam lithography and lift-off technique on a fused silica substrate of thickness 1 mm. The inset in Fig. <ref> shows the schematic structure of the sample. Two samples with different nominal grating periods, p = 587.0 nm and 479.2 nm, are prepared. We refer to the former sample as "45deg" and to the latter as "60deg", because they are prepared to have the first-order diffraction peaks near these directions for the reflected probe light incident normally on the grating. The designed width of each rod is half the period, but the actual width is somewhat larger. The thickness of the gold rods is about 50 nm. A 2 nm thick Cr layer is formed between the SiO_2 substrate and the Au film to improve the adhesion.

A standard picosecond laser setup is used. A mode-locked Ti:sapphire laser with a regenerative amplifier serves as the light source. The pulse width is ∼100 fs and the repetition frequency is 260 kHz. The fundamental light pulses, with central wavelength 800 nm, are focused onto the grating structure from the back side (the side without the grating) of the sample (Fig. <ref>). The pulse energy is 80 nJ/pulse and the diameter of the focused region is 100 μm, covering nearly completely the rectangular grating area, whose lateral dimensions are similar (100 μm by 100 μm). The absorption of the pump laser pulses in the metallic grating generates acoustic waves propagating in the sample along different orders of acoustic diffraction, as described in the Introduction. Alternatively, the launched acoustic field can be viewed as inhomogeneous plane waves, or an acoustic grating, propagating normally to the sample surface.<cit.> The acoustic grating propagating normal to the metallic grating is the result of the interference of the acoustic waves propagating along the positive and negative orders of acoustic diffraction.

The frequency-doubled light pulses, with central wavelength 400 nm, are focused onto the grating structure from the back side at normal incidence. The pulse energy is 4 nJ/pulse and the diameter of the focused region is 10 μm. The probe light scattered by the complete structure into the first order of diffraction is fed into the photodetector to reveal the modulation of the light intensity caused by the acoustic waves (Fig. <ref>).
The polarization of the incident probe light is chosen either parallel to the grating period (x axis, E∥x) or parallel to the gold bars (y axis, E∥y). Figure <ref> shows the raw data of the transient variation of the intensity of the diffracted light for the 45deg and 60deg samples with the two probe polarizations E∥x and E∥y. The delay time is the time between the arrivals of the pump and probe light pulses at the sample. Each transient curve consists of two contributions, a slowly oscillating component with a period around 200 ps and a fast oscillating component with a period around 20 ps, and resembles the curves presented in Fig. 2 of Ref. Lin:jap1993. This indicates a possible splitting of the photo-induced motion of the sample into low-lying frequency modes and high-lying frequency modes, as in Ref. Lin:jap1993. To get further understanding, the obtained temporal signal is Fourier transformed with respect to the delay time. Figure <ref> shows the norm of the Fourier amplitude as a function of frequency.

In the low-lying part of the spectrum, below 8 GHz, where the amplitudes in Fig. <ref> are intentionally attenuated, we detect, similar to Ref. Lin:jap1993, up to three different modes. Following Ref. Lin:jap1993, we attribute them to the oscillations of the gold rods on the sample surface. Although these oscillations are beyond our interest here, it was straightforward to associate the strongest of the low-lying modes (at 3.8 GHz and 5.0 GHz in the 45deg and 60deg samples, respectively) with the L (longitudinal) mode defined in Ref. Lin:jap1993. This was achieved using the results of the numerical calculations of the resonance frequencies of rods with different widths and thicknesses presented in Fig. 8 of Ref. Lin:jap1993, and accounting for the fact that the frequency of the strongest mode in our experiments scales approximately inversely with the rod width. The high-lying parts of the spectra observed in our experiments always contain a larger number of frequency peaks than in Ref. Lin:jap1993. For example, in the 45deg sample, in the case of E∥y probe polarization, the number of frequencies that can be identified is twelve (see Fig. <ref>(b) and Table <ref>), i.e., four times larger than in Ref. Lin:jap1993. All the experimentally observed modes in the high-lying part of the spectrum can be attributed to particular BS processes theoretically contributing to Eq. (<ref>), when accounting for the differences between our experimental configuration and samples and those of Ref. Lin:jap1993.

The first and most important advantage of probing the sample optically from the back side (Fig. <ref>) consists in the additional opportunity to monitor forward-type BS processes by TDBS. This opportunity is related to the fact that in our experiments the metallic grating acts on the probe light in reflection mode, while in the optical scheme of Ref. Lin:jap1993, i.e., when the sample in Fig. <ref> is placed upside down and probed from the front side, it acts on the probe light in transmission mode only. This can be qualitatively understood by considering four types of probe light scattering sequences which contribute to the reflection of light by the sample in the direction of the detector. Accounting for the fact that the scattering of light by the acoustic grating is much weaker than by the metallic one, only the sequences with a single scattering by the acoustic grating should be considered.
The first and simplest sequence is the diffraction of the probe light, normally incident on the acoustic grating, in reflection/backscattering mode towards the detector. This process corresponds to k_x = 0, d_s = -d_i = -1, m_i = 0, m_s = 1 in Eq. (<ref>). We remind the reader here that the parameters m_{i,s} numerate the diffraction orders, while d_{i,s} fix the direction of the probe light propagation along the z axis. The detection direction is d_s = -1, m_s = 1 in our experiments. The second sequence, which also includes backscattering of light by the acoustic grating, consists of: a) transmission of the probe light through the acoustic grating without diffraction; b) reflection or diffraction of the light by the metallic grating in different orders of diffraction; c) reflection/backscattering of the light by the acoustic grating in different orders of diffraction (the process with k_x = 0, d_s = -d_i = 1 and arbitrary m_{i,s} in Eq. (<ref>)); d) reflection or diffraction of the light by the metallic grating in the direction of its detection. The two other sequences include forward scattering of light by the acoustic grating (sgn(d_s) = sgn(d_i)). The third sequence consists of: a) forward scattering of the probe light, normally incident on the acoustic grating, in different orders of diffraction (the process with k_x = 0, d_s = d_i = 1, m_i = 0 and arbitrary m_s in Eq. (<ref>)); and b) reflection or diffraction of the light by the metallic grating in the direction of its detection. The fourth sequence includes: a) transmission of the probe light through the acoustic grating without scattering; b) reflection or diffraction of the light by the metallic grating in different orders of diffraction; and c) forward scattering/transmission of the light by the acoustic grating in the direction of the first order of diffraction (the process with k_x = 0, d_s = d_i = -1, arbitrary m_i and m_s = 1 in Eq. (<ref>)). It is worth noting here that, in our 1 mm thick samples, in order to be detected, the light scattered from the region of the acousto-optic interaction towards the back surface should propagate in the direction of detection. In fact, because of the large thickness of the sample in comparison with the lateral dimensions of the metallic and acoustic gratings, the light scattered into the other diffraction orders, after reflection from the back surface, does not fall on the metallic and acoustic gratings and thus cannot later be redirected by them to the detector.

We have found that, for the identification of all BFs detected in all four conducted experiments, it is sufficient to account only for propagating incident and scattered light fields in the processes defined by Eq. (<ref>), although only the light directed to the detector must necessarily be propagating (because of the large distance between the region of its diffractions/scatterings and the back surface of the sample). Although in the second of the above-described sequences both the incident and scattered light fields could potentially be evanescent, while in the third and fourth sequences the evanescent field could be the scattered and the incident light, respectively, it was sufficient for us to account (in addition to the zeroth order of diffraction) only for the first and second orders of the diffracted light in the 45deg sample and for the first order of the diffracted light in the 60deg sample. With the refractive index of fused silica in our sample, n = 1.47, the light in all other diffraction orders is evanescent (k_{i,s} < |k_x + m_{i,s}q|).
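At normal incidence (k_x = 0) the counting of propagating orders reduces to the condition |m| < np/λ. A two-line check (an illustration, using the stated sample parameters) reproduces the statement above:

```python
import numpy as np

# Which probe diffraction orders propagate in the substrate?
# Order m propagates when n*k > |k_x + m*q|; at normal incidence
# (k_x = 0) this is simply |m| < n * p / lambda.
n, lam = 1.47, 400e-9
for label, p in (("45deg", 587.0e-9), ("60deg", 479.2e-9)):
    m_max = int(np.floor(n * p / lam - 1e-12))
    print(label, "propagating orders:",
          list(range(-m_max, m_max + 1)))
# 45deg -> m in {-2, ..., 2};  60deg -> m in {-1, 0, 1}
```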
Thus, it was possible to explain all the observed BFs using only the BS processes with the momentum-conservation diagrams presented in Fig. <ref>. In this figure and later on, the state of the incident photon (left column) is identified by (m_i, d_i), while the state of the scattered photon (upper row) by (m_s, d_s). Blue, green and red arrows present the wave vectors of the incident photon, the scattered photon, and the phonon, respectively. The wave-vector diagrams for the BS processes corresponding to the four probe-light transmission/reflection sequences described above are presented in the upper-left, lower-right, upper-right and lower-left parts of Fig. <ref>, respectively; they are separated by continuous black lines. Equation (<ref>) (for k_x = 0) provided the opportunity to calculate the frequencies of all scattering processes depicted in Fig. <ref>. The calculated BFs are presented in Table <ref>, together with the experimentally registered ones. For completeness, we list in the left column of Table <ref> all the cases of degeneracy, i.e., the cases when different scattering configurations in space actually correspond to the same angle between the propagation directions of the incident photon and the phonon. The notation "NA" in Table <ref> marks the processes involving photons from the second diffraction order in the 60deg sample, which are evanescent; it was not necessary to account for these processes to explain the totality of our experimental observations.

The theoretical estimates for Table <ref> were made with the following parameters of fused silica: n = 1.47 and v = 5968 m/s.<cit.> The calculated values are also marked with a + symbol in Fig. <ref>. The correspondence between the theoretically predicted and the measured Brillouin frequencies is remarkable.

§ DISCUSSION

We have already noted above that the larger number of Brillouin frequencies accessible in our experiments, in comparison with those of Ref. Lin:jap1993, is due to the ability to monitor forward scattering processes in the configuration presented in Fig. <ref>. Another reason is the larger number of diffraction orders with propagating probe light in our 45deg sample. We have checked that in Ref. Lin:jap1993, in the samples with 400 nm and 600 nm periodicity of the metallic grating, only photons in the zeroth and first diffraction orders could be propagative. In our 45deg sample, photons are additionally propagative in the second diffraction order, importantly increasing the number of efficient BS configurations, as can be appreciated from the lower part of Table <ref>, where the theoretical interpretation of the detected BFs necessitates the participation in the BS of photons from the second diffraction order. The fact that photons in the second diffraction order are propagating in the 45deg sample and evanescent in the 60deg sample also explains our experimental observation that a larger number of BFs was detected in the first of these samples (see Fig. <ref> and Table <ref>). It was expected that the propagation of the probe light in higher diffraction orders would require the participation in the BS processes of phonons launched in higher orders of diffraction, i.e., higher orders of light diffraction by the acoustic grating, in comparison with the earlier experiments. This is confirmed by our experimental results presented in Table <ref> and explained by the diagrams in Fig. <ref>.
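For a numerical illustration of Eq. (<ref>), the sketch below (an illustration only, using the fused-silica parameters just quoted and a nominal 400 nm probe) enumerates all propagating (m_i, d_i) → (m_s, d_s) pairs at normal probe incidence and lists the resulting Brillouin frequencies; small offsets from the tabulated values track the exact probe wavelength.

```python
import numpy as np
from itertools import product

n, v, lam = 1.47, 5968.0, 400e-9   # fused silica, 400 nm probe

def spectrum(p):
    """All Brillouin frequencies (GHz) from Eq. above at k_x = 0,
    over propagating photon states (m, d) with d = +-1."""
    q, k = 2 * np.pi / p, 2 * np.pi * n / lam
    orders = [m for m in range(-3, 4) if k > abs(m * q)]
    freqs = set()
    for (mi, di), (ms, ds) in product(product(orders, (1, -1)),
                                      repeat=2):
        kz_i = di * np.sqrt(k**2 - (mi * q) ** 2)
        kz_s = ds * np.sqrt(k**2 - (ms * q) ** 2)
        kB = np.hypot((ms - mi) * q, kz_s - kz_i)
        if kB > 0:
            freqs.add(round(v * kB / (2 * np.pi) / 1e9, 1))
    return sorted(freqs)

print(spectrum(587.0e-9))   # 45deg sample: ~10.5 ... ~43.9 GHz
print(spectrum(479.2e-9))   # 60deg sample
```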
Only in one of the processes involving the first diffraction orders of light (which dominate in the upper part of Table <ref>, above the thick horizontal line in its center), namely (-1,-1)→(1,-1), is the participation of a phonon of the second diffraction order, i.e., with the projection of the phonon wave vector on the x axis equal to |m_s - m_i|q = 2q, required for explaining the particular experimentally detected frequency. Note that two other processes in the upper part of Table <ref>, (±1,-1)→(∓1,1) and (±2,-1)→(∓2,1), due to phonons emitted by the optoacoustic generator in the second and fourth diffraction orders, respectively, are possible. However, these processes are degenerate in BF with the process (0,-1)→(0,1), which takes place without diffraction, just through the reflection of light by the plane compression/dilatation acoustic wave. The latter process involves no diffracted acoustic waves and dominates over the two other processes contributing to the same BF. Consequently, accounting for the processes (±1,-1)→(∓1,1) and (±2,-1)→(∓2,1) is not necessary for the explanation of our experimental observations; they are presented in Table <ref> only for the sake of completeness. Thus, the experimental BFs from the upper part of Table <ref> can be explained by processes involving photons of the zeroth and first diffraction orders only. At the same time, the participation of phonons from the second and even the third diffraction orders (see, for example, (-2,-1)→(1,-1)) is largely required in the processes with photons from the second diffraction order in the lower part of Table <ref>.

To explain all the available experimental data, we do need to account for the phonon which is neither propagating nor decaying along the z axis. The experimental frequencies of 20.6 GHz and 25.0 GHz, observed only in the experiment with E∥y polarization of the probe light in the 45deg and 60deg samples, respectively, can currently be associated only with the scattering of light by longitudinal phonons skimming along the sample surface. The corresponding scattering process is (-1,-1)→(1,-1).

The observation of certain BFs only when using probe light polarized along the metallic rods, i.e., E∥y, is rather common in our experiments (see Fig. <ref> and Table <ref>). We attribute these observations to the higher reflection/diffraction of E∥y polarized light, in comparison with light polarized along the direction of grating periodicity, i.e., E∥x, by both the metallic and the acoustic gratings. On the one hand, this hypothesis is consistent with the known applications of metal gratings as birefringent light polarizers.<cit.> On the other hand, the scattering of E∥x polarized light may have lower efficiency than that of E∥y when the propagation direction of the scattered light is nearly perpendicular to the direction of the polarization induced by the incident light and the acoustic waves.<cit.>

Our experimental results confirm that the application of a diffraction grating in TDBS experiments provides the opportunity to overcome some of the limitations of the TDBS technique discussed in the Introduction. First, the proposed experimental scheme provides the opportunity to detect simultaneously multiple Brillouin frequencies, from the highest possible BF in the backscattering configuration, f_B^max = 43.38 GHz, to the lowest possible BF in the forward scattering configuration, f_B^min = 10.6 GHz (see the first and last lines in the upper part of Table <ref>).
This significantly broadens the frequency band of TDBS from f_B^min ≈ 0.7 f_B^max (see the Introduction) to f_B^min ≈ 0.25 f_B^max. The highest BF detected in our experiments does not depend on the grating, while the lowest detected frequency is controlled by the grating period, as can be appreciated from the diagrams in Fig. <ref> of the several degenerate processes providing access to this frequency. It could be diminished further by increasing the period p of the grating. The comparison of the lowest BFs detected in our two samples (first line in Table <ref>) supports this expectation. Thus, the frequency range accessed by TDBS could be broadened further by dedicated preparation of the grating.

Our experiments demonstrate that a TDBS measurement in a single experimental configuration with a diffraction grating, i.e., without any modification of the directions of the optical pumping and probing of the sample, is sufficient to extract the refractive index, n, and the sound velocity, v, of the material. For this purpose, any two of the experimentally detected frequencies, which in Eq. (<ref>) depend on both n and v, could be used. It is also possible to determine the refractive index and sound velocity by simultaneously fitting a larger number of the measured frequencies, in order to statistically increase the reliability of their determination, if required.<cit.> Moreover, the detection of the BF corresponding to the forward scattering of light by the skimming longitudinal wave, (-1,-1)→(1,-1), provides the opportunity to determine the sound velocity without knowledge of the refractive index. In this process k_B = 2π f_B/v = 2q (see Fig. <ref>), and the determination of the sound velocity requires just the knowledge of the grating period: v = p f_B/2.
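As an arithmetic cross-check of the relation v = p f_B/2, the measured skimming-wave frequencies, combined with the two grating periods, give sound velocities close to the literature value used above (a short illustrative computation):

```python
# Sound velocity from the skimming-wave relation v = p * f_B / 2,
# using the grating periods and the measured 20.6 / 25.0 GHz lines.
for p, fB in ((587.0e-9, 20.6e9), (479.2e-9, 25.0e9)):
    print(p * fB / 2)   # ~6046 and ~5990 m/s vs. 5968 m/s used above
```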
If our experiments were performed differently, with the probe light incident on the sample at an angle θ to the sample normal and the scattered light detected in the direction of mirror-type reflection, i.e., as in Ref. Lin:jap1993 but from the back side of the sample, the processes of forward scattering of the probe light by the skimming longitudinal wave would still be accessible by TDBS. For example, if the angle θ is chosen such that k_x = k sinθ = -mq/2, then the acoustic grating composed of acoustic waves skimming along the surface could scatter the probe light incident on it from the back surface of the sample forward into the "symmetrically" propagating light with k_x = mq/2, by transmitting it in the m-th diffraction order of the grating. Later, in reflection from the metallic grating, this probe light could be directed to the detector. Thus, the momentum-conservation law for the scattering of the probe light by the phonons skimming along the surface reads k_B = |m|q = 2|k_x|. The first of these equalities can be used to evaluate the sound velocity from the measured BF without knowledge of the optical refractive index. However, there is still a drastic difference between this approach to evaluating the sound velocity by TDBS and how it could be accomplished in FDBS. In FDBS, for any angle 2θ between the directions of probe incidence and probe detection, a thermal phonon with the wave vector necessary for the BS exists. For example, in the platelet geometry of FDBS, which has the same momentum-conservation diagram as the geometry providing access to skimming phonons by TDBS, and which also provides the opportunity to measure the sound velocity without knowledge of the optical refraction angle, thermal phonons with the required k_B exist for any 2θ. In TDBS, the coherent skimming phonons can be generated only with wave vectors which are multiples of the metallic grating wave number, q. Thus, to access these phonons, the angle of incidence should be chosen to satisfy the equality sinθ = mq/(2k). In both of our samples such measurements could potentially be accomplished only at two different angles of light incidence.

Finally, it is worth mentioning that in our experiments the number of detected forward scattering processes is smaller than that of the backward scattering ones (see Fig. <ref>). This is related to the fact that our scheme gives access to all possible backward scattering processes via the second of the probe-light scattering sequences described earlier. In that sequence, in the acousto-optic interaction, both the light incident on the acoustic grating and the scattered probe light can be of arbitrary diffraction orders, while in the third and fourth scattering sequences, used for the detection of the forward scattering processes, only one of these light fields can be of an arbitrary diffraction order. This asymmetry in the TDBS scheme could potentially be corrected in thinner experimental samples with two diffraction gratings deposited on the opposite sides, in which multiple reflections of the probe light between the surfaces of the sample become important, while the second grating (deposited on the back surface of the sample in Fig. <ref>) could, in particular, transmit in the detection direction in air the probe light incident on it from the sample side along an arbitrary order of diffraction. With diffraction gratings on both sides of the sample, all BS processes possible in the sample, both for forward and backward scattering, could potentially involve incident and scattered light in arbitrary diffraction orders.

§ CONCLUSIONS

We have performed picosecond ultrasonic interferometry (time-domain Brillouin scattering, TDBS) measurements on transparent samples with metallic gratings. The pump light pulse absorbed in the metallic grating structure generates acoustic gratings (inhomogeneous plane compression/dilatation acoustic waves) in the substrate. The propagation of the acoustic waves is monitored by delayed probe light pulses. By detecting the modulation of the probe light intensity in the first-order diffracted beam, we observed in the time domain Brillouin oscillations with rich frequency spectra. The obtained results are explained by a theoretical model which takes into account all possible configurations of probe-light scattering/diffraction by these acoustic gratings, including those where the light itself is diffracted by the metallic grating either before its scattering by the phonons, or after this scattering, or both. The agreement between the experimental positions of the Brillouin frequencies and the calculated ones is excellent.

The theory revealed two reasons for the increased number of BS processes that can be monitored in the TDBS scheme proposed by us, in comparison with the earlier reported experiments with metal gratings.<cit.> Our scheme provided, for the first time, access by TDBS to forward scattering processes of light by coherent sound, while one of our gratings provided propagating probe light in higher diffraction orders than in Ref. Lin:jap1993. Access to forward scattering processes importantly broadens the range of frequencies accessible by TDBS.
This fact, in combination with the opportunity to monitor multiple different BS processes/frequencies simultaneously, would be advantageous in studying the dispersion of the sound velocity and sound attenuation in materials. It is worth noting here that applications of the usual TDBS schemes for studying acoustic wave attenuation are documented for a variety of media.<cit.> The opportunity to monitor in a single measurement the acoustic phonons propagating in different directions could be attractive for revealing the elastic/inelastic anisotropy of materials, including that caused by non-isotropic loading or by residual stress. For studies of anisotropy it is also extremely advantageous that gratings, as demonstrated by our experiments, can simultaneously launch phonons, detectable by TDBS, in the complete range of angles, i.e., from 0 to 90 degrees relative to their surface. In certain practical applications with limited optical access to the samples,<cit.> it would also be advantageous that TDBS with a grating can monitor multiple BS processes/frequencies even with the incident and the detected/scattered light propagating collinearly (for example, in the case of the probe light incident normally on the grating and the detection of either the reflected or the transmitted light in the directions also normal to the grating surfaces).

Finally, some of the functionalities of the proposed scheme could be achieved by replacing the metal gratings with laser-induced gratings, which could be generated by the interference pattern of two light beams propagating at an angle. Such gratings can launch acoustic gratings in the sample<cit.> and also diffract the probe light,<cit.> although much less efficiently than the metallic ones. The advantage of laser gratings lies in the perspective of non-contact and non-invasive diagnostics of the samples by TDBS. The drawback is a more technically elaborate optical scheme.

The reported research was conducted in the frame of the project PLUSDIL supported by ANR, under contract ANR-12-BS09-0031. O.M. is partially supported by the Acoustic HUB of Région des Pays de la Loire in France, by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science, and by a research grant from the Murata Science Foundation. We are grateful to Prof. Humphrey J. Maris for the comments. We would like to thank the OPEN FACILITY (Hokkaido University Sousei Hall) for the sample fabrication.
http://arxiv.org/abs/1704.08069v1
{ "authors": [ "Osamu Matsuda", "Thomas Pezeril", "Ievgeniia Chaban", "Kentaro Fujita", "Vitalyi Gusev" ], "categories": [ "physics.app-ph", "cond-mat.mtrl-sci", "physics.optics" ], "primary_category": "physics.app-ph", "published": "20170426115845", "title": "Time-domain Brillouin scattering assisted by diffraction gratings" }
theoremTheorem[section] definitionDefinition[section] proposition[theorem]Proposition corollary[theorem]Corollary lemma[theorem]Lemma conjecture[theorem]Conjecture.pdf,.png,.jpgThe utility of aBayesian analysis of complex models and the study of archeological 1̧4 data.Yair Caro Department of MathematicsUniversity of Haifa-Oranim Israel JosefLauriDepartment of Mathematics University of Malta Malta Christina ZarbDepartment of MathematicsUniversity of MaltaMaltaDecember 30, 2023 ===========================================================================================================================================================================================================================Given a graph G, we would like to find (if it exists)the largest induced subgraph H in which there are at least k vertices realizing the maximum degree of H.This problem was first posed by Caro and Yuster. They proved, for example, thatfor every graph G on n vertices we can guarantee, for k = 2,such an induced subgraph Hby deleting at most 2√(n) vertices, but the question if 2√(n)is bestpossible remains open. Among the results obtained in this paper we prove that:* For every graph G on n ≥ 4vertices we can delete at most⌈- 3 + √( 8n- 15)/2 ⌉verticesto get an induced subgraph H with at least two vertices realizing Δ(H), and this bound is sharp, solving the problems left open by Caro and Yuster. * For every graph G with maximumdegree Δ≥ 1 we can delete at most ⌈ -3 + √(8Δ +1)/2 ⌉vertices to get an induced subgraph H with at least two vertices realizing Δ(H), and this bound is sharp.* Every graph G with Δ(G) ≤ 2and least 2k- 1 vertices (respectively2k - 2vertices if k is even) contains an induced subgraph H in which at least k vertices realise Δ(H),and these bound are sharp.§ INTRODUCTION A well-known elementary exercise in graph theory states that every (simple) graph on at least two vertices has two vertices with the same degree. Motivated by this fact, Caro and West <cit.> formally defined the repetition number of a graph G, rep(G), to be the maximum multiplicity in the list (degree sequence) of the vertex degrees.Various research was done concerning the repetition number or repetitions in the degree sequence. Here we mention some of these directions. * The connectionbetween the independence number andK_r-free graphs with given repetition number <cit.>.* Hypergraph irregularity - the existence ofr-uniform hypergraphs (r ≥ 3)with no repeated degrees <cit.>.* Ramsey type problems with repeated degrees<cit.> and <cit.>. * Regular independent sets— vertices of the same degree forming an independent set <cit.>.*Forcing k-repetition anywhere in the degree sequence <cit.>*Forcing k-repetition of the maximum degree <cit.>.In this paper we shall focus on the following problem first stated in <cit.>. For a graph G and an integer k ≥ 2let f_k(G)denote the minimum number of vertices we have to delete from G in order to get an induced subgraph H in which there are at least k vertices that attain the maximum degree Δ(H), of H,or otherwise |H| < k, whereasusual,following the notation of<cit.>,|G| = nis the number of vertices of G, Δ(G) is the maximum degree of G and a vertex of degree t is called a t-vertex. In the case k = 2we use the abbreviation f(G) instead of f_2(G).We define f(n,k) = max{ f_k(G): |G| ≤ n}and g(Δ,k) = max{ f_k(G) :Δ(G) ≤Δ}.Clearly there are graphs in which we cannot equate k degrees let alone k maximum degrees. 
A simple example is the star K_1,k-1for k ≥ 3 having k vertices and by definition f_ j(K_1,k-1) =1 for j ≥2. However, it is trivialthat in every graph G on at least R(k,k) vertices (where R(k,k) is the diagonal Ramsey number), we can equate k maximum degrees. We call a graph G in which(by deleting vertices) we can equate k maximum degrees a k-feasible graph.Soof interest is the following functionh(Δ,k)= max{ |G| : Δ(G) ≤Δ}. Caro and Yuster <cit.> conjecturedthatfor every k ≥ 2there exists a constantc(k) such thatf(n,k) ≤ c(k) √(n) and proved the conjecturefor k = 2 withc(2) =2 and k =3 with c(3) = 43.For k ≥ 4 the conjecture is still open.The question whetherc(2) = 2and c(3)=43are best possible also remainsopen.Our main purpose in this paper is to show:* f(G)can be computed exactly in polynomial time O(n^2). * forΔ≥ 1, g(Δ,2)) ≤⌈ ( - 3 +√(8 Δ+1)/2⌉ and this boundis sharp.* for n ≥ 4, f(n,2)≤⌈ ( - 3 +√(8n-15)/2⌉ and this boundis sharp. Hence in particular f(2) = √(2),solving the problem left open in <cit.>.* for a forest Fon n vertices, f_k(F) ≤ (2k-1)n^1/3.* g(1,k) = ⌊ k-1/2⌋,g(2,k) = k-1, thus determining exactly g(Δ,k)for Δ = 1,2.* h(0,k) = k-1,h(1,k) =⌊ k/2⌋ + 2 ⌊k-1/2⌋,h(2,k) = 2k-2for odd k ≥ 3,h(2,k) = 2k-3 for even k ≥ 2. The paper is organized as follows : In section 2we cover the complexity issue of computing f(G),as well as the sharp upper-bounds for g(Δ,2) and f(n,2).In section 3we consider upper-bounds for f(F)and f_k(F)where F is a forest.Insection 4 we prove exact results about g(Δ,k) and h(Δ,k)for Δ= 0,1,2.Finally, in section 5we shall collect open problems and conjectures that deserve further exploration.§ DETERMINATION OF EXACT UPPER BOUNDS FOR F(G) IN TERMS OF Δ(G) AND |G| = N. We first needa definition and two lemmas: We callB⊂ V(G), a set of vertices in a graph G,a 2-equating setif in the induced subgraph H on V(G) \ B, there are at least two vertices that realise Δ(H).We say that B is a 2-equating setwhich realises f(G) if B has theminimum cardinality among all 2-equating sets of G.Let the degree sequence of the graph G on n vertices be Δ= d_1 ≥ d_2 ≥ d_3 ≥…≥ d_n= δ so that Δ is the maximum degree and δ the minimum degree.We define (G)=d_1-d_2. Let G be a graph on n≥ 2 verticeswithdegree sequence d_1 ≥ d_2 ≥…≥ d_n,with (v) = d_1 and (u) = d_2. Then f(G)≤d_1 - d_2=(G). If d_1 = d_2 then clearly f(G) = 0.So let(G)=d_1 -d _2 ≥ 1. But then there is at least one set B of neighbours of v of size (G), none of which are adjacent to u, and clearlyf(G) ≤ |B| = (G).Let G be a graph on n ≥ 2 vertices,with degree sequence d_1 ≥ d_2 ≥…≥ d_n, with (v) = d_1. Then either f(G) = (G),orvmust be in every minimal 2-equating set of G.Suppose f(G)is not realised by (G) ( which includes the case d_1 = d_2). Then f(G)< (G), and we may assume that v is the unique vertex in G of degree d_1. Let f(G) be realised by some induced subgraph H,and B =V(G) - V(H)is a minimum 2-equating set for G. Assume for the contrary that v ∈ H (thenv is not a member of B). Since v ∈ Hbut f(G) is not realised by (G)then either v is not of maximum degree in Hin which case at least(G) +1 vertices among the neighbours of v must be deletedcontradicting|B| = f(G)< (G), orv is of maximum degree in H and still at least (G)among its neighbours must be deleted and againf(G) = |B| ≥(G) contradicting f(G) < (G). Let G be a graph on at least n ≥ 2 vertices.Suppose thatf(G) ≠(G),thenf(G) = 1 + f(G -v), wherev is the single vertex of maximum degree in G. 
Since f(G) ≠(G)it followsthatthere is a singlevertexv of maximum degree in Gand also from Lemma <ref> we infer that v must be in any minimal 2-equating set of G. LetG_1 = G \{ v}and let B be a minimal 2-equating set for G_1, namelyf(G_1)= |B|. Then clearlyB ∪{ v}is a 2-equating set for G hence f(G) ≤ 1+ f(G_1). On the other hand let B be a minimum 2-equating set for G. Then by assumption and Lemma <ref> v∈ B.Set B_1 =B \{ v }.ClearlyB_1 is a 2-equating set of G_1 hence f(G_1) ≤ |B_1| = |B| - 1 =f(G) - 1 which givesf(G_1) +1 ≤ f(G). Hence combiningboth inequalitieswe get f(G) = f(G \{ v} )+1.Let G be a graph on n ≥ 2 vertices,then f(G)= min{(G_j)+j: j=0 … n-2 }, whereG_j+1isobtained fromG_ j by deletingthe vertex v_1,j of the maximum degree d_1,j from G_ j (where G_0 is taken to be G), and d_2,j is the second largest degree in G_ j.Moreover f(G)can be determined in time O(n^2). By Lemma <ref>,either f(G)= (G)or v_1,0must be deleted to obtain G_1 and in this case by Lemma <ref>,f(G) = f(G_1) +1. Hence f(G) =min{(G), f(G_1) +1}. Nowagain either f(G_1) =(G_1),orby Lemma <ref> and Lemma <ref> the maximum degree in G_1 must be deleted to obtain G_2 and thenf(G_1) = f(G_2) +1. Hencef(G) = min{(G),(G_1) +1, f(G_2 )+2 }.We continue this processuntilfor some first j,(G_j) = 0 and there we stop having two vertices realizing the maximum degree of G_ j ( the later steps will always give a larger value then (G_j) +j =j). Each step is forced by Lemma <ref> and Lemma <ref> , hence f(G)= min{(G_j)+j: j=0 … n-2 }.Now in each iteration we have to construct G_ j from G_ j-1by deleting the maximum degree v_1,j-1from G_ j-1 and compute d_1,j and d_2,jwhich can be done in O(n) time running over the new degree sequence of G_ j that can be computed from the degree sequence of G_j-1 by updating d_1,j values in it .So the total running time for the algorithm is O(n^2 + e(G) )= O(n^2 ). Let G be a graph on n ≥ 2 verticeswith maximum degree Δ, and t ≥ 1 be an integer.* If 0 ≤Δ≤ 1, then f(G)=0.* If t +12 +1 ≤Δ≤t +22, thenf(G) ≤t, and this bound is sharp for every Δ in the range.* For Δ≥ 1, f(G) ≤⌈-3+√(8Δ+1)/2⌉.Clearly, if 0 ≤Δ≤ 1 and n ≥ 2, f(G)=0.For (ii), we use induction on t.For t = 1, 2 ≤Δ≤ 3.If there is one vertexv of degree Δ=2.Removing v clearlyleaves at least two vertices of maximum degree equal to one or zero, and hence f(G)=1.If there is a vertex v of degree Δ=3 and a vertex u of degree 2, then by Lemma <ref>, f(G) ≤ 3-2=1.Otherwise, all other vertices have degree 0 or 1 and deleting v leaves at least two vertices of maximum degree equal to one or zero, and f(G)=1.So assume statement is true for t -1 and we shall prove it is true for t. By assumption,t +12 +1 ≤Δ≤t +22. Let v be a vertex of maximum degree and u a vertex with the second largest degree —clearly (u) ≤(v) ≤t +22. Now consider (u).* if(u) ≤t+12 we drop v to get G \{ v } = Hwhere Δ(H)≤t+12, and by inductionf(G) ≤ f(H) +1 ≤ t - 1 +1= tand we are done.* if (u) ≥t+12+1then clearly (G) = (v) - (u) ≤t+22-t+12- 1=t , hence by Theorem <ref> f(G)≤(G) ≤ tand we are done. Sharpness: consider the sequencea_ j =j +12+1i.e.a_1 = 2,a_2 = 4,a_3 = 7etc. and letΔ= t +12+ j,j = 1, …, t +1.For example, if t = 4, Δ= 11,12,13,14,15. Consider the graphG_Δconsisting of the starsK_1,a_j for j = 1, …, t-1and a “big star"K_1,Δ. Suppose for example Δ= 13 , which is the case t = 4, since t+12=52 < 13< 62=t+22. 
The sequence of starswe choose involves a_1 ,a_2 ,a_3and Δ, that is K_1,2∪K_1,4∪K_1,7∪K_1,13, and this realises f(G) = 4 as required.The validity of this construction is a simple application of Theorem <ref>. Sothis construction shows the boundis sharp for every Δ≥1.For (iii),from part (ii) above (t ≥ 1 and Δ≥ 2), we gett^2 + 3t +2 -2Δ≤ 0. Solving the quadratic and rounding up, since t must be an integer, we get f(G) ≤ t = ⌈-3 + √( 1 +8Δ)/2⌉,which holds true also for the case Δ=1.Let G be a graph on n ≥ 4, and t ≥ 1an integer such thatt+12+3 ≤ n ≤ t + 22 +2. Then f(G) ≤ t, and this is sharpfor all values of n in the range. Also, for n ≥ 4,f(G) ≤⌈- 3+√(8n-15)/2⌉. Observe thatfor t+12+3 ≤ n ≤ t + 22 +1it follows that if |G|= n then Δ(G)≤ n-1 hencet + 12+2 ≤Δ(G) ≤t +22 and f(G) ≤ t by Theorem <ref>Wenow construct for every n,such that t +12+3 ≤n ≤t +22 +1,a graph G_n=G with f(G) = t proving sharpness. Let n = t +12 + j: j =3,…, t +2. Let A = {v_1,v_2,…, v_t}and B = {u_1,…, u_n-t}.Vertex v_t is adjacent to all other vertices so that (v_t) = n-1 =t +12+ j -1.Vertex v_ q, for q = t-1,…,1 has degree (v_q) =q^2 +q + 2/2 +1=a_q+1,where v_qis adjacent to v_tand to u_1,…, u_a_q. Figure <ref> shows the case when t=3 and j=3 i.e. n=3+12 +3=9. We now apply Theorem <ref> to G.Then(G)= t +12+j-1- (t-1)^2+t-1+2/2+1=t+j-3 ≥ t. Hence, weapply Theorem <ref> by deleting v_t to give a new graph G_1 on n-1 verticesin which (v_i), i=1 … t-1, as well as all the degrees of vertices in B adjacent to v_t are reduced by 1, and hence (G_1)=(v_t-1)-(v_t-2)=t+j-1 ≥ t-1Therefore f(G) ≥ 1+(G_1) ≥ t and we again apply Theorem <ref> to delete v_t-1.The degrees of v_t-2… v_1 now remain unchanged, and for i=2 … t-1, (v_t-i)-(v_t-i-1)=t-i, and the vertices are not adjacent to each other.Hence it follows that, at each step, (G_i)=t-i,which, by Theorem <ref>, implies that f(G)=min{(G_j)+j: j=0,…,n-2}=t.Let us now look at the case when |G| = n = t +2 2+2. * If Δ(G)≤t +2 2. Then by Theorem <ref>, f(G) ≤ t and we are done.* SoΔ = t +2 2+1.Let v_1and v_2be such that (v_1) = Δand v_2 has the second largest degree. Observe that v_1 is adjacent to all vertices of G. Now if(v_2)≥t +1 2 +2, then (G) ≤ t and again we are done by Theorem <ref>.So (v_2) ≤t +1 2+1.We delete v_1 to get the graph G_1.Clearly Δ(G_1) = (v_2)- 1≤t +1 2and by the Theorem <ref>, f(G_1)≤ t - 1hence f(G) ≤ t.For, sharpness we can take the graph Gconstructed above on n vertices for n =t +2 2+1 and add an isolated vertex.Now for a graph G on n vertices with 4 ≤ n ≤t +2 2 +2,we know g(G) ≤ t, hence we gett^2 +3t- 2n+ 6 ≥ 0, and solving the quadratic givesf(G)= t ≤⌈ -3+√( 8n -15 )/2⌉.§ TREES AND FORESTSWe have determinedthe maximum possible value for f(G)with respect to Δ(G)(Theorem <ref>) and with respect to|G| = n. (Theorem <ref>). We propose the problem of finding max{f(G) : G }and conjecturethe following : IfFis a forest on n vertices, where n ≤t^3 + 6t^2 +17t +12/6then f(F) ≤ tand this is sharp.The following construction shows that if the conjecture is true then the upper bound isbest possible.Consider the sequence a_ j =j +12+1.For t ≥ 0 we define a tree T_t on b_t verticesas follows:Let P_2t +3 be a path on 2t +3 vertices. Now to the vertexv_2j forj = 1,…,t+1of the path we add exactlyx_ j= a_ j - 2= j +12 - 1 leaves,so that x_1 = 0,x_2 = 2 and so on. 
Clearly (v_1) = (v_2t+3) = 1,and for t ≥ 1,(v_2j +1 ) = 2 for j =1,…,twhile (v_2j ) =a_ j for j = 1,…,t+1.Now for t = 0we get T_0=K_1,2 with b_t=3,for t = 1we get b_t=7 and for t=2, b_t=14, as shown in Figure <ref>.The number of vertices in T_t isb_t=t^3+6t^2+17t+18/6.We prove this by induction on t.For t=0, b_0=3 and for t=1, b_1=7 as required.So let us assume it is true for b_k-1.Thenb_k=b_k-1+k^2+3k+4/2 =(k-1)^3+6(k-1)^2+17(k-1)+18/6+k^2+3k+4/2=k^3+6k^2+17k+18/6as required.So (T_t)= t^2+3t+4/2 - (t-1)^2+3(t-1)+4/2=t+1.We can, again by induction on t, show that f(T_t)=t+1, for t ≥ 1.Clearlyf(T_0)=1.Suppose that f(T_t) ≤ t < (T_t).Then we should remove the vertex of degree Δ in order to obtain a minimal 2-equating set.But this leaves isolated vertices and the tree T_t-1.But f(T_t-1)=t, by induction, and hence f(T_t)=t+1, a contradiction. Let F be a forest on 13 vertices.Then f(F) ≤2.Let F be a forest on 13 vertices with degree sequence d_1 ≥ d_2 ≥ d_3≥…≥ d_13, and let u, v and w be vertices of degree d_1, d_2and d_3 respectively. We observe the following facts :* We may assume that d_2 ≥ 4,for otherwise we delete uand let F^* = F-u. Then Δ(F^*) ≤ 3 and by Theorem <ref>,f(F^*) ≤ 1hence f(F) ≤ 2.* Therefore d_1 - d_2 ≥ 3,for otherwise f(F) ≤(F) ≤ 2.* Hence we may assume d_2≥ 4 and d_1 ≥7.* d_3≤ 3 — otherwise sinced_1 ≥ 7, d_2 ,d_3 ≥ 4,we get (even in the worst case where u, v, winduce a path of three vertices in some order) |F| ≥ 14.* If u and v are not in the same component then from |F| = 13 we get F = K_1,4∪K_1,7, and f(F)= 2,by deleting the centers of the stars. We now use the notation S_a,b to denotethe double star with adjacent centres of degrees a and b.* If d_2 ≥ 5 and d_1 ≥ 8then |F| = 13 if and only ifF = S_5,8and f(S_5,8) = 2.So we assume d_2 = 4and u and v are in the same component.* If u and v are adjacent, let F^* = F-u. Then Δ(F^*) ≤ 3 and by Theorem <ref> f(F^*) ≤ 1 hence f(F) ≤ 2.Therefore,F can be one of the followinggraphs:* S_4,8 with the edge between the centres subdivided.Clearly f(F) = 2.* S_4,7with the edge between the centres subdivided twice.Clearly f(F) = 2.* S_4,7 with the edge between the centres subdivided and another vertex addedadjacent to a leaf of the vertex of degree 7 or of degree 4.In both two cases f(F) = 2.* S_4,7 with the edge between the centres subdivided by a vertex w and to the vertexw we attach a leaf so that ( w)= 3. Clearly f(F) = 2.In all these cases we only need to delete vertices u and v to get at least two vertices of maximum degree, and hence f(F) ≤ 2. Observe that if |F| < 13 we may add 0-vertices to get a forest on 13 vertices and the same argument applies, hence for |F|≤ 13,f(F)≤ 2..For n=14 we have the graph T_2 (Figure <ref>) which has exactly 2^2+6(2^2)+17(2)+18/2=14 vertices and we know that f(T_2)=3.We now prove the following result: Let F be a forest on n vertices and k ≥ 2 an integer.Suppose n^1/3≥ 2k-1. Thenf_k(F) ≤ (2k-1) ⌊ n^1/3⌋. We first prove the following lemmas:For every k≥ 2 and every graph G with maximum degree Δ, f_k(G) ≤ (k-1)Δ. By induction on Δ.If Δ=0, then the result is trivial since either |G| <k or there are k vertices of degree 0.So suppose the result holds for Δ=r and let G have Δ=r+1.If there are k vertices of maximum degree r+1 we are done.Otherwise, remove all the vertices of maximum degree Δin G —there are at most k-1 such vertices.The resulting graph H has maximum degree r and hence by the induction hypothesis, f_k(H) ≤ (k-1)r.Hencef_k(G) ≤ (k-1)r+k-1 = (k-1)(r+1)=(k-1)Δ(G)as required. 
Let G be a forest and let A be any subset of k ≥ 2 vertices of G.Define M(A) to be the set of vertices of V(G)\ A each having at least two neighbours in A.Then |M(A)|<|A|=k.Suppose |M(A)| ≥ |A|=k.Let B be any subset of M(A) of cardinality k and let H be the bipartite graph with vertices A ∪ B and only those edges connecting vertices in A to vertices in B.Since each vertex of B has degree at least 2 in H, |E(H)| ≥ 2k.But |V(H)|=2k, therefore H has a cycle contradicting the fact that G is a forest. We now prove Theorem <ref>.The degree of F can range from 0 to n-1.Let us divide this range into subintervalsS_j=[jn^1/3, (j+1)n^1/3 )j=0,…,⌊ n^1/3⌋-1with the last two intervals being S_⌊ n^1/3⌋=[ ⌊ n^1/3⌋ n^1/3, n^2/3) and S_L=[⌈ n^2/3⌉ ,n).Let us denote by A_j or A_L the set of vertices of F whose degrees fall in the intervals S_j or S_L respectively.We first claim that A_L contains at most ⌊ n^1/3⌋ vertices.Suppose not and let x_j be the number of vertices of F having degree j.Consider the forest F^* on n^* ≤ n vertices obtained by deleting all isolated vertices in F.Then clearly the number of vertices x_j of degree j ≥ 1 in F^* in the same as in F, and we have x_1+…+x_n-1=n^* and x_1+2x_2+…+(n-1)x_n-1≤ 2n^*-2.Multiplying the first equation by 2 and subtracting the second gives:x_1-∑_j=3^n-1(j-2)x_j ≥ 2. Hencex_1 ≥ 2 + ∑_j ≥ 3(j-2)x_j. In particularx_1 ≥ 2+(n^2/3-2)|A_L≥ 2(n^2/3-2)(n^1/3+1)=n+n^2/3-2n^1/3. Butn^* ≥ x_1+|A_L| ≥ n+n^2/3-2n^1/3+n^1/3+1=n+n^2/3-n^1/3+1>n^*a contradiction.We now proceed as follows.We remove from F the vertices in A_L and redistribute the resulting degrees among the intervals S_j, j=0,…,⌊ n^1/3⌋, recalculatingA_ j for j=0,…,⌊ n^1/3⌋.If there are at least k vertices with degrees in the last interval we stop.Otherwise we remove these vertices and again we redistribute the degrees among the intervals S_j, j=0,…, ⌊ n^1/3⌋-1, recalculatingA_ j for j=0,…,⌊ n^1/3⌋-1.This process continues until we reach one of the following possibilites: * We have deleted all vertices and we are left with only those vertices in A_0;* For some j ≥ 1, A_j contains at least k vertices. We consider these cases separately: * In this case we have deleted ⌊ n^1/3⌋ vertices from A_L and at most (k-1)⌊ n^1/3⌋ further vertices by deleting at most (k-1) vertices that were in the respective sets A_1,…,A_⌊ n^1/3⌋ at each stage.So altogether k⌊ n^1/3⌋ vertices have been deleted.But now the resulting graph has maximum degree at most ⌊ n^1/3⌋ and therefore, by Lemma <ref>, by deleting at most a further (k-1)⌊ n^1/3⌋ vertices we arrive at a graph with k vertices of maximum degree (or at most k-1 vertices at all).To do this we have altogether deleted at most (2k-1)⌊ n^1/3⌋ vertices, as required.* We have stopped the deletion process when A_j, j ≥ 1, contains at least k vertices, A_j being the set of vertices of the reduced forest having degrees in S_j=[jn^1/3,(j+1)n^1/3).Let v_1,v_2,…,v_k be the k vertices in A_j of largest degrees, say d_1 ≥ d_2≥…≥ d_k.Let us call this set of vertices A. 
By Lemma <ref>, |M(A)|<k where we recall that M(A) is the set of vertices adjacent to at least two vertices of A.Since a vertex v ∈ A can be adjacent to at most k-1 other vertices in A and k vertices in M(A), there are at least (v)-2k+1 vertices that are neighbours of v but which are not in A ∪ M(A).Since such vertices are adjacent to at most one vertex from A, these (v)-2k+1 vertices are only adjacent to v ∈ A and not to any other vertex in A.Let B(v) be the set of these neighbours of v.Now consider any vertex v_i ∈ A, i=1 … k.Suppose (v_i)=(v_k)+t_i.Then|B(v_i)| ≥(v_i) - 2k+1 ≥(v_k)+t_i-2k+1 ≥ n^1/3+t_i-2k+1 ≥ t_isince n^1/3≥ 2k-1.We therefore need to remove t_i vertices of B(v_i) (and this will not change the degree of any other vertex in A) in order to equate (v_i) and (v_k).However, since |B(v_i)| ≥ t_i, we can do this.Hence, equating all the degrees of the vertices v_1,…,v_k-1 to (v_k) can be done at the cost ofdeletingat most a further (k-1)⌊ n^1/3⌋ vertices.This means that we have deleted altogether at most (2k-1)⌊ n^1/3⌋ vertices, so we are done.Remark: The above proof also works for the more general class of graphs without even cycles.Lemma <ref> remains unchanged since the graph H used in the proof is bipartite by construction.A graph G on n vertices and without even cycles contains at most 3n/2 edges <cit.>.Therefore, in the proof of the Theorem, instead of the computation involving x_1 we compute an upperbound on the number of vertices in A_J, by noting that if this number is at least 3n^1/3+1 then3n ≥ 2|E(G)| ≥∑_v ∈ A_J≥ (3n^1/3+1)(n^2/3) = 3n + n^2/3>3n,a contradiction.We then remove the vertices of A_J and redistribute the resulting degrees, sacrificing at most 3n^1/3 vertices, and continue as in the proof.This givesf_k(G) ≤ (2k+1)n^1/3,giving a weaker bound for a more general class of graphs.§THE FUNCTIONS G(Δ,K) AND H(Δ,K)Lemma <ref>, which states that g(Δ,k) ≤ ( k-1)Δ, plays a crucial rule in the proofof Theorem <ref>.Another motivation to study g(Δ,k) =max{ f_k(G): Δ(G)≤Δ} comes from the Proposition <ref> below,which gives a weak support for the conjecture f_k(G) =< f(k) √(|G|) mentioned in the introduction,and also demonstrates that for graphs with e(G) = o(n^2),f_k(G) = o(n)(where e(G)is the number of edges of G) .Observe that it has not yet been proved in generalthat for fixed k and G a graph on n vertices,f_k(G) = o(n). Suppose G is a graph on n vertices and e(G) ≤ cn^1+β where 0 ≤β < 1 and let α= 1+ β/2. Then f_k(G)≤(k - 1 +2c)n^α,and in particular for β= 0,f_k(G) ≤ (k-1+2c) √(n).Define V_α = {v:(v)≥n^α} and suppose |V_α)|> 2cn^α. Then2e(G)= ∑{(v) : v ∈V(G)}≥∑{(v): v ∈ V_α} > n^α2cn^α = 2cn^2α =2cn^1+β≥ 2e(G), a contradiction. Hence |V_α| ≤ 2cn^α.Delete V_α we get a graph H with Δ(H) ≤ n^α.Hence applying Lemma <ref> we getf_k(G) ≤ |V_α| +(k-1)Δ(H) ≤ 2cn^α + (k-1)n^α= (k -1 +2c) n^α.So a better knowledge of the behavior of g(Δ,k) willhelp to obtain better bound on f_k(G) as well as f(n,k) =max{ f_k(G) : |G|= n }.For every Δ≥ 0 and k ≥ 2,* g(0,k) = 0.* g(1,k) = ⌊k-1/2 ⌋.* For Δ≥ 1,g(Δ,2)=⌈-3+√(8Δ+1)/2⌉. * Clearly if G is a graph with maximum degree Δ =0then either |G| ≥ k and we are done, or else |G| ≤ k-1 and we are done by the definition of f_k(G),hence g(0,k) = 0.* Consider Gwith maximum degree Δ(G) = 1.If there are already k vertices of degree 1 we are done.So assume there are at most k-1 vertices of degree 1. 
Byparity these vertices form exactly ⌊ k-1/2 ⌋isolated edges containing exactly 2 ⌊k-1/2 ⌋ vertices of degree 1.We delete from each isolated edge one vertex of degree 1to get an induced subgraph H with Δ(H) = 0.It follows, since g(0,k)=0, that f_k(G) ≤⌊ k-1/2 ⌋. This bound is sharp as demonstrated by the graph tK_2∪ mK_1where m ≥ 0and t =⌊ k-1/2 ⌋, k ≥ 3. * This is a restatement of Theorem <ref>.Determining g(2,k)requires more efforts, in particular we will use Ore's observation that if G is a graph on n verticeswithout isolated vertices, then the domination number of G, denoted γ(G), satisfiesγ(G) ≤⌊n/2 ⌋ <cit.>.For k ≥ 2,g(2,k) = k-1.The graph G =(k-1) K_1,2(k-1vertex-disjointcopies of the star K_1,2)has f_k(G) = k-1 as is easily checked.So g(2,k)≥ k-1. Let us prove the converse. Consider a graph G with Δ(G) = 2,otherwise by Proposition<ref>(part 2) we are done. Let n_2=|{v : (v) =2} |.Clearly if n_2≥ k we are done so we may assume 1 ≤ n_2 ≤ k-1. We collectthe (possible)components of G into threesubgraphs:A ={ all isolated verticesand isolated edges}, B = {all copies of K_1,2 },C= {all other components}. We denote by t the number of copies of K_1,2in Band we observe thatt ≤ n_2 and that in each component in C the vertices of degree 2 induce either a path(including a single edge)or a cycle. We claim that if t > ⌊k-1/2 ⌋ we are done by deleting all n_2-t vertices of degree 2 in C and from each copy of K_1,2 in Bwe delete a leafto getfrom G an induced subgraph H with Δ(H) = 1 and with at least 2 (⌊k-1/2 ⌋ + 1)≥k vertices of degree 1,and we have deleted altogether n_2 =< k-1vertices. So we shall assumet ≤⌊k-1/2 ⌋. Consider the subgraph Finduced by the vertices of degree 2 in C.Case1:|F| = 0. If|F| = 0(namely C is empty)then n_2 = t≤⌊k-1/2 ⌋. Delete a leaf from each copy of K_1,2 in B.We get fromG a graph H withΔ(H) = 1 (as in A all components have maximum degree at most 1). If in H there are already k vertices of degree 1, we are done as we have deletedt ≤⌊k-1/2 ⌋ vertices. Otherwise by Proposition<ref> (part 2), f_k(H) ≤⌊k-1/2 ⌋and hence f_k(G)≤ 2 ⌊k-1/2 ⌋≤ k-1. Case 2:|F| > 0.Then as we have noted before, due to the components of C,there are no isolated vertices in F,and byOre's result γ(F) ≤⌊n_2 - t /2 ⌋≤⌊k- 1 - t/2 ⌋. Let D be a dominating set for F that realises γ(F), hence |D| ≤n_2 - t/2. Delete D and consider the induced subgraph H onA ∪ C.Clearly Δ(H) ≤ 1 and denote by n_1 the number of vertices of degree 1 in H. Now we look again at B.Case 1: t= 0. Since t = 0,B is empty, and either n_1≥ k and we are done as we have deleted |D| =≤⌊n_2/2⌋≤⌊k-1/2 ⌋ verticesorby Proposition<ref> (part 2),f_k(G) ≤ f_k( H)+ |D| ≤2 ⌊k-1/2 ⌋≤ k-1. Case2:1 ≤ t ≤⌊k-1/2 ⌋. * if n_1 ≥k - 2t then deleting a leaffrom every copy of K_1,2in B,we getan induced graphH^* onA ∪ B ∪ C(extending H tothe leftover of B) with Δ(H^*) = 1 and at least k- 2t +2t = k vertices of degree 1 and we are done as we have deleted altogether|D| + t≤n_2 - t /2 +t=n_2 +t/2≤ n_2 ≤ k-1vertices.* If n_1 ≤ k - 1 -2t(recalln_1 is the number of vertices of degree 1 in H formed from A∪{C \ D}), then we delete n_1/2 independentvertices of degree 1 inH,and t vertices of degree 2 in B to get an induced subgraph H^* with Δ(H^*) = 0. But g(0,k) = 0hence f_k(H^*) = 0andf_k(G)≤n_1/2 + |D| + t ≤k-1-2t/2 + k-1-t/2 + t = 2k - 2 - t/2≤ k-1and the proof is complete.The following construction supplies a lower bound for g(Δ,k) in terms of g(Δ,2). For even k ≥ 2, g(Δ,k) ≥ g(Δ,2) k/2 + k/2-1. 
Recall the sequence a_t = t + 12+1,which for t ≥ 1 gives the smallest maximum degreefor which there is a graph G with f(G)=t. Such a graph is ⋃ K_1,a_j for j = 1,…,tand incase we have a_t ≤Δ < a_t+1,G = K_1, Δ∪ K_1,a_ j: j = 1,…,t-1. Now we takek-1 copies ofK_1,a_t and k/2 copies of K_1,a_j: j = 1,…,t-1.In case a_t ≤Δ < a_t+1 we take k-1 copies of K_1,Δ and k/2copies of K_1,a_j:j = 1,…, t-1. Note that for k = 2 this is exactly the sequence that realises Theorem <ref>. Observe now that we cannot equateto degree Δ as there are just k-1 such degrees.So we can equate to the second largest degree a_t-1 by deleting exactly Δ - a_t-1leaves from k/2 vertices of the maximum degree and k -1 - k/2 other centres.Altogether we deleted(Δ - a_t)k/2+ k/2-1 ≥g(Δ,2)k/2+ k/2- 1.In case Δ = a_t we have deleted exactly g(Δ,2)k/2+ k/2-1. We can now equate to some value x such thata_t-1> x≥a_t-2+j, j ≥ 1.However clearly this requires the deletion of more vertices then just to equate to a_t-1and in particular the deletion of at leastg(Δ,2) k/2 + k/2- 1vertices. Now we can try to equate to a_t-2. The cheapest way is to delete the k-1 vertices of degree Δ and a_t-1 -a_t-2leaves from each of the k/2 vertices of degree a_t-1.So altogether we deleted k - 1 + (a_t-1- a_t-2)k/2= k-1 +(g(Δ,2) - 1)k/2= g(Δ,2)k/2+ k/2 - 1vertices.Again we can now try to equate to some value x such thata_t-2> x≥a_t-3+j, j ≥ 1..However Clearly this requires the deletion of more vertices then just to equate to a_t-2 and in particular the deletion of at leastg(Δ,2) k/2 + k/2- 1vertices. So this deletion process continues and we always forced to delete at leastg(Δ,2) k/2 + k/2- 1vertices, even if we delete all the centres of the stars to get an induced subgraph with all degrees equal 0.Hence for even k ≥ 2we get g(Δ,k) ≥ g(Δ,2)k/2 + k/2-1 (which is sharp for k=2). While slight improvements on this lower bound are possible forodd k ≥ 3,our goal in this construction is only to demonstrate a linear lower boundon g(Δ,k)in terms of g(Δ,2) and k for which the construction suffices. We now turn our attention to h(Δ,k).Recall thatfor k ≥ 2, a graph Gis k-feasible if it contains an induced subgraph H (possibly also H = G) such that in H there are at least k vertices that realiseΔ(H), and we defineh(Δ,k)= max{ |G| : Δ(G) ≤Δ}. For every Δ≥ 0 and k ≥ 2, * h(Δ,k)≤ R(k,k)- 1. * h(0,k) = k-1.* h(1,k) =⌊k/2⌋ + 2 ⌊k-1/2⌋.* For odd k ≥ 3, h(2,k) = 2k-2, and for even k ≥ 2,h(2,k) = 2k-3.* h(Δ,k)≤ g(Δ,k)+ k-1. * Clearly if |G| ≥ R(k,k)then G hasa vertex-set A,|A| ≥ ksuch that the induced subgraph on Aiseither a clique or an independent set . Hence deleting V - Awe are left with a regular graph on at least k vertices hence G is k-feasible and h(Δ,k) ≤ R(k,k)- 1.* h(0,k) = k-1 is trivially realised by ( k-1) K_1 i.e. k-1 isolated vertices.* A lower bound for h(1,k)is h(1,k)≥⌊k/2⌋ + 2 ⌊k-1/2⌋realised by the graph G =⌊k/2⌋ K_1∪⌊k-1/2⌋ K_2which is trivially seen to be non-k-feasible. Next suppose G is a graph having⌊k/2⌋ + 2 ⌊k-1/2⌋+ jvertices, j ≥ 1. Write⌊k/2⌋ + 2 ⌊k-1/2⌋+ j= x+2y where x denotes the number of 0-verticesand 2y the number of 1-vertices in G.Now if y >⌊k-1/2⌋ we have at least k vertices of degree 1 and we are done.If0 ≤ y ≤⌊k-1/2⌋ then delete y 1-vertices, one of eachcopy of K_2,and we are left with at least⌊k/2⌋ +⌊k-1/2⌋ +j ≥⌊k/2⌋ + ⌊k-1/2⌋+1= k vertices of degree0. 
Hence G is k-feasible, andh(1,k) =⌊k/2⌋ + 2 ⌊k-1/2⌋.* Clearly h(2,k)≤ 2k - 2since ifG has Δ = 2and at least 2k- 1 verticesthen by deleting at most k- 1 =g(2,k) vertices we cannot get below k so there must be induced Hwith at least kvertices realizing the maximum degree.Suppose kis oddand k ≥ 3.Consider the graph G = k-1/2P_4(k-1/2 copies of the path on four vertices P_ 4).Clearly |G| = 2k-2 having exactly k-1 2-vertices and k-11-vertices. Observe thatif G is k-feasible then in at least one of the P_4we should be able to delete just one vertex to get the remaining three vertices of the same degree, otherwise if in eachcopy of P_4 (or what remains of it after deleting some vertices) we willhave at most two vertices of the same degree then over all G we will have at most k-1 vertices of the same degree,meaning Gis not k-feasible. However it is impossible to delete one vertex from P_4 to get all the remaining three vertices of the same degree hence G is not k-feasible proving h(2,k) = 2k-2 for odd k ≥ 3. Suppose k is even,k ≥ 2. The case k = 2 is trivial hence we assumek ≥ 4. Consider the graph G=k-1/2P_4∪K_1.Clearly |G| = 4k-2/2 +1 = 2k-3 having exactlyk-21-vertices,k-22-vertices and one0-vertex. If G was k-feasible then by deleting the 0-vertex v, H = G-vwould be at leastk-1-feasible with oddt = k -1 ≥ 3. But H is exactly the graph which was proved above to be non t-feasible for odd t ≥ 3,soG is not k-feasible, proving h(2,k)≥ 2k-3 for even k ≥ 4. We have toshow that if |G| = 2k-2 and Δ(G)=2,then for even k ≥ 4, G is k-feasible,this will complete the proof that for even k ≥ 2,h(2,k) = 2k-3. Suppose on the contrary that|G| = 2k-2and Δ(G) = 2 but G is not k-feasible.Let n_j, j = 0,1,2be the number of vertices of degree j = 0,1,2respectively in G. Since G is non-k-feasibleand by the value of h(1,k)we may assume 1 ≤ n_2≤k-1. However2k- 2 > h(2,k-1) = 2(k-1) -2 = 2k-4. Hence G is k-1-feasible. So either n_2 = k-1or else, by removingat mostk -2vertices, we getan induced subgraph H,|H| > = k with at least k-1 vertices realising the maximum degree of H. If Δ( H) = 1then, since k - 1 is odd, it forces that there areat least k 1-verticesbut then G is k-feasible.Otherwise Δ(H) = 0 but |H| ≥ k and again G is k-feasible. So only the case n_2 = k-1is left.Since n=2k-2 and n_2=k-1 then by parity n_1 ≤ k-2and n_0 ≥ 1. We collectthe (possible)components of Ginto threesubgraphs:A = {all isolated verticesand isolated edges},B = {all copies of K_1,2},C={all other components}. We denote by t the number of copies of K_1,2in Band also observe that t < n_2 = k-1 since otherwise |G| = 3k-3 > 2k-2=|G|a contradiction since k ≥ 2. Also observethat in each component in C the vertices of degree 2 induced either ona path (including a single edge)or a cycle. Claim: If t > ⌊k-1/2⌋ we are done. This is because F is not empty since t < n_2,and by the observation above δ(F) ≥ 1hence by Ore's resultthe domination number of F satisfies γ(F) ≤⌊n_2 - t /2 ⌋≤⌊k-1-t /2 ⌋. Let D be a minimum dominating set for F. Deleting Dfrom Cand from each copyof K_1,2in B we delete a leafto getan induced subgraph H with Δ(H) = 1 and with at least 2 (⌊k-1/2 ⌋ +1) ≥ k vertices of degree 1, meaning G is k-feasible.Observe we have deleted at mostt + ⌊n_2 - t /2 ⌋≤n_2+t /2 ≤⌊2n_2 -1 /2 ⌋ = ⌊2k-3/2⌋ = k-2vertices, proving the claim. 
Consider the subgraph F induced by the vertices of degree 2 in C and recall |F| > 0,hence |F| ≥ 2.Then as we have noted before, due to the components of C,there is no isolated vertices in F,and byOre's result γ(F) ≤⌊n_2 - t /2 ⌋≤⌊k -1 - t/2 ⌋. Let D be a dominating set for D that realises γ(F), hence |D|≤ n_2 - t/2. Delete D and consider the induced subgraph H onA ∪ C. Clearly Δ(H) ≤ 1 and denote by x(1) the number of vertices of degree 1 in H. Now let us look again at B. Case1:t= 0. Since t = 0, B is empty, and we have deleted |D| ≤⌊n_2/2⌋≤⌊ k-1/2 ⌋ = k-2/2 vertices since k is even.So the number of vertices remains isat least 2k-2- k-2/2=3k-2/2. But as k is even, h(1,k)= ⌊k/2 ⌋ + 2⌊k-1/2⌋=k/2 + 2(k-2)/2=3k - 4/2< 3k-2/2 hence H is k-feasibleand so G is k-feasible. Case2:1 ≤t ≤⌊k-1/2 ⌋.We consider two cases: * if x(1) ≥k - 2tthen deleting a leaffrom every copy of K_1,2 in Bwe getan induced graphH^* onA ∪ B ∪ C(extending H tothe leftover of B) with Δ(H^*) = 1 and at least k- 2t +2t = k vertices of degree 1 and we are done as we have deleted altogether |D| + t ≤n_2 - t /2 +t= n_2 +t/2≤ k-2(as before).Hence G is k-feasible.* if x(1) ≤ k - 1 -2t(recallx(1)is the number of vertices of degree 1 in H formed from A∪{ C \D }), then by the even parity of x(1) and as k is even we must have x(1) ≤ k-2-2t. Nowdelete x(1)/2 independentvertices of degree 1 inH,and t vertices of degree 2 in B to get an induced subgraph H^*with Δ(H^*) = 0. We have removed x(1)/2 + |D| + t ≤k-2-2t/2 + k-1-t/2 + t = 2k - 3 -t /2≤2k-4/2 = k-2vertices (since t ≥ 1),hence |H^*| ≥ k and we have k vertices of degree 0realizing Δ(H^*). Hence H^* is k-feasible and so does G,completing the proof.*Suppose |G|= g(Δ,k) +k andΔ(G) = Δ.Then by the definition of g(Δ,k), by deleting at most g(Δ,k) vertices we either get below k vertices or have an induced subgraph H with at least k vertices realizing the maximum degree of H. But deleting g(Δ,k)vertices from G will leave us with a graph on at least kvertices hencethe second possibility above holds and G is k-feasible, and we conclude that h(Δ,k)≤g(Δ,k) + k-1.§ OPEN PROBLEMSWe conclude byproposing the following open problems: * Certainly the most intriguing problem is to solve the Caro-Yuster conjecture that f(n,k) ≤ f(k) √(n).As mentioned we proved that f(2) = √(2)is sharp and best possible, and it is known that f(3)≤ 43. For k ≥ 4 the conjecture remains open.Even a proof that f(n,k) =o(n) is of interest.* Theorem <ref> supplies an O(n^2)algorithm to compute f(G). Can f_3(G)be computed in polynomial time?* We have calculated, in section 4, the exact values ofg(Δ,k) for Δ= 0,1,2, and we have given a generalconstructive lower bound for g(Δ,k).Determining g(3,k)seems a considerably more involved task, as well as proving a conjecture inspired by the Caro-Yuster conjecture namely: For k ≥ 2 there is a constant g(k)such that g(Δ,k)≤ g(k) √(Δ). This conjecture, if true,implies the Caro-Yuster conjecture.* We introduced the notion of a k-feasible graph and the corresponding function h(Δ,k) discussed in Section 4.We have determined the exact values of h(Δ,k)for Δ = 0,1,2.We pose the problem to determine more exact values of h(Δ,k)in particular for Δ = 3 as well as to determine h(k) = max{ h(Δ,k) : Δ≥ 0}.Clearly as already proved in section 4,h(k)≤ R(k,k) -1.* Lastly we mention again the conjecture about forests: IfFis a forest on n vertices, where n ≤t^3 + 6t^2 +17t +12/6then f(F) ≤ tand this bound is sharp.plain
http://arxiv.org/abs/1704.08472v1
{ "authors": [ "Yair Caro", "Josef Lauri", "Christina Zarb" ], "categories": [ "math.CO", "05C07" ], "primary_category": "math.CO", "published": "20170427081850", "title": "Equating two maximum degrees" }
A Rex Solver A Rex Solver Dept. of Computing Science, UAlberta, Canada, [email protected], <http://webdocs.cs.ualberta.ca/ hayward/>A Reverse Hex Solver Kenny Young The authors gratefully acknowledge the support of NSERC. Ryan B. Hayward Accepted 2018 February 28. Received 2018 February 23; in original form 2017 April 26 ======================================================================================== We present Solrex, an automated solver for the game of Reverse Hex. Reverse Hex, also known as Rex, or Misère Hex, is the variant of the game of Hex in which the player who joins her two sides loses the game. Solrex performs a mini-max search of the state space using Scalable Parallel Depth First Proof Number Search,enhanced by the pruning of inferior moves andthe early detection of certain winning strategies.Solrex is implemented on the same code base as the Hex program Solver, and can solve arbitrary positions on board sizes up to 6×6, with the hardest position taking less than four hours on four threads.§ INTRODUCTION In 1942 Piet Hein invented the two-player board game now called Hex <cit.>. The board is covered with a four-sided array of hexagonal cells. Each player is assigned two opposite sides of the board. Players move in alternating turns. For each turn, a player places one of their stones on an empty cell. Whoever connects their two sides with a path of their stones is the winner.In his 1957 Scientific American Mathematical Games column,Concerning the game of Hex, which may be played on the tiles of the bathroom floor, Martin Gardner mentions the misère version of Hex known as Reverse Hex, or Rex, or Misére Hex: whoever joins their two sides loses <cit.>. See Figure <ref>.So, for positive integers n, who wins Rex on n×n boards? Using a strategy-stealing argument, Robert O. Winder showed that the first (resp. second) wins when n is even (odd) <cit.>. Lagarias and Sleator further showed that, for all n, each player has a strategy that can avoid defeat until the board is completely covered <cit.>.Which opening (i.e. first) moves wins?Ronald J. Evans showed that for n even, opening in an acute corner wins <cit.>. Hayward et al. further showed that, for n even and at least 4, opening in a cell that touches an acute corner cell and one's own side also wins <cit.>.The results mentioned so far prove the existence of winning strategies. But how hard is it to find such strategies? In his 1988 book Gardner commented that “4×4 [Rex] is so complex that a winning line of play for the first player remains unknown. <cit.>. In 2012, based on easily detected pairing strategies, Hayward et al. explained how to find winning strategies for all but one (up to symmetry)opening move on the 4×4 board <cit.>.In this paper, we present Solrex, an automated Rex solver that solves arbitrary Rex positions on boards up to 6×6. With four threads, solving the hardest 6×6 opening takes under 4 hours; solving all 18 (up to symmetry) 6×6 openings takes about 7 hours.The design of Solrex is similar to the design of the Hex program Solver. So, Solrex searches the minimax space of gamestates using Scalable Parallel Depth-First Proof Number Search, the enhanced parallel version by Pawlewicz and Hayward <cit.> of Focussed Depth-First Proof Number Search of Arneson, Hayward, and Henderson <cit.>. Like Solver, Solrex enhances the search by inferior move pruning and early win detection. The inferior move pruning is based on Rex-specific theorems. 
The win detection is based on Rex-specific virtual connections based on pairing strategies.In the next sections we explain pairing strategies, inferior cell analysis, win detection, the details of Solrex, and then present experimental results.§ DEATH, PAIRING, CAPTURE, JOININGRoughly, a dead cell is a cell that is useless to both players, as it cannot contribute tojoining either player's two sides. Dead cells can be pruned from the Rex search tree. Related to dead cells are captured cells, roughly cells that are useless to just one playerand so can be colored for the other player. In Hex, each player wants to capture cells; in Rex, each player wants to force the opponent to capture cells. In Rex, such opponent-forced capture can be brought about by pairing strategies. As we will see in a later section, pairing strategies can also be used to force the opponent to join their two sides.Before elaborating on these ideas, we give some basic terminology. Let X denote the opponent of player X.For a given position, player X colors cell c means that player Xmoves to cell c, i.e. places a stone of her color on cell c. A cell is uncolored if it is unoccupied. To X-fill a set of cells is to X-color each cell in the set; to fill a set is either to X-fill or X-fill the set.A state S=P^X is a position Ptogether with the specified player X to move next. The winner of S is whoever has a winning strategy from S.For a position P and a player X, a X-joinset is a minimal set of uncolored cells which when X-colored joins X's two sides; a joinset is an X-joinset or an X-joinset; an uncolored cell is live if it is in a joinset, otherwise it is dead; a colored cell is dead if uncoloring it would make it dead.For an even size subset C of uncolored cells of a position or associated state, a pairing Π is a partition of Cinto pairs, i.e. subsets of size two. For a cell c in a pair {c,d}, cell d is c's mate. For a state S, a player Y, and a pairing Π, a pairing strategy is a strategy for Y that guarantees that, in each terminal position reachable from S, at most one cell of each pair of Π will be Y-colored.For a state S=P^X, Last is that player who plays last if the game ends with all cells colored, and Notlast is the other player, i.e. she who plays second-last if the game ends with all cells uncolored. So, Last (Notlast) is whoever plays next if and only if the number of uncolored cells is odd (even). For example, for S=P^X with P the empty 6×6 board, Last is X and Notlast is X, since X plays next and P has 36 uncolored cells. For state S and pairing Π, each player has a pairing strategy for S.It suffices to follow these rules. Proving that this is always possible is left to the reader.First assume Y is Last. In response to Y coloring a cell in Π, Y colors the mate. Otherwise, Y colors some uncolored cell not in Π. Next assume Y is Notlast. In response to Y coloring a cell in Π with uncolored mate, Y colors the mate; otherwise, Y colors a cell not in Π; otherwise (all uncolored cells are in Π, and each pair of Π has both or neither cell colored), Y colors any uncolored cell of Π. For a player X and a pairing Π with cell set C of a position P or associated state S=P^Y, we say Π X-captures C if X-coloring at least one cell of each pair of Π leaves the remaining uncolored cells of C dead; and we say Π X-joins P if X-coloring at least one cell of each pair of Πjoins X's two sides.Notice that every captured set (as defined here, i.e. 
for Rex) comes from a pairing and so has an even number of cells, as does every X-join set.§ INFERIOR CELL PRUNING AND EARLY WIN DETECTIONWe now present the Rex theorems that allow our solver to prune inferior moves and detect wins early.For a position P, a player X, and a set of cells C, P+C_X is the position obtained from P by X-coloring all cells of C, and P-C is the position obtained from P by uncoloring allcolored cells of C. For clarity, we may also write P-C_X in this case where X is theplayer who originally controlled all the cells of C. Similarly, for a state S=P^Y, where Y=X or X, S+C_X is the state (P+C_X)^Y. Also, in this context, when C has only one cell c, we will sometimes write c_X instead of {c}_X.For states S and T and player X, we write S ≥_X T if X wins T whenever X wins S, and we write S ≡ T if the winner of S is the winner of T, i.e. if S≥_X T and T ≥_X S for either player X.An X-strategy is a strategy for player X. For an even size set C of uncolored cells of a state S, S ≥_X S+C_X.Assume π^+ is a winning X-strategy for S^+ = S+C_X. Let π be the X-strategy for S obtained from π^+ by moving anywhere in C whenever X moves in C. For any terminal position reachable from S,the set of cells occupied by X will be a superset of the cells occupied by X in the corresponding position reachable from S^+, so X wins S.For a position P with uncolored cell c, (P+c_X)^Y≥_X P^Y.First assume Y =X. Assume X wins S=P^X. Then, for every possible move from S by X, X can win.In particular, X can win after X colors c.So X wins (P+c_X)^X.Next assume Y=X. Assume X wins S=P^X. We want to show X wins S'=(P+c_X)^X. Let c' a cell to which X moves from S', let C={c,c'}, and let S” be the resulting state (P+C_X)^X. X wins S so,by Theorem <ref>, X wins S”. So, for every possible move from S', X wins. So X wins S'.For an X-captured set C of a state S, S+C_X ≥_X S.Assume X wins S^+=S+C_X with strategy π^+.We want to show that X wins S. Let Π be an X-capture pairing for C, and modify π^+ by adding to it the Π pairing strategy for X.Let Z be a terminal state reachable from S by following π. Assume by way of contradiction that Z has an X-colored set of cells joining X's two sides. If such a set Q^* exists, then such a set Q exists in which no cell is in C. (On C X follows a Π pairing, so in Z at most one cell of each pair of Π is X-colored. Now X-color any uncolored cells of C. Now at least one cell of each pair is X-colored, and C is X-captured, so each X-colored cell of C is dead, and these cells can be removed one at a time from Q^* while still leaving a set of cells that joins X's two sides. Thus we have our set Q.) But then the corresponding state Z^+ reachable from S^+ by following π^+ has the same set Q, contradicting the fact that X wins S^+.For an X-captured set C of a state S, S ≡ S+C_X.By Theorem <ref> and Theorem <ref>.For a player X and a position P with uncolored dead cell d, (P+d_X)^X≥_X P^X. A move to a dead cell is at least as good as any other move.Coloring a dead cell is equivalent to opponent-coloring the cell. So this theorem follows by Theorem <ref>.For a position P with uncolored cells c,k with c dead in P+k_X, (P+c_X)^X≥_X (P+k_X)^X. Prefer victim to killer.(P+k_X)^X≡ (P+k_X+c_X)^X ≥_X (P+c_X)^X. For a position P with uncolored cells c,k with c dead in P+k_X, (P+c_X)^X≥_X (P+k_X)^X. Prefer vulnerable to opponent killer.Assume k is a winning move for X from P^X, i.e. assume X wins S=(P+k_X)^X. Consider any such winning strategy π. 
We want to show c is also a winning move for X from P^X, i.e. that X wins S'=(P+c_X)^X.To obtain a winning X-strategy π' for S', modify π by replacing c with k: whenever X (resp. X) colors c in π,X (X) colors k in π'. In P, X-coloring k kills c: so in P, if some X-joinset J contains c, then J must also contain k. But a continuation of π' has both k and c X-colored if and only if the corresponding continuation of π has them both X-colored. So, since X wins S following π, X wins S' following π'.For a position P with uncolored cell d andset C that is X-captured in (P+d_x)^X, for all c∈ C,(P+c_x)^X≥_X (P+d_x)^X. Prefer capturee to capturer. (P+c_X)^X≥_X(P+C_X+d_X)^X≡ (P+d_X)^X.Our next results concern mutual fillin, namely when there are two cells a,b such that X-coloring aX-captures b and X-coloring bX-capturesa.Let P be a position with sets A,B containing cells a,b respectively, such that A is X-captured in (P+b_X),and B is X-captured in (P+a_X). Then P≡ P+a_X+b_X.By if necessary relabelling {X,a,A} and {X,b,B}, we can assume X plays next. We claim that a X-dominates each cell in A+B. Before proving the claim, observe that it implies the theorem, since after X colors a, all of B is Y-captured, so Y can then color any cell of B, in particular, b.To prove the claim, consider a strategy that X-captures A in P+b_X. Now, for all α in A+B, (P+α_X)^X ≤_X(P+b_X)^X      (Theorem <ref> twice: remove α_X, add b_X)≡(P+A_X+b_X)^X      (capture)≤_X (P+a_X+B_X)^X      (Theorem <ref>, repeatedly for X and then X)≤_X (P+a_X)^X      (capture)So the claim holds, and so the theorem.Let c be any X-colored cell in a position P as described in Theorem <ref>. Then (P-c+a_X)^X̅≥_X P^X̅. Prefer filled to mutual fillin creator.Define b' to be the mate of b in the X-capture strategy for B in (P+a_X).P^X̅ ≡ (P+a_X+b_X̅)^X̅      (Theorem <ref>)≡ (P+a_X+B_X-b'_X̅)^X̅      (filling captured cells, now b' dead)≡ (P+a_X+B_X)^X     (coloring b')≡ (P+a_X)^X      (capture)≤_X (P-c+a_X)^X̅      (Theorem <ref>) Finally, we mention join pairing strategies. For a state S=P^X with an X-join pairing Π, X wins P^X.It suffices for X to follow the Π strategy. In each terminal state Z player X will have colored at most one cell of Π. From Z obtain Z' by X-coloring any uncolored cells: this will not change the winner. But in Z' at least one cell of each pair of Π is X-colored, and Π is an X-join pairing. So in Z' X's two sides are joined, so in Z X's two sides are joined. So X wins. § EARLY WIN DETECTIONFor a position P, a X-join-pairing strategyis a pairing strategy that joins X's two sides, and an X-pre-join-pairing strategy is an uncolored cell k together with an X-join-pairing strategy of P+k_X; here k is the key of this strategy. The key to our algorithm is to find opponent (pre)-join-pairing strategies. When it is clear from context that the strategies join a player's sides, we call these simply (pre)-pairing strategies. Let P be a position with an X-join-pairing strategy. Then X wins P^X and also P^X.This follows from Theorem 7 in <cit.>: X can force X to follow the X-join-pairing strategy.Let P be a position with an X-pre-join-pairing strategy and with X= Last. Then X wins P^X and also P^X.This follows from Theorem 6 in <cit.>.X can avoid playing the key of the pre-pairing strategy,forcing X to eventually play it. § SOLREXSolrex is based on Solhex, theHex solver of the Benzene code repository <cit.>. The challenge in developing Solrex was to identify and remove any Hex-specific, or Rex-unnecessary, aspects of Solhex — e.g. 
permanently inferior cells apply to Hex but not Rex — and then add any Rex-necessary pieces. E.g., it was necessary to replace the methods for finding Hex virtual connections with methods that find Rex (pre-) pairing strategies.Search follows the Scalable Parallel Depth First variant of Proof Number Search, with the search focusing only ona limited number of children (as ranked by the usual electric resistance model) at one time <cit.>.When reaching a leaf node, using a database of fillin and inferior cell patterns, we apply the theorems of <ref>. We find dead cells by applying local patterns and by searching for any empty cells whose neighbourhood of empty cells, after stone groups have been removed and neighbouring empty cells contracted, is a clique. We iteratively fillin captured cells and even numbers of dead cells until no more fillin patterns are found. We also apply any inferior cell domination that comes from virtual connection decompositions<cit.>.We then look into the transposition table to see if the resulting state win/loss value is known, either because we previously solved, or because of color symmetry (a state which looks the same for each player is a win for Notlast). Then inferior cells are pruned. Then, using H-search <cit.>in which the or-rule is limited to combining only 2 semi-connections, we find (pre)-join-pairing strategies. Then, for X the player to move, we prune each key of every X-pre-join-strategy.H-search is augmented by observing that semi-connections that overlap on a captured set of endpoints do not conflict and so can be combined into a full connection <cit.>. Notice that augmented H-search is not complete:some pairing strategies (e.g. the mirror pairing strategy for the n×(n-1) board <cit.>) cannot be found in this way.Figure <ref> shows the start of Solrex's solution of 1.Bd1, the only unsolved 4×4 opening from <cit.>. First, inferior cells are found: White b1 captures a1,a2; a2 kills a1; b2 captures a2,a3; c2 leaves c1 dominated by b2; d2 captures d3,d4; etc. See Figure <ref>. Only 5 White moves remain: a1,c1,a4,b4,d4. After trying 2.Wa4, a White pre-join-pairing strategy is found, so this loses. Similarly, 2.Wb4 and 2.Wd4 also lose. Now 2 White moves remain: a1,a3. From 2.Wa1, search eventually reveals that 3.Bc1 wins (a2 also wins). From 2.Wc1, search reveals that 3.Ba1 wins (b2 and d4 also win). The deepest line in solving this position is 1.Bd1 2.Wc1 3.Bd4 4.Wc4 5.Bb2 6.Wa3 7.Ba4 8.Wb4.§ EXPERIMENTSWe ran our experiments on Torrington, a quad-core i7-860 2.8GHz CPU with hyper-threading, so 8 pseudo-cores. For 5×5 Rex,our test suite is all 24 replies to opening in the acutecorner:[Allopening 5×5 Rex moves lose, so we picked all possible replies to the presumably strongest opening move.] this takes Solrex 13.2s. For 6×6 Rex, our test suite is all 18 (up to symmetry) 1-move opening states: this takes Solrex 25900s. To measure speedup, we also ran the 18 1-move 6×6 openings on a single thread, taking 134635s.To show the impact of Solrex's various features, we ran a features knockout test on the 5×5 test suite. For features which showed negligible or negative contribution, we ran a further knockout test on the hardest 6×6 position, 1.White[d2], color-symmetric to 1.Black[e3].The principle variation for this hardest opening is shown in Figure <ref>. The results are shown below. 
Figure <ref> showsall losing moves after the best opening move on 5×5 (all opening 5×5 moves lose), and all losing opening moves on 6×6.Figure <ref> shows three new Rex puzzles we discovered by using Solrex. The middle puzzle was the only previously unsolved 4×4 position. The other two were found by using Solrex to search for positions withfew winning moves.2|c|5×5 knockout tests version   time ratio all features on 1.0 (13.9s)no dead clique cutset .97 unaugmented H-search .99 no mutual fillin 1.00 no color symmetry pruning 1.01 no VC decomp 1.06 no dead fillin 1.07 no resistance move ordering   1.62 no capture fillin 2.02 no inferior pruning 2.30 no H-search 89.83  2|c|6×6 knockout test version   time ratio all features on 1.0 (13646 s)unaugmented H-search1.10no color symmetry pruning 1.13no dead clique cutset 1.37no mutual fillin 1.44no VC decomp 1.95 § CONCLUSIONSAll features listed in the knockout tests contributed significantly to shortening search time: the four features that contributed no improvement on 5×5 boards all contributed significantly on 6×6 boards.The effectiveness of these pruning methods – which exploit pruning via local patterns in a search space that grows exponentially with board size — explained by Henderson for Hex, is clearly also valid for Rex <cit.>:In almost all cases, we see that feature contributions improved with board size. We believe this is partly because the computational complexity of most of our algorithmic improvements is polynomial in the board size, while the corresponding increase in search space pruning grows exponentially. Furthermore, as the average game length increases, more weak moves are no longer immediately losing nor easily detectable via previous methods, and so these features become more likely to save significant search time. Of these features, by far the most critical was H-search, which yielded a time ratio of about 90 on 5×5 Rex when omitted. The enormous time savings resulting from H-search is presumably because our general search method does not learn to recognize the redundant transpositions that correspond to the discovery of a (pre-) pairing strategy. So H-search avoidssome combinatorial explosion.Solrex takes about 7 hours tosolve all 18 (up to symmetry)6×6 boardstates; by contrast, Solhex takes only 301 hours to solve all 32 (up to symmetry) 8×8 boardstates <cit.>. So why is Solhex faster than Solrex?One reason is because Hex games tend to be shorter than Rex games: in a balanced Rex game, the loser can often force the winner to play until the board is nearly full. Another reason is there are Hex-specific pruning features that do not apply to Rex: for example, the only easily-found virtual connections for Rex that we know of are pairing strategies, and there seem to be far fewer of these than there areeasily-found virtual connections in Hex. Also, in Hex, if the opponent can on the next move create more than one winning virtual connection, then the player must make a move which interferes with each such connection or lose the game; we know of no analogous property for Rex.The general approach of Solhex worked well for Solrex, so this approach might work for other games, for example connection games such as Havannah or Twixt.§.§.§ Solutions to puzzles. Evans' puzzle: b1 (unique). Three new puzzles: Left: a2 (unique). Middle: Black wins; best move for White is a1, which leaves Black with only 2 winning replies (a2, c1); all other White moves leave Black with at least 3 winning replies (e.g. c1 leaves a1, b2, d4). 
Right: e3 (unique).§.§.§ Acknowledgments. We thank Jakub Pawlewicz for helpful comments.
http://arxiv.org/abs/1707.00627v1
{ "authors": [ "Kenny Young", "Ryan B. Hayward" ], "categories": [ "cs.AI", "cs.GT" ], "primary_category": "cs.AI", "published": "20170426175108", "title": "A Reverse Hex Solver" }
^1 Department of Physics & Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA ^2 Center for Computation & Technology, Louisiana State University, Baton Rouge, LA 70803, USA ^3 Department of Physics, Xi'An Jiaotong University, Xi'An, Shaanxi, China Recent experiments have suggested that the electron-phonon coupling may play an important role in the γ→α volume collapse transition in Cerium. A minimal model for the description of such a transition is the periodic Anderson model. In order to better understand the effect of the electron-phonon interaction on the volume collapse transition, we study the periodic Anderson model with coupling between Holstein phonons and electrons in the conduction band. We find that the electron-phonon coupling enhances the volume collapse, which is consistent with experiments in Cerium. While we start with the Kondo Volume Collapse scenario in mind, our results capture some interesting features of the Mott scenario, such as a gap in the conduction electron spectra which grows with the effective electron-phonon coupling. Periodic Anderson Model with Holstein Phonons for the Description of the Cerium Volume Collapse Enzhi Li^1,2, Shuxiang Yang^1,2, Peng Zhang^3, Ka-Ming Tam^1,2, Mark Jarrell^1,2, and Juana Moreno^1,2 December 30, 2023 ===========================================================================================================§ INTRODUCTION The isostructural volume collapse of Cerium is a long-standing puzzle <cit.>. When a crystal of Cerium is under a pressure of 15,000 atmospheres, it undergoes a volume collapse of approximately 17% while preserving the face-centered cubic crystal structure. This transformation, called the γ→α transition, has baffled physicists since its discovery, and several leading theories have been proposed for its explanation, the most prominent of which are the Mott transition scenario <cit.> and the Kondo volume collapse (KVC) scenario <cit.>. The Mott and KVC scenarios are competing paradigms, although perhaps not as different and distinct as previously thought <cit.>. In the KVC scenario, the 4f electrons of Cerium are assumed to be localized in both phases. In the small volume α phase, the spd electrons strongly screen the local moments of the f electrons, thus rendering the α phase a Pauli paramagnet. In the large volume γ phase, the local moments of the f electrons persist to much lower temperatures than in the α phase, indicating that the Kondo scale T_K in the γ phase is much smaller than that of the α phase, which is consistent with the experimental observations <cit.>. In the Mott transition scenario, for which the Hubbard model is a good description, the density of states (DOS) of the f electrons changes from being metallic (no gap at the Fermi level) in the α phase to insulating (with a gap at the Fermi level) in the γ phase <cit.>. This localization-delocalization transition of the 4f electrons, which is a metal-insulator Mott transition, is driven by the increase of the intersite hopping amplitudes of the f electrons when the unit cell volume of Cerium decreases. While there are extensive studies of the cerium volume collapse, there is no consensus on the mechanism of this transition. An overview of the cerium volume collapse can be found in Ref. <cit.>. Most of the previously proposed models consider exclusively the interplay among the spd electrons and the f electrons, whereas possible effects from the phonons are completely ignored.
A series of recent experimental results have indicated that the electron-phonon interaction may also play an important role in the γ→α transition <cit.>. Jeong et al. <cit.> estimated that about half of the entropy change during the transition is due to lattice vibrations. Later, Krisch et al. <cit.> showed that the significant changes in the phonon dispersion across the γ→α transition provide strong evidence for the importance of the lattice degrees of freedom. Although the precise value of the lattice vibrational entropy varies between experiments, they do agree that a significant fraction of the total entropy change during the transition is due to lattice vibrations. This calls for a revision of the previous models to incorporate the contribution from the electron-phonon coupling. Even if we focus exclusively on the electronic contribution, the full model should be more complicated than the simple single band periodic Anderson or Hubbard models. Recent studies using density functional theory have confirmed that the f-spd hybridization is important <cit.>, while the smaller spin-orbit coupling and the hybridization among the f orbitals are two other contributions which should be taken into account for a quantitative description <cit.>. Parameters extracted from ab-initio band structure calculations have also been used as inputs for many-body methods which reveal strong coupling effects that are presumably absent from the DFT, such as Kondo screening and the Mott transition. These methods include dynamical mean field theory (DMFT) <cit.>, variational Monte Carlo <cit.>, and Gutzwiller projection approaches <cit.>. The constrained random phase approximation has also been applied to estimate the coupling terms <cit.>. Unfortunately, there is no well developed method to incorporate the electron-phonon interaction into these frameworks. Moreover, there is no appropriate formalism to include dynamical phonons with non-trivial dispersions within the DMFT, as this involves an effective non-local electron-electron interaction. A quantitative study of the electron-phonon coupling with an accuracy on par with that of a computation with only electron-electron interactions is the ultimate goal, but it is not practically feasible at present. In light of the difficulties in modeling the electron-phonon coupling, we sought a model which is simple enough to handle computationally but nevertheless captures the first order transition. Electron-phonon coupling is then incorporated into the model and its effects are studied in some detail. To attain the above goal we consider phonons within the Kondo volume collapse scenario. This is in line with the original approach by Allen and Martin <cit.>, who studied the electronic and the bulk modulus contributions to the free energy. In their study, the electronic part is modeled by a single impurity model. In this work, we use the periodic Anderson model with Holstein phonons coupled to the conduction band as our starting point <cit.>. We solve this model using the DMFT approximation with the continuous time quantum Monte Carlo as our impurity solver. We then use the maximum entropy method to extract the density of states (DOS) of the conduction electrons, and study the evolution of the DOS with varying parameters. Our main finding is that the electron-phonon interaction can significantly enhance the volume collapse, and associated with this collapse, a metal-insulator transition emerges.
Although we start our model with the Kondo scenario in mind, a Mott metal-insulator transition that manifests itself through the formation of a gap is observed. Thus, our work may pave the way for the unification of the competing Kondo and Mott scenarios, a unification that is already lurking in some previous works <cit.>. The structure of the paper is as follows. In section II, we briefly describe our model Hamiltonian and the methods we use to solve it. In section III, we present our results for the pressure-volume curves and the behavior of the DOS across the phase transition, and discuss the relationship between the volume collapse and the Mott transition. We present our conclusion in section IV. § MODEL AND METHOD In order to study the influence of the phonons on the Cerium volume collapse, we employ the periodic Anderson model with electron-phonon interaction, which is

Ĥ = Ĥ_0 + Ĥ_I

Ĥ_0 = -t∑_⟨ i, j ⟩, σ (c_i,σ^† c_j,σ + c_j,σ^† c_i,σ) + ϵ_f ∑_i, σ f_i,σ^† f_i,σ + V ∑_i,σ (c_i,σ^† f_i,σ + f_i,σ^† c_i,σ) + ∑_i (P_i^2/2m + (1/2) k X_i^2)

Ĥ_I = U ∑_i n_i,↑^f n_i,↓^f + g ∑_i,σ n_i,σ^c X_i

where c_i,σ^†, c_i,σ (f_i,σ^†, f_i,σ) creates and destroys a c (f) electron of spin σ at lattice site i, respectively. P_i and X_i are the phonon momentum and displacement operators. Here, we have used dispersionless Einstein phonons with frequency Ω_0 = √(k/m). The parameter g measures the electron-phonon interaction strength, U is the Hubbard repulsion between localized f-electrons, and V characterizes the hybridization between conduction- and f-electrons. From the parameters g and k, we construct the effective electron-phonon interaction strength, U_eff = g^2/2k. Throughout this paper, and to be consistent with the experimental results, we have set Ω_0 = 0.01 <cit.> unless otherwise specified. To preserve the large temperature metallic phase, we fix the total electronic density at n=1.8 by tuning the chemical potential at each iteration of the DMFT cycle. We also choose an appropriate value for ϵ_f so that n_f=1 when T=0.1 to ensure that a local moment is present at high temperature. We set U = 4; however, the precise value of U is not crucial, as we have found qualitatively similar results for other values of U. Since the Hubbard repulsion U (= 4) between f electrons is significantly larger than the values of ϵ_f (≈ -0.1) we use, the f electron filling number is quite insensitive to variations of ϵ_f. For most of the parameters that we have scanned, we simply set ϵ_f = -0.15 and verify that the f filling is almost always nearly 1.0 when β = 10. We use a hypercubic lattice with a Gaussian bare DOS, and consider its bandwidth as our unit of energy. We set this unit to be the Fermi energy ϵ_F of Cerium, which is estimated to be 0.52 eV <cit.>. We propose this simplified model as a first attempt to incorporate the electron-phonon interaction into the study of the Cerium volume collapse. We neglect the Hubbard repulsion in the c-band because it is much smaller than the Hubbard repulsion in the 4f band. Since Amadon and Gerossier <cit.> found that the value of the inter-site hopping between 4f electrons is less than a third of the value of the hybridization between 4f and conduction electrons, we also neglect the 4f-electron inter-site hopping in our model Hamiltonian. Our goal is to construct a minimal model which displays Kondo physics and to investigate the effect of introducing the electron-phonon coupling in the model.
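For orientation, the derived quantities above can be collected in a few lines (a sketch added here; the function name and defaults are ours, not the paper's):

    import numpy as np

    def holstein_pam_params(g, k, m, U=4.0, eps_f=-0.15, n_total=1.8):
        """Collect model parameters; U_eff = g^2/(2k), Omega_0 = sqrt(k/m)."""
        return {"U_eff": g**2 / (2.0 * k),
                "Omega_0": np.sqrt(k / m),
                "U": U, "eps_f": eps_f, "n_total": n_total}

    # Example: k = 1 and m = 1/Omega_0^2 give Omega_0 = 0.01 (bandwidth units);
    # g = sqrt(2) then yields U_eff = 1, the coupling used in the results below.
    print(holstein_pam_params(g=np.sqrt(2.0), k=1.0, m=1.0 / 0.01**2))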
While this model may not provide a quantitative description of Cerium, it is interesting by itself as it represents an important class of many-body problems which include localized levels coupled to correlated conduction bands <cit.>. This kind of model has not been extensively studied in the literature, largely due to its complexity and its associated computational challenges. Our model is particularly challenging due to the fact that the correlations in the conduction band due to electron-phonon coupling are retarded <cit.>. The competition and cooperation among the Kondo effect, the Ruderman-Kittel-Kasuya-Yosida interaction, and the correlation effect in the conduction band produce a very rich phase diagram <cit.>. With the goal of understanding the volume collapse transition, in this work we focus exclusively on the Kondo regime and investigate the effect of the electron-phonon coupling in the conduction band. In particular, we calculate the total free energy to construct the phase diagram and demonstrate the first order phase transition. We find that even though our electronic model agrees with the Kondo volume collapse scenario in the absence of phonon coupling, the introduction of phonons induces several interesting phenomena reminiscent of the Mott transition scenario. We solve this model using the dynamical mean field theory <cit.>, with the continuous time quantum Monte Carlo (CT-QMC) <cit.> as our impurity solver. Since we are using a hypercubic lattice in our DMFT approximation, the bare electron DOS is Gaussian. Because cerium does not display a sharp feature in the DOS near the Fermi surface, such as a flat band or a van Hove singularity, it is unlikely that the choice of DOS will affect our goal of investigating the qualitative effect of electron-phonon coupling. Finally, we use the maximum entropy method <cit.> to extract the spectral functions from the Monte Carlo simulation data. § RESULTS In this section we first draw the pressure-volume (p-𝒱) phase diagram by calculating the total free energy, including the contribution from the bulk modulus. We find that the first order phase transition emerges when the electron-phonon interaction is large. For the same set of model parameters, we do not find a first order transition, even at much lower temperatures, in the absence of electron-phonon coupling. After constructing the phase diagram, we investigate the phase transition in more detail by calculating the evolution of the spectral function of the conduction band across the transition. Although we employ the periodic Anderson model, the paradigm for the KVC scenario, our results display features of a Mott transition in the conduction band due to the electron-phonon coupling. In the parameter regime where V is small, as U_eff increases, the DOS of the c-electrons gradually develops a gap at the Fermi level with a width proportional to U_eff. The gap-opening in the DOS, and its proportionality to U_eff, mimics that of the Mott transition in the Hubbard model. The gap does not occur when V dominates over U_eff, which compels us to argue that there is a competition between V and U_eff in our model <cit.>.§.§ Pressure-Volume Diagram and the Bulk Modulus Since the γ→α transition is first order, the pressure versus volume curve develops a kink as the temperature drops below the transition point. To properly account for the static lattice contribution, we introduce a volume and temperature dependent bulk modulus term into the p-𝒱 relation.
The total pressure thus contains two parts: the pressure due to the electrons, which we denote as p_e, and the pressure due to the bulk modulus term, which we denote as p_B. We calculate p_e from the electronic free energy by the relation p_e = -∂ F/∂𝒱, and p_B by integrating the bulk modulus B = -𝒱 ∂ p_B/∂𝒱, where 𝒱 is the volume. We calculate the electronic free energy using the formula F_e(T = T_0, V = V_0, N) = ∫_0^N μ dN + F(T_0, V_0, N=0). Here, we choose not to use the entropy formula employed in Ref. <cit.> because the statistical error in our results becomes large at high temperatures. When we plot the free energy versus hybridization V, we notice that the curve continuously evolves from a nearly flat plateau at small V to a nearly straight line with a negative slope at large V, as shown in Fig. <ref>. For the parameter regime that we have scanned for this model, we did not find a point where the curvature of the electronic free energy versus volume curve changes sign, which would be an indicator of the emergence of a first order phase transition. It is this finding that motivated us to introduce the free energy due to the bulk modulus to fully describe the γ→α transition within our model, as discussed in detail later. From the shapes of the curves in Fig. <ref>, we conjecture that the derivative of the free energy with respect to V can be approximately fitted by the function -k(1 + tanh a(V-c)), with k, a, c being positive parameters. Integration of the derivative gives a function to which we can fit our free energy data: F_e(V) = -k(V - c + (1/a) log[2 cosh a(V - c)]) + d. By fitting our numerical data to Eq. (<ref>), we obtain the values of the parameters k, a, c, d. Since the free energy depends on the temperature, these parameters also depend on it. Now, we can obtain the volume dependence of the free energy using the empirical relationship between the hybridization V and the volume 𝒱, V = b/𝒱^2 <cit.>. We calculate the electronic pressure as p_e = -(2kb/𝒱^3)(1 + tanh a(b/𝒱^2 - c)). The experimental value of b can be estimated from the relation V = b/𝒱^2 and J ∝ V^2/U <cit.>, where J is the Kondo exchange. The experimental values of J range between 0.2-0.3 eV in the α phase and 0.05-0.06 eV in the γ phase <cit.>. From the values of J and the relation between J and the volume, we can estimate the value of b to be between 0.89 and 1.55 in our unit system. The second contribution to the total pressure comes from the bulk modulus. From the experimental results of Ref. <cit.>, we assume that the bulk modulus depends upon the volume as B = B_0(T) e^α(1-𝒱/𝒱_0), and upon the temperature through the relation <cit.> B_0(T) = B_0(1 + e^-T_0/T), where B_0, T_0, 𝒱_0 and α are material-dependent parameters. Integration of the bulk modulus gives us the pressure p_B as p_B(𝒱, T) = p_0(T) - B_0(T)∫_1^𝒱/𝒱_0 dx e^α(1-x)/x, where p_0(T) is an arbitrary constant that may depend on the temperature. Throughout the paper, we have set p_0 = 0. Adding the bulk modulus and the electronic pressures yields a p-𝒱 graph which exhibits a kink structure; a numerical sketch of this construction is given below. Fig. <ref> shows the pressure versus volume diagrams for different values of U_eff. When U_eff = 1, as we lower the temperature, a kink structure begins to develop. We identify β = 6 as the critical temperature where the kink structure begins to emerge. Experimentally, the ratio between the γ→α transition critical temperature T_c and the temperature T' where the volume collapse of 17% occurs is T_c/T' = 460/334 <cit.>.
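Returning briefly to the pressure construction referenced above, the two contributions can be combined numerically as follows (a minimal sketch; the fit parameters k_fit, a, c are placeholders, since their temperature-dependent fitted values are not quoted in the text):

    import numpy as np
    from scipy.integrate import quad

    b, B0, T0, alpha, p0 = 1.18, 12.47, 0.1, 4.225, 0.0   # values quoted in the text
    k_fit, a, c = 1.0, 5.0, 1.2                            # placeholder fit parameters

    def p_e(vol):
        """Electronic pressure p_e = -dF_e/dV, with vol in units of V_0."""
        return -2.0 * k_fit * b / vol**3 * (1.0 + np.tanh(a * (b / vol**2 - c)))

    def p_B(vol, T):
        """Bulk-modulus pressure from B = -V dp_B/dV, B = B_0(T) e^{alpha(1-x)}."""
        B0T = B0 * (1.0 + np.exp(-T0 / T))
        integral, _ = quad(lambda x: np.exp(alpha * (1.0 - x)) / x, 1.0, vol)
        return p0 - B0T * integral

    def p_total(vol, T):
        return p_e(vol) + p_B(vol, T)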
Using the same ratio, we can identify T' in our model to be approximately 1/8. From the iso-thermal p-𝒱 diagrams, we find that the volume collapse in our model at β = 8 is about 30%, a result that is in reasonable agreement with the experiments, considering that we are using a highly simplified model. From the Maxwell construction, we can read off from the β = 8 iso-thermal line the volumes for the γ and α phases, with 𝒱_α = 0.78 and 𝒱_γ = 1.13. We further estimate the corresponding hybridization values for these two phases to be V_α = 1.93 and V_γ = 0.92. On the other hand, when U_eff = 0 (inset of Fig. <ref>), even though we have used the same set of parameters, the kink structure that is the indicator of the emergence of a first order phase transition is absent. Note that the small upturn in the p-𝒱 diagram at large volume can also be eliminated once we consider the volume dependence of the hopping term t in the conduction band. Similar results can be obtained with many different combinations of parameters. In the data displayed in Fig. <ref> we set b = 1.18, p_0 = 0, T_0 = 0.1, B_0 = 12.47, and α = 4.225. The value of b is within our estimated range. A value of T_0 = 0.1 is approximately 600 K within our units, a value comparable to the critical point temperature of the transition. Following Ref. <cit.>, we use B_0 = 28 GPa, 𝒱_0 = 36 Å^3 as the bulk modulus and unit cell volume for Cerium in the γ phase. Once we use the Fermi scale as our unit of energy and set 𝒱_0 as our unit of volume, the unit of pressure becomes ϵ_F/𝒱_0 = 2.3 GPa. This justifies our usage of the value B_0 = 12.47 as our bare bulk modulus. Finally, the experimental values of α range between 2 and 5 for most bulk pure metals <cit.>. In summary, for a large range of parameters, if the electron-phonon coupling (U_eff = 1) is finite, there is a clear first order transition in the p-𝒱 diagram with a critical temperature around 1/6. However, for the same set of parameters, when the electron-phonon interaction is absent (U_eff = 0), the transition is not seen for temperatures down to 1/20. The different behavior of the p-𝒱 diagram for U_eff = 0 and U_eff = 1 implies that the electron-phonon interaction enhances the γ→α volume collapse transition.§.§ Spectral Functions of the Mott Metal-Insulator Transition The phonon-enhanced first order phase transition can be interpreted as a Mott metal-insulator transition. We can understand this by studying the evolution of the spectral functions with respect to the variation of the relative strengths of V and U_eff. When the hybridization is small, the electron-phonon interaction can significantly modify the density of states (DOS) of the conduction electrons. For V = 0.1, as U_eff increases from 0 to 1.1, the DOS changes from nearly Gaussian to a DOS gapped at the Fermi energy, as shown in Fig. <ref>. The gap in the conduction electron spectral function is not an artifact of the maximum entropy method, since we can also see the effect of U_eff on the gap at the Fermi energy by observing the behavior of the local c-electron Green's function G_c(τ) (inset of Fig. <ref>), which is directly measured in the Monte Carlo simulation. When there is no gap, the value of G_c(τ = β/2) is finite. However, when there is a gap at ω = 0, the value of G_c(τ = β/2) decays to zero exponentially. Moreover, the wider the gap, the more rapid the decay.
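This diagnostic follows from the standard spectral representation G(τ) = -∫ dω A(ω) e^{-τω}/(1+e^{-βω}), which at τ = β/2 weights the DOS by 1/[2cosh(βω/2)]; a toy numerical illustration (added here, with an artificial DOS rather than the paper's data):

    # Why a gap at the Fermi level makes G(tau = beta/2) exponentially small.
    import numpy as np

    beta = 10.0
    w = np.linspace(-6, 6, 4001)
    dw = w[1] - w[0]

    def g_half_beta(A):
        return -np.sum(A / (2.0 * np.cosh(beta * w / 2.0))) * dw

    gapless = np.exp(-w**2)                      # finite DOS at w = 0
    gapped = np.exp(-w**2) * (np.abs(w) > 0.5)   # gap of half-width 0.5
    for A in (gapless, gapped):
        A = A / (np.sum(A) * dw)                 # normalize the DOS
        print(g_half_beta(A))
    # The gapped DOS yields |G(beta/2)| suppressed by roughly exp(-beta*gap/2).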
When we plot G_c(τ) for different values of U_eff, we see clearly that with increasing U_eff, G_c(τ) decreases increasingly rapidly as τ approaches β/2. The magnitude of the phonon frequency Ω_0 and the c-electron filling have profound influences on the nature of the Mott transition due to U_eff. When Ω_0 is small, which is the case we are studying, the transition seems to be continuous or at most weakly discontinuous <cit.>. The reason is that when Ω_0 = 0, we can integrate out the Holstein phonons in the conduction band to obtain a generalized Falicov-Kimball model (a Falicov-Kimball model with c-f hybridization) in which the conduction electrons always exhibit non-Fermi liquid behavior for any non-trivial filling number <cit.>. The Mott transition from a non-Fermi liquid metal to an insulator is continuous due to the absence of the quasi-particle peak at the Fermi energy in the conduction electron DOS <cit.>. Notice that here we are considering only the transition in the electron-phonon system with the model Hamiltonian given by Eq. <ref>. By including the static lattice contribution, this transition becomes first order, as we discussed earlier. A small phonon frequency is chosen in this study because of the low Debye temperature observed in experiments <cit.>. The low phonon frequency greatly reduces the pairing instability. With this phonon frequency and a small enough hybridization between the conduction and the localized 4f electrons, we find that the charge density wave (CDW) instability always dominates, which is consistent with previous results using the Holstein model and DMFT <cit.>. The opening of the Mott gap at the Fermi level is present only when the hybridization between the conduction band and the localized electrons is weak compared with the electron-phonon coupling. Fig. <ref> shows that when the hybridization is strong, the opening of the Mott gap is prohibited. In this parameter regime, the introduction of the electron-phonon interaction has little effect on the behavior of the conduction electron DOS. Since the filling number of the c-electrons is set to 0.8, the hybridization cannot induce a gap at the Fermi energy <cit.>, and thus the DOS is always finite irrespective of whether there is electron-phonon interaction or not. Consequently, the c-electrons are always metallic in the large V regime. The absence of the Mott gap in the large V regime signals that the electron-phonon interaction effect is suppressed by the hybridization. Since the electron charge susceptibility is positively correlated with U_eff, the suppression of the electron-phonon coupling effect is also reflected in the decrease of the charge susceptibility as V increases for fixed β (not shown). As V increases from 0.1 to 1.8, the localized f electron moments that are present at small V get screened by the conduction electrons when V is large <cit.>. The screening of the localized f electron moments in the large V regime is a signature of the Kondo effect, which we can also observe by studying the f electron spectral functions. Fig. <ref> shows the f electron spectral functions for small V (γ phase) and large V (α phase) at β = 10. As we can see in the figure, when V is small, the Kondo temperature T_K is much lower than the temperature corresponding to β = 10, and the Kondo resonance is nowhere to be found in the f electron spectral curves. However, as V increases, the Kondo temperature T_K also gets larger, which we can see from the Kondo resonances that emerge at large V.
The absence of the Kondo resonance for small V and its emergence at large V is consistent with the experimental observation that T_K in the γ phase is much smaller than T_K in the α phase. We also notice that the f electron spectral curve develops a kink at ω = 0 when V = 1.8. The strength of this kink is proportional to the effective electron-phonon interaction strength U_eff, and thus we conjecture that the competition between the c-f hybridization V and U_eff could lead to some exotic behavior in the f electron spectral functions. The nature of this kink is still under study. At the same time, when we scan V from V = 0.1 to V = 1.8, the conduction electrons make a transition from insulator to metal. The c-electron spectral functions for β = 10 and U_eff = 1 with varying values of V are shown in Fig. <ref>, where a gap at the Fermi energy is clearly visible for V < 0.6. When V > 0.6, the Mott gap evolves into a depression which disappears completely for V > 1.2. Therefore the Mott metal-insulator transition is present only when the electron-phonon interaction is strong enough. § CONCLUSION In brief, using the periodic Anderson model with phonons coupled to the conduction band, and by introducing a volume and temperature dependent bulk modulus contribution to the total pressure, we find a first order phase transition where the Cerium volume collapses. This transition is enhanced by the presence of the electron-phonon interaction; with the other parameters being fixed, we do not find the first order transition in the absence of the electron-phonon interaction. Our findings support recent experimental results showing that phonons play an important role in the volume collapse transition in Cerium. Moreover, we find that our model, although originally conceived with the Kondo volume collapse scenario in mind, exhibits interesting features of the metal to insulator transition; e.g., a gap proportional to the effective electron-phonon interaction, U_eff, opens at the Fermi energy at low temperature. An obvious improvement over the present work is to include other contributions in the model. The spin-orbit coupling and the hybridization in the f-band can be considered in the DMFT calculation; however, a more realistic electron-phonon coupling is currently beyond the formalism of DMFT, since this requires a numerical method which can handle long range coupling properly. Acknowledgements. This material is based upon work supported by the National Science Foundation under the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents. MJ was also supported by the NSF Materials Theory grant DMR1728457. Computer support is provided by the Louisiana Optical Network Initiative, and by HPC@LSU computing. We would like to thank F.F. Assad for the continuous time quantum Monte Carlo programs that he shared with us. M. J. designed and implemented the maximum entropy method algorithm for extracting real-frequency data from imaginary frequency results.
http://arxiv.org/abs/1704.08684v6
{ "authors": [ "Enzhi Li", "Shuxiang Yang", "Peng Zhang", "Ka-Ming Tam", "Mark Jarrell", "Juana Moreno" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170427175451", "title": "Periodic Anderson model with Holstein phonons for the description of the Cerium volume collapse" }
Fractional Generalized KYP Lemma for Fractional Order System within Finite Frequency Range Xiaogang Zhu, and Junguo Lu Junguo Lu is with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China. ================================================================================================================================================================================== The celebrated GKYP lemma is widely used for integer-order control systems. However, when it comes to fractional order systems, no such tool exists to solve problems. This paper proves the FGKYP, which can be used in the analysis of problems in fractional order systems. The H_∞ and L_∞ norms of fractional order systems are analysed based on the FGKYP. § INTRODUCTION One fundamental research approach in control systems is the frequency-domain method. From the frequency-domain viewpoint, the objective of designing a control system is to find an appropriate controller which makes the system satisfy certain frequency response specifications. The celebrated Kalman-Yakubovich-Popov (KYP) lemma bridges the frequency-domain and time-domain methods. The KYP lemma originates from Popov's criterion <cit.>, which gives a frequency condition for the stability of a feedback system; it was then proved by Kalman <cit.> and Yakubovich <cit.> that Popov's frequency condition is equivalent to the existence of a Lyapunov function of a certain simple form. It has been regarded as one of the most basic tools in control systems because it allows one to check a single matrix condition, a linear matrix inequality (LMI), instead of checking a frequency domain inequality (FDI) over the entire frequency range. The KYP lemma <cit.> states that, given matrices A, B, and a Hermitian matrix M, for ∀ω∈ℝ∪{∞}, the following inequality [ (jω I-A)^-1B; I ]^* M [ (jω I-A)^-1B; I ] < 0 holds if and only if there exists a Hermitian matrix P such that [ A, B; I, 0 ]^* [ 0, P; P, 0 ][ A, B; I, 0 ]+M<0 However, the KYP lemma has limitations when applied to practical control problems. Generally, practical control problems require systems to satisfy different performance indices over different frequency ranges, so the KYP lemma is not compatible with this practical requirement. Iwasaki <cit.> points out that most practical control problems, including digital filter design, sensitivity shaping, open-loop shaping and structure/control design integration, only need to be analysed over certain frequency ranges. This is because many practical signals concentrate their energy in one or a few finite frequency ranges. For example, most of the energy of seismic waves is concentrated in the frequency range 0.3-8 Hz <cit.>. In order to analyse control problems within a finite frequency range, classic methods can be divided roughly into three categories <cit.>: classical control theory, frequency-weighted methods <cit.>, and methods analysing control problems within a finite frequency range directly. Classical control theory, including PID (proportional-integral-derivative) control and the root locus, mainly focuses on the poles and zeros. But it mainly solves problems for linear SISO (single input single output) systems and is mostly dependent on experience. The essence of the frequency-weighted method is that it transforms the original system, which has a control problem within a finite frequency range, into a more complex system, which has a control problem over the infinite frequency range.
However, this method does not solve control problems within a finite frequency range directly, and it depends mostly on experience. The third approach, analysing control problems within a finite frequency range directly, is now the major method for such problems. It mainly includes Gramian-based methods <cit.> and the generalized KYP lemma. Iwasaki and Hara developed the KYP lemma into the generalized KYP (GKYP) lemma in 2005 <cit.>. The GKYP lemma considers finite frequency intervals, which is flexible for various frequency ranges. With the development of convex optimization, the LMI has been successfully and widely applied in control systems <cit.>. The GKYP lemma plays a very important role in transforming control problems into convex optimization problems. Even though there exist numerous studies utilizing the GKYP lemma, most of them are confined to integer-order systems <cit.>. Therefore, the main purpose of this paper is to generalize the GKYP lemma: we will prove the fractional generalized KYP (FGKYP) lemma, which can be utilized for fractional order systems (FOS). To the best of our knowledge, there exists no research on the proof of the FGKYP, but some papers base their research on the GKYP. Liang et al. were the first to use the GKYP to solve the H_∞ problem of fractional order systems <cit.>. However, they only partly prove that the GKYP can be used for fractional order systems, and they give a sufficient condition for the H_∞ norm of FOS with fractional order in (0,1). Then, Sabatier et al. improved the LMI condition, reducing the number of variables <cit.>. Most recently, the H_∞ output feedback control problem of linear time-invariant FOS over a finite frequency range was studied by Wang et al., based on the GKYP <cit.>, but they utilize the GKYP directly. Similarly to the integer-order case <cit.>, it is significant to go a step further in the research on the FGKYP, because the FGKYP can be used conveniently to solve many kinds of problems in fractional order systems. In this paper, we utilize the FGKYP to solve the H_∞ and L_∞ problems of FOS. Sabatier et al. were the first to compute the H_∞ norm of FOS, and they used several different methods to do so <cit.>. The H_∞ norm of FOS is also used in other problems, such as state feedback controller design <cit.>, model matching <cit.> and model reduction <cit.>. This paper is organized as follows. In section II, the FOS model and the problem are stated. In section III, the S-procedure is introduced to bridge between matrix inequalities and frequency ranges. In section IV, the FGKYP for the L_∞ norm of FOS is proved and the L_∞ norm of FOS with finite frequency is studied. In section V, the FGKYP for the H_∞ norm of FOS is proved and the H_∞ norm of FOS is studied. In section VI, numerical examples are given. Finally, in section VII, a conclusion is given. ν is the order of the fractional order system (FOS), and φ = (π/2)(ν-1). For a matrix A, its transpose and complex conjugate transpose are denoted by A^T and A^∗, respectively. For matrices A and B, A⊗B denotes the Kronecker product. The complex conjugate of x is denoted by x̄. ℝ and ℂ denote the sets of real and complex numbers, respectively. ℝ^+={x: x∈ℝ, x≥0}. For s∈ℂ, Re(s) denotes the real part of s and Im(s) denotes the imaginary part of s. The convex hull and the interior of a set 𝒳 are denoted by co(𝒳) and int(𝒳), respectively. ℋ_n stands for the set of n×n Hermitian matrices. For a matrix X∈ℋ_n, the inequalities X>(≥)0 and X<(≤)0 denote positive (semi)definiteness and negative (semi)definiteness, respectively. The set 𝒥 denotes the matrices J=J^∗≤0. Sym(A) stands for A+A^∗. The null space of X is denoted by X_⊥, i.e., XX_⊥=0_n.
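A remark added here for clarity (this identity is standard and is used implicitly in several proofs below): with φ = (π/2)(ν-1),

\[
j^{\nu} = e^{j\nu\pi/2} = e^{j\pi/2}\, e^{j(\nu-1)\pi/2} = j\, e^{j\varphi},
\qquad\text{so}\qquad
e^{-j\varphi}\,(j\omega)^{\nu} = j\,\omega^{\nu},
\]

i.e. e^{-jφ}θ is purely imaginary for θ = (jω)^ν; this is what makes substitutions such as (1-U)/(1+U) = jω^ν in the lemmas below work.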
For A∈ℂ^n×m and B∈ℋ_n+m, a function ρ: ℂ^n×m×ℋ_n+m→ℋ_m is defined by ρ(A,B)≜[ A; I_m ]^∗ B [ A; I_m ] § PRELIMINARIES §.§ Fractional Order System (FOS) Model In this paper, the FOS is considered as follows: {[ D^ν x(t)=Ax(t)+Bu(t); y(t)=Cx(t)+Du(t) ]. where x(t)∈ℝ^n is the pseudo state vector, u(t)∈ℝ^n_u is the control vector, y(t)∈ℝ^n_y is the sensed output, and ν is the order of the fractional order system with 0<ν<2. A, B, C, D are constant real matrices. D^ν is the fractional differentiation operator of order ν. If the FOS is relaxed at t=0, the transfer function matrix between u(t) and y(t) is G(s)=C(s^ν I-A)^-1B+D §.§ Problem Statement Motivated by finite frequency problems such as digital filter design and sensitivity shaping, Iwasaki and Hara developed the KYP lemma into the GKYP lemma <cit.>. The KYP lemma can check an FDI over the infinite frequency range via an LMI, whereas the GKYP lemma can check a finite-frequency FDI via an LMI. Given matrices A, B, a Hermitian matrix Π, and ∀ω∈ℝ∪{∞}, the infinite FDI is described as [ (jω I-A)^-1B; I ]^∗ Π [ (jω I-A)^-1B; I ]<0 When it comes to FOS, there also exist problems which should be solved over the infinite frequency range. Given matrices A, B, a Hermitian matrix Π, and ∀ω∈ℝ∪{∞}, the infinite FDI of FOS is described as [ ((jω)^ν I-A)^-1B; I ]^∗ Π [ ((jω)^ν I-A)^-1B; I ]<0 where ν is the fractional order of the system. As for FOS, we also want to check the finite FDI via an LMI. § S-PROCEDURE AND FREQUENCY RANGE The S-procedure is stated as follows. Given Π, F∈ℋ_q, we get the equivalence: η^∗Πη≤0 ∀η∈ℂ^q such that η^∗Fη≥0 ⇔ ∃δ∈ℝ such that δ≥0, Π+δF≤0, where the regularity condition, F≰0, is assumed. The strict inequality version is η^∗Πη<0 ∀η∈ℂ^q such that η^∗Fη≥0 ⇔ ∃δ∈ℝ such that δ≥0, Π+δF<0 The purpose of the S-procedure is to replace the former condition by the latter, because the latter is easier to verify. For FDIs, the S-procedure bridges between the matrix inequality and the frequency range. In order to generalize the above S-procedure, paper <cit.> rewrites them with different notation: tr(Π𝒢_1)≤0⇔(Π+ℱ)∩𝒥≠∅ tr(Π𝒢_1)<0⇔(Π+ℱ)∩int(𝒥)≠∅ where [ ℱ≜{δF:δ∈ℝ, δ≥0, F∈ℋ_q}; 𝒢(ℱ)≜{G∈ℋ_q:G≠0, G≥0, tr(ℱG)≥0}; 𝒢_1(ℱ)≜{G∈𝒢(ℱ):rank(G)=1} ] Paper <cit.> has already proved the lossless condition for the S-procedure, as follows. First, the meanings of admissible, regular and rank-one separable are given. A set ℱ⊂ℋ_q is said to be * admissible if it is a nonempty closed convex cone and int(𝒥)∩ℱ=∅; * regular if 𝒥∩ℱ={0}; * rank-one separable if 𝒢=co(𝒢_1). <cit.> Let an admissible set ℱ⊂ℋ_q be given and define 𝒢_1 by (<ref>). Then, the strict S-procedure is lossless, i.e. (<ref>) holds for an arbitrary Π∈ℋ_q, if and only if ℱ is rank-one separable. Moreover, assuming that ℱ is regular, the nonstrict S-procedure is lossless, i.e. (<ref>) holds for an arbitrary Π∈ℋ_q, if and only if ℱ is rank-one separable. This lemma shows that when we choose an appropriate ℱ which is rank-one separable, the S-procedure will be lossless regardless of the choice of Π. Paper <cit.> also gives some examples of admissible, regular and rank-one separable sets, which are readily proved: ℱ_X≜{[ 0, X; X, 0 ]:X∈ℋ_n} ℱ_XY≜{[ -Y, X; X, Y ]:X,Y∈ℋ_n, Y≥0} <cit.> Let ℱ⊂ℋ_m be a rank-one separable set. Then the set N^*ℱN+𝒫 is rank-one separable for any matrix N∈ℂ^m×n and any subset 𝒫⊂ℋ_n of positive-semidefinite matrices containing the origin. In general, a frequency range can be visualized as a curve (or curves) on the complex plane. Paper <cit.> defines a curve as follows.
A curve on the complex plane is a collection of infinitely many points θ(t)∈ℂ continuously parametrized by t for t_0≤t≤t_f, where t_0,t_f∈ℝ∪{±∞} and t_0<t_f. A set of complex numbers Θ⊆ℂ is said to represent a curve (or curves) if it is a union of a finite number of curve(s). With Δ,Σ∈ℋ_2 being given matrices, Θ is defined as: Θ(Δ,Σ)≜{θ∈ℂ | ρ(θ,Δ)=0, ρ(θ,Σ)≥0} Note that the set Θ(Δ,Σ) is the intersection of Θ(Δ,0) and Θ(0,Σ). It can readily be verified that the set Θ(Δ,0) represents a curve if and only if det(Δ)<0. <cit.> Consider the set Θ(Δ,Σ) in (<ref>) and suppose it represents curves on the complex plane. Then the set Θ(Δ,Σ) is unbounded if and only if Δ_11=0 and Σ_11≥0. Let Δ,Σ∈ℋ_2 be given. Suppose det(Δ)<0; then there exists a common congruence transformation such that Δ=T^∗Δ_0T, Σ=T^∗Σ_0T, Δ_0≜[ 0, e^jφ; e^-jφ, 0 ], Σ_0≜[ α, βe^jφ; βe^-jφ, γ ] where α,β,γ∈ℝ and T∈ℂ^2×2. In particular, α and γ can be ordered to satisfy α≤γ. Before we prove this lemma, we prove the following lemma first. Let Y∈ℋ_2 be given. Then Y admits the following factorization: Y=L^∗[ α, βe^jφ; βe^-jφ, γ ]L where α,β,γ∈ℝ and L∈ℒ with ℒ≜{Q^∗ZQ:Z∈ℝ^2×2, det(Z)=1}, Q≜[ 1, 0; 0, je^jφ ]. In particular, α and γ are the eigenvalues of the real matrix Y_0≜[ x, y; y, z ]. Write Y as Y=[ x, (β+jy)e^jφ; (β-jy)e^-jφ, z ] where β,x,y,z∈ℝ. Then Y_0=Q(Y-[ 0, βe^jφ; βe^-jφ, 0 ])Q^∗ Since Y_0 is real symmetric, the spectral factorization of Y_0 gives Y_0=Z^T[ α, 0; 0, γ ]Z where the columns of Z^T are eigenvectors and α and γ are eigenvalues. Moreover, Z can be chosen to satisfy det(Z)=1. Then L≜Q^∗ZQ belongs to ℒ. Now, from (<ref>) we get Y = Q^-1Y_0Q^∗-1+β[ 0, e^jφ; e^-jφ, 0 ] = L^∗[ α, 0; 0, γ ]L+β[ 0, e^jφ; e^-jφ, 0 ] Finally, it can readily be verified that L^∗[ 0, e^jφ; e^-jφ, 0 ]L=[ 0, e^jφ; e^-jφ, 0 ] holds for any L∈ℒ. Therefore, we obtain the result Y=L^∗[ α, βe^jφ; βe^-jφ, γ ]L Since det(Δ)<0, there exists a nonsingular matrix K such that Δ=K^∗[ 0, e^jφ; e^-jφ, 0 ]K holds. Let Y≜K^∗-1ΣK^-1; then we get Σ=K^∗L^∗[ α, βe^jφ; βe^-jφ, γ ]LK Therefore, Lemma <ref> is proved by defining T≜LK. Since α,γ are the eigenvalues of Y_0, they can be ordered so that α≤γ.
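This construction is easy to verify numerically; the following sketch (added here, with an arbitrary test matrix Y) reproduces the steps of the proof and checks the factorization:

    import numpy as np

    nu = 0.6
    phi = np.pi / 2 * (nu - 1)
    Q = np.diag([1.0, 1j * np.exp(1j * phi)])

    # An arbitrary Hermitian 2x2 test matrix Y.
    Y = np.array([[2.0, 0.3 + 0.7j], [0.3 - 0.7j, -1.0]])

    w = Y[0, 1] * np.exp(-1j * phi)      # Y_12 = (beta + j y) e^{j phi}
    beta, y = w.real, w.imag
    x, z = Y[0, 0].real, Y[1, 1].real

    Y0 = np.array([[x, y], [y, z]])      # real symmetric
    evals, vecs = np.linalg.eigh(Y0)     # Y0 = Z^T diag(alpha, gamma) Z
    Z = vecs.T
    if np.linalg.det(Z) < 0:             # enforce det(Z) = 1
        Z[0, :] *= -1
    alpha, gamma = evals                  # ordered alpha <= gamma by eigh
    L = Q.conj().T @ Z @ Q
    core = np.array([[alpha, beta * np.exp(1j * phi)],
                     [beta * np.exp(-1j * phi), gamma]])
    assert np.allclose(L.conj().T @ core @ L, Y)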
For the continuous-time setting, we get Δ=[ [ 0e^jφ; e^-jφ 0 ]]Θ={(jω)^ν:ω∈Ω}where Ω is a subset of real numbers which is specified by an additional choice of Σ, for example, as follows:c]|c|c|c|c|LF MF HF Ω 0≤ω≤ω_L 0≤ω _1≤ω≤ω_2 ω≥ω_H≥0 Σ [ [ -10;0 ω_L^2ν ]] [ [-1 ω_c; ω_c -ω_1^νω_2^ν ]] [ [ 1 0; 0 -ω_H^2ν ]]where ω_c=j^ν(ω_1^ν+ω_2^ν)2, and LF, MF, HF stand for low, middle, high frequency ranges, respectively. We now can see that the main technical steps to arrive at the FGKYP lemma for finite frequency FOS are to choose an appropriate set ℱ. § FGKYP FOR L_∞ NORM OF FOS§.§ Main Theorem For the FDI in (<ref>), the set 𝒢_1 should be given as𝒢_1={ηη^∗:η=[ [ ((jω)^νI-A)^-1B; I ]]ζ, [ ζ∈ ℂ ^m,ζ≠0; ω∈ ℝ^+ ∪{∞} ]} This set can be described as𝒢_1={ηη^∗:η∈ℳ_θ, θ∈Θ} ℳ_θ ≜{η∈ ℂ ^n+m:η≠0,Ξ_θNη=0}where Θ≜ (jℝ^+)^ν∪{∞} andΞ_θ≜{[ [I_n -θ I_n](θ∈ ℂ ); [0 -I_n](θ=∞) ]., N≜[ [ A B; I_n 0 ]] Therefore, when Θ is defined in (<ref>), Θ is defined asΘ≜{c]cc Θ, if Θ is bounded Θ∪{∞}, otherwise . Now, the main steps to obtain FGKYP lemma for FOS are to choose an appropriate set ℱ in (<ref>) and then express 𝒢_1 in (<ref>) as in (<ref>), which should led to the result that the S-procedure is lossless. <cit.> Let Δ_0, Σ_0∈ℋ_2 and a nonsingular matrix T∈ℂ^2× 2 be given and define Δ,Σ∈ℋ_2 by (<ref>). Consider Ξ_θ in (<ref>), Θ(Δ,Σ) in (<ref>) and Θ(Δ,Σ) in (<ref>). Suppose Θ(Δ,Σ) represents curve(s). The following conditions on a given vector ψ∈ℂ^2n are equivalent.i) Ξ_θψ=0 holds for some θ∈Θ(Δ,Σ).ii) Ξ_s(T⊗ I)ψ=0 holds for some s∈Θ(Δ_0,Σ_0).<cit.> Let F,G be complex matrices of the same size. ThenFG^∗+GF^∗=0if and only if there exists a matrix U such that UU^∗=I and F(I+U)=G(I-U). From lemma <ref>, we get the following lemma.Let f,g∈ℂ^n and g≠0. Thene^-jφfg^∗+e^jφgf^∗=0⇔ f=(jω)^νg for some ω∈ℝ^+Let F=f,G=e^jφg and (1-U)/(1+U)=jω^ν in Lemma <ref>, then we get the desired result.Let Δ_0,Σ_0 be defined in (<ref>), Θ in (<ref>) representing curves, Θ in (<ref>) and Ξ_θ in (<ref>), then the following two conditions are equivalenti) Ξ_sζ=0 for some s∈Θ(Δ_0,Σ_0);ii) ζ^∗(Δ_0⊗ U+Σ_0⊗ V)ζ≥0 for all U,V∈ℋ_n,V≥0 Define ζ=[f^∗ g^∗]^∗. Through some algebraic manipulations, we getζ^∗(Δ_0⊗ U+Σ_0⊗ V)ζ= α f^∗Vf+β e^-jφg^∗Vf+β e^jφf^∗Vg+ γ g^∗Vg+e^-jφg^∗Uf+e^jφf^∗Ug = tr[ (α ff^∗+β e^-jφfg^∗+β e^jφgf^∗+γ gg^∗)V ]+tr[ (e^-jφfg^∗+e^jφgf^∗)U ] Suppose i) holds.It can be verified that i) holds if and only if either a) Θ(Δ_0,Σ_0) is bounded and f=(jω)^νg holds for some ω∈ℝ^+ such that ρ(θ,Δ_0)≥0 or b) Θ(Δ_0,Σ_0) is unbounded (α≥0) and g=0.If f=(jω)^νg, thenζ^∗(Δ_0⊗ U+Σ_0⊗ V)ζ =(αω^2ν+γ)g^∗Vg Because ρ(θ,Δ_0)≥0 and V≥0, we get ζ^∗(Δ_0⊗ U+Σ_0⊗ V)ζ≥0.If α≥0 and g=0, it's obvious that ζ^∗(Δ_0⊗ U+Σ_0⊗ V)ζ≥0 for all U,V∈ℋ_n,V≥0.Suppose ii) is satisfied. It implies thatα ff^∗+β e^-jφfg^∗+β e^jφgf^∗+γ gg^∗≥0e^-jφfg^∗+e^jφgf^∗=0both hold.According to Lemma <ref>, equation (<ref>) implies that either f=(jω)^νg, g≠0or g=0 holds. f=(jω)^νg can further derive i) when Θ(Δ_0,Σ_0) is bounded.g=0 can further derive i) when Θ(Δ_0,Σ_0) is unbounded. This ends the proof.Let N∈ ℂ ^2n×(n+m) and Δ,Σ∈ℋ_2 be given such that Θ in (<ref>) represents curves. Define Θ and Ξ_θ by (<ref> ) and (<ref>), respectively. Then, the set 𝒢_1 defined in (<ref>) can be characterized by (<ref>) withℱ≜{ N^∗(Δ⊗ U+Σ⊗ V)N:U,V∈ℋ_n,V≥0 }Let 𝒢_2 be defined to be 𝒢_1 in (<ref>) with (<ref>) and N_0≜(T⊗ I)N. 
Then, for a nonzero vector η: ηη^∗∈𝒢_1 ⇔ Ξ_θNη=0 for some θ∈Θ̄(Δ,Σ) ⇔ Ξ_sN_0η=0 for some s∈Θ̄(Δ_0,Σ_0) ⇔ η^∗N_0^∗(Δ_0⊗U+Σ_0⊗V)N_0η≥0 for all U,V∈ℋ_n, V≥0 ⇔ ηη^∗∈𝒢_2, where the first and fourth equivalences follow easily from the definitions, and the second and third equivalences hold due to Lemma <ref> and Lemma <ref>, respectively. Now we can obtain the rank-one separable set ℱ. Let N∈ℂ^2n×(n+m) and Δ,Σ∈ℋ_2 be given such that Θ in (<ref>) represents curves. Define Θ̄ by (<ref>), the set ℱ by (<ref>) and the matrix Ξ_θ by (<ref>). Then the set ℱ is admissible and rank-one separable. Clearly, ℱ is a closed convex cone. Since the set 𝒢_1 is nonempty, ℱ is admissible (<cit.>, Lemma 11). From Lemma <ref>, we get Δ⊗U+Σ⊗V=(T⊗I)^∗[ αV, Ue^jφ+βe^jφV; Ue^-jφ+βe^-jφV, γV ](T⊗I), where α≤γ and γ≥0 according to Proposition <ref>. When α<0<γ, define W≜[ √(-α)Ie^-jφ/2, 0; 0, √(γ)Ie^jφ/2 ](T⊗I)N, X≜(U+βV)/√(-αγ), Y≜V. Then the set ℱ can be characterized as ℱ=W^∗ℱ_XYW with ℱ_XY defined in (<ref>). When γ≥α≥0, define K≜[ e^-jφ/2, 0; 0, e^jφ/2 ](T⊗I)N, X≜U+βV, P≜((T⊗I)N)^∗[ αV, 0; 0, γV ](T⊗I)N. Then we get ℱ=K^∗ℱ_XK+𝒫 with ℱ_X defined in (<ref>), and the set 𝒫≜{P} is obviously a subset of positive-semidefinite matrices containing the origin. Since ℱ_X and ℱ_XY are rank-one separable, it can be verified that ℱ is rank-one separable according to Lemma <ref>. Now, we are ready to state and prove the theorem for finite frequency FOS. Let matrices Π∈ℋ_n+m, N∈ℂ^2n×(n+m), and Δ,Σ∈ℋ_2 be given, and let Θ and Θ̄ be defined by (<ref>) and (<ref>), respectively. Suppose Θ represents curves on the right half complex plane and Θ̄ represents Θ∪{∞}. Ξ_θ is defined in (<ref>) and S_θ is defined as S_θ≜(Ξ_θN)_⊥. The following statements are equivalent: i) S_θ^∗ΠS_θ<0, ∀θ∈Θ̄(Δ,Σ). ii) There exist U,V∈ℋ_n such that V>0 and N^∗(Δ⊗U+Σ⊗V)N+Π<0 Note that i) holds if and only if tr(Π𝒢_1)<0 holds, where 𝒢_1 is defined in (<ref>). The set 𝒢_1 can be characterized by (<ref>) with ℱ in (<ref>) according to Lemma <ref>. By Lemma <ref>, the set ℱ is admissible and rank-one separable, which means i) is equivalent to ℱ+Π<0 according to Lemma <ref>, i.e. ii) holds. Because the inequality in (<ref>) is strict, V can be chosen to satisfy V>0 without loss of generality. §.§ L_∞ with Different Frequency Ranges The following gives the results for the L_∞ norm with finite frequency. For a matrix function T(s), the L_∞ norm of T(s) is defined as ‖T(s)‖_L_∞ ≜ sup_ω∈ℝ σ_max(T(jω)), where σ_max is the maximum singular value. <cit.> For a matrix function T(s), there holds ‖T(s)‖_L_∞ = sup_ω≥0 σ_max(T(jω)) Consider an FOS with its transfer function G(s) in (<ref>). Given a prescribed L_∞ performance bound δ>0, the bound ‖G(s)‖_L_∞ = sup_ω σ_max(G(jω)) < δ, where ω belongs to the principal Riemann surface and ω∈Ω_L≜{ω∈ℝ^+:ω≤ω_L}, holds if and only if there exist U,V∈ℋ_n, V>0, such that [ Sym(X)-A^TVA+ω_L^2νV, Y^∗, C^T; Y, -δI-B^TVB, D^T; C, D, -δI ]<0 where X≜e^jφA^TU and Y≜-B^TVA+e^jφB^TU. Let Δ=[ 0, e^jφ; e^-jφ, 0 ], Σ=[ -1, 0; 0, ω_L^2ν ]; then it can readily be verified that Θ(Δ,Σ) represents a curve on the complex plane with the frequency range Ω_L.
Let θ(ω)≜e^jνπ/2ω^ν; then G(jω)=C(θ(ω)I-A)^-1B+D. By some basic matrix calculations, we get sup_ω σ_max(G(jω))<δ ⇔ G^∗(jω)G(jω)-δ^2I<0, ∀ω∈Ω_L ⇔ [ H(θ); I ]^∗Π[ H(θ); I ]<0, ∀θ∈Θ(Δ,Σ), where H(θ)≜(θI-A)^-1B and Π≜[ C^TC, C^TD; D^TC, D^TD-δ^2I ] According to Theorem <ref>, the last part of (<ref>) is also equivalent to the following LMI: [ A, B; I, 0 ]^T[ -V, e^jφU; e^-jφU, ω_L^2νV ][ A, B; I, 0 ]+Π<0 This LMI can be simplified as [ Sym(X)-A^TVA+ω_L^2νV, Y^∗; Y, -δ^2I-B^TVB ]+[ C, D ]^T[ C, D ]<0, where X≜e^jφA^TU, Y≜-B^TVA+e^jφB^TU. Rescaling U,V and utilizing the Schur complement theorem, (<ref>) is finally obtained. Consider an FOS with its transfer function G(s) in (<ref>). Given a prescribed L_∞ performance bound δ>0, the bound ‖G(s)‖_L_∞ = sup_ω σ_max(G(jω)) < δ, where ω belongs to the principal Riemann surface and ω∈Ω_H≜{ω∈ℝ^+:ω≥ω_H}, holds if and only if there exist U,V∈ℋ_n, V>0, such that [ Sym(X)+A^TVA-ω_H^2νV, Y^∗, C^T; Y, -δI+B^TVB, D^T; C, D, -δI ]<0 where X≜e^jφA^TU and Y≜B^TVA+e^jφB^TU. Consider an FOS with its transfer function G(s) in (<ref>). Given a prescribed L_∞ performance bound δ>0, the bound ‖G(s)‖_L_∞ = sup_ω σ_max(G(jω)) < δ, ω∈Ω_M≜{ω∈ℝ^+:ω_1≤ω≤ω_2}, holds if and only if there exist U,V∈ℋ_n, V>0, such that [ Sym(X)-A^TVA-ω_1^νω_2^νV, Y^∗, C^T; Y, -δI-B^TVB, D^T; C, D, -δI ]<0 where X≜A^T(e^jφU+ω_cV), Y≜-B^TVA+B^T(e^jφU+ω_cV), and ω_c=j^ν(ω_1^ν+ω_2^ν)/2. Consider an FOS with its transfer function G(s) in (<ref>). Given a prescribed L_∞ performance bound δ>0, the bound ‖G(s)‖_L_∞ = sup_ω σ_max(G(jω)) < δ, where ω belongs to the principal Riemann surface and ω∈Ω_I≜ℝ^+∪{+∞}, holds if and only if there exist U,V∈ℋ_n, V>0, such that [ Sym(X), Y^∗, C^T; Y, -δI, D^T; C, D, -δI ]<0 where X≜e^jφA^TU and Y≜e^jφB^TU. The theorems for high frequency and middle frequency can be proved similarly to the proof for low frequency. The curve Θ(Δ,Σ) in the high frequency case is chosen as Δ=[ 0, e^jφ; e^-jφ, 0 ], Σ=[ 1, 0; 0, -ω_H^2ν ] The curve Θ(Δ,Σ) in the middle frequency case is chosen as Δ=[ 0, e^jφ; e^-jφ, 0 ], Σ=[ -1, j^ν(ω_1^ν+ω_2^ν)/2; (-j)^ν(ω_1^ν+ω_2^ν)/2, -ω_1^νω_2^ν ] The curve Θ(Δ,Σ) for infinite frequency is chosen as Δ=[ 0, e^jφ; e^-jφ, 0 ], Σ=0_2 This ends the proof. For the infinite frequency range, when the fractional order ν=1, the condition is the same as the KYP lemma <cit.>. Meanwhile, Liang <cit.> proves a theorem of L_∞ for infinite frequency, but he utilizes the GKYP lemma directly. The results look different because he chooses Σ as Σ=[ 0, 1-α; 1-α, 0 ], but the two theorems are equivalent. § FGKYP FOR H_∞ NORM OF FOS In this section, we check the H_∞ norm of FOS. For a matrix function T(s), the H_∞ norm of T(s) is defined as ‖T(s)‖_H_∞ ≜ sup_Re(s)≥0 σ_max(T(s)), where σ_max is the maximum singular value. When we want to check ‖G(s)‖_H_∞<δ, where G(s) is the transfer function in (<ref>), we get the following matrix inequality: [ (s^νI-A)^-1B; I ]^∗Π[ (s^νI-A)^-1B; I ]<0 where s∈ℂ, Re(s)≥0. Now we use the S-procedure to check this condition. A convex region described by two straight lines on the complex plane is defined as Θ(Δ,Σ), with Δ,Σ∈ℋ_2 and Δ≜[ 0, α; ᾱ, 0 ], Σ≜[ 0, β; β̄, 0 ]: Θ(Δ,Σ)≜{θ∈ℂ:[ θ; 1 ]^∗Δ[ θ; 1 ]≥0, [ θ; 1 ]^∗Σ[ θ; 1 ]≥0} Define the set 𝒢_1 in (<ref>) as 𝒢_1={ηη^∗:η=[ (s^νI-A)^-1B; I ]ζ, ζ∈ℂ^m, ζ≠0, s∈ℂ∪{∞}, Re(s)≥0} Then 𝒢_1={ηη^∗:η∈ℳ_θ, θ∈Θ}, ℳ_θ≜{η∈ℂ^n+m:η≠0, Ξ_θNη=0}, where Ξ_θ≜{[ I_n, -θI_n ] (θ∈ℂ); [ 0, -I_n ] (θ=∞)}, N≜[ A, B; I, 0 ] Let s belong to the principal Riemann surface, i.e.
the set {s | -π<arg(s)<π}; only on this sheet do the roots of det(s^νI-A)=0 determine the time-domain behavior and stability of the fractional system <cit.>. Now, we need to find a rank-one separable set ℱ which satisfies (<ref>). §.§ For fractional order 0<ν≤1 <cit.> Let f,g∈ℂ^n and g≠0. Then fg^*+gf^*≥0 ⇔ f=θg for some θ∈ℂ with Re(θ)≥0 Let Θ(Δ,Σ) be defined by (<ref>), Ξ_θ by (<ref>), and let ζ be a given vector. If the region Θ(Δ,Σ) defined by (<ref>) represents the region Ω={s^ν | s∈ℂ, Re(s)≥0, 0<ν≤1} on the complex plane and s belongs to the principal Riemann surface, then the following statements are equivalent. i) Ξ_θζ=0 for some θ∈Θ with Re(θ)≥0; ii) ζ^*[(Δ+Σ)⊗U]ζ≥0 for all U∈ℋ_n, U>0. Let Δ=[ 0, a+jc; a-jc, 0 ] and Σ=[ 0, b+jd; b-jd, 0 ] (α=a+jc, β=b+jd), with a,b,c,d∈ℝ. If Θ represents the region Ω and θ=x+jy∈Θ, x,y∈ℝ, then Θ={θ=x+jy | sin(νπ/2)x+cos(νπ/2)y≥0, sin(νπ/2)x-cos(νπ/2)y≥0}, i.e. a=b=sin(νπ/2)>0 and c=-d=cos(νπ/2). Therefore, when Θ represents the region Ω, there holds α+β=2sin(νπ/2)>0. Because s belongs to the principal Riemann surface, the set Ω implies that Re(s^ν)≥0. Therefore, θ∈Θ(Δ,Σ) implies that Re(θ)≥0. Let ζ=[ f; g ]. Note that i) holds if and only if either f=θg (θ∈ℂ) or g=0 (θ=∞). For statement ii), by some basic algebraic calculation, we get ζ^*[(Δ+Σ)⊗U]ζ = (α+β)(g^*Uf+f^*Ug) = (α+β)tr[(fg^*+gf^*)U] Note that α+β>0. Inequality (<ref>) holding for all U∈ℋ_n, U>0 implies that fg^*+gf^*≥0 According to Lemma <ref>, statement i) is equivalent to statement ii) when g≠0. When g=0, it is obvious that statement i) is equivalent to statement ii). This ends the proof. Now we are ready to state the FGKYP for the H_∞ norm. Let matrices Π∈ℋ_n+m, N∈ℂ^2n×(n+m) be given. Ξ_θ is defined in (<ref>) and S_θ is defined as S_θ≜(Ξ_θN)_⊥. If the region Θ(Δ,Σ) defined by (<ref>) represents the region Ω={s^ν | s∈ℂ, Re(s)≥0, 0<ν≤1} on the complex plane and s belongs to the principal Riemann surface, then the following statements are equivalent: i) S_θ^∗ΠS_θ<0, ∀θ∈Θ; ii) There exists U∈ℋ_n such that U>0 and N^∗[(Δ+Σ)⊗U]N+Π<0 Define the set ℱ≜{N^∗[(Δ+Σ)⊗U]N:U∈ℋ_n, U>0} Let 𝒢_1 be defined by (<ref>), and let 𝒢_2 be defined to be 𝒢_1 in (<ref>) with (<ref>). Then, for a nonzero vector η: ηη^∗∈𝒢_1 ⇔ Ξ_θNη=0 for some θ∈Θ(Δ,Σ) ⇔ η^*N^*[(Δ+Σ)⊗U]Nη≥0 for all U∈ℋ_n, U>0 ⇔ ηη^∗∈𝒢_2, where the first and third equivalences easily follow from the definitions, and the second equivalence holds due to Lemma <ref>. The last step is to prove that the set ℱ is rank-one separable. Note that, if the region Θ(Δ,Σ) represents the region Ω, there holds α+β>0. We get (Δ+Σ)⊗U=[ 0, (α+β)U; (α+β)U, 0 ] Thus (Δ+Σ)⊗U is rank-one separable according to the set ℱ_X in (<ref>). Therefore, according to Lemma <ref>, the set ℱ is rank-one separable. Finally, i) is equivalent to ii) due to Lemma <ref>. This ends the proof. Now we can check the H_∞ norm by an LMI. Consider an FOS with fractional order 0<ν≤1 and its transfer function G(s) in (<ref>). Given a prescribed H_∞ performance bound δ>0, ‖G(s)‖_H_∞<δ holds if and only if there exists U∈ℋ_n, U>0, such that the following LMI holds: [ Sym(A^TU)sin(νπ/2), UBsin(νπ/2), C^T; B^TUsin(νπ/2), -δI, D^T; C, D, -δI ]<0 Let the region Θ(Δ,Σ) defined by (<ref>) represent the region Ω={s^ν | s∈ℂ, Re(s)≥0, 0<ν≤1} on the complex plane, where Δ=[ 0, α; ᾱ, 0 ] and Σ=[ 0, β; β̄, 0 ]. Then there holds α+β=2sin(νπ/2). Let K_θ=(θI-A)^-1B.
By some basic matrix calculations, we have ‖G(s)‖_H_∞<δ ⇔ G^*(s)G(s)-δ^2I<0, Re(s)≥0 ⇔ [ K_θ; I ]^*Π[ K_θ; I ]<0, ∀θ∈Θ, where Π=[ C^TC, C^TD; D^TC, D^TD-δ^2I ] According to Theorem <ref>, there exists a matrix U∈ℋ_n, U>0, such that [ A, B; I, 0 ]^T[ 0, (α+β)U; (α+β)U, 0 ][ A, B; I, 0 ]+Π<0 The above LMI can be further simplified as [ 2Sym(A^TU)sin(νπ/2), 2UBsin(νπ/2); 2B^TUsin(νπ/2), -δ^2I ]+[ C, D ]^T[ C, D ]<0 Rescaling U and utilizing the Schur complement theorem, we finally get (<ref>). When ν=1, the condition is the same as the H_∞ condition for integer order systems <cit.>. Meanwhile, Liang <cit.> gives a sufficient condition of H_∞ with fractional order 0<ν<1; if the unknown matrix U in Theorem <ref> is allowed to be an arbitrary matrix, Liang's theorem and ours are equivalent. §.§ For fractional order 1<ν<2 The following gives the theorems for the H_∞ norm of FOS with fractional order 1<ν<2. <cit.> Let F,G be complex matrices of the same size. Then FG^∗+GF^∗≥0 if and only if there exists a matrix U such that UU^∗≤I and F(I+U)=G(I-U). Thus, we get the following lemma. Let f,g∈ℂ^n and g≠0. Then fg^*+gf^*≥0 ⇔ f=jθg for some θ∈ℂ with Im(θ)≤0 Application of Lemma <ref> with (1-U)/(1+U)=jθ gives the desired result. Let Θ(Δ,Σ) be defined by (<ref>), Ξ_θ by (<ref>), and let ζ be a given vector. If the region Θ(Δ,Σ) defined by (<ref>) represents the region Ω on the complex plane, where Ω and Ω̄ are symmetrical with respect to the real axis and satisfy Ω∪Ω̄={s^ν | s∈ℂ, Re(s)≥0, 1<ν<2} and Ω∩Ω̄={s | s∈ℂ, Im(s)=0}, and s belongs to the principal Riemann surface, then the following statements are equivalent. i) Ξ_θζ=0 for some θ∈Θ with Im(θ)≤0; ii) ζ^*{[T^*_0(Δ+Σ^T)T_0]⊗U}ζ≥0 for all U∈ℋ_n, U>0, where T_0=[ e^jπ/4, 0; 0, e^-jπ/4 ] Let Δ=[ 0, a+jc; a-jc, 0 ] and Σ=[ 0, b+jd; b-jd, 0 ] (α=a+jc, β=b+jd), with a,b,c,d∈ℝ. Because s belongs to the principal Riemann surface, the set Ω can be chosen so that Im(s^ν)≤0. Therefore, θ∈Θ(Δ,Σ) implies that Im(θ)≤0. If Θ represents the region Ω and θ=x+jy∈Θ, x,y∈ℝ, then Θ={θ=x+jy | sin(νπ/2)x+cos(νπ/2)y≥0, cos(νπ/2)y≥0}, i.e. a=sin(νπ/2)>0, b=0, c=d=cos(νπ/2). Let ζ=[ f; jg ]. Note that i) holds if and only if either f=jθg (θ∈ℂ) or g=0 (θ=∞). For statement ii), by some basic algebraic calculation, we get ζ^*{[T^*_0(Δ+Σ^T)T_0]⊗U}ζ = (α+β̄)(g^*Uf+f^*Ug) = (α+β̄)tr[(fg^*+gf^*)U] Note that α+β̄=a+b>0. Inequality (<ref>) holding for all U>0 implies that fg^*+gf^*≥0 According to Lemma <ref>, statement i) is equivalent to statement ii) when g≠0. When g=0, it is obvious that statement i) is equivalent to statement ii). This ends the proof. Let matrix Π∈ℋ_n+m be given. The region Θ(Δ,Σ) defined by (<ref>) represents the region Ω, where Ω and Ω̄ are symmetrical with respect to the real axis on the complex plane and satisfy Ω∪Ω̄={s^ν | s∈ℂ, Re(s)≥0, 1<ν<2} and Ω∩Ω̄={s | s∈ℂ, Im(s)=0}. Meanwhile, s belongs to the principal Riemann surface. Ξ_θ is defined by (<ref>), N by (<ref>), and S_θ is defined as S_θ≜(Ξ_θN)_⊥. Then the following statements are equivalent: i) S_θ^∗ΠS_θ<0, ∀θ∈Θ(Δ,Σ); ii) There exists U∈ℋ_n such that U>0 and N^∗{[T^*_0(Δ+Σ^T)T_0]⊗U}N+Π<0, where T_0=[ e^jπ/4, 0; 0, e^-jπ/4 ]. Define the set ℱ≜{N^∗([T^*_0(Δ+Σ^T)T_0]⊗U)N:U∈ℋ_n, U>0} Let 𝒢_1 be defined by (<ref>), and let 𝒢_2 be defined to be 𝒢_1 in (<ref>) with (<ref>).
Then, for a nonzero vector η, ηη^∗∈𝒢_1 ⇔ Ξ_θNη=0 for some θ∈Θ(Δ,Σ) ⇔ η^* N^*((T^*_0(Δ+Σ^T)T_0)⊗ U)Nη≥0 for all U∈ℋ_n, U>0 ⇔ ηη^∗∈𝒢_2, where the first and third equivalences follow directly from the definitions, and the second equivalence holds due to Lemma <ref>. The last step is to prove that the set ℱ is rank-one separable. Note that (T^*_0(Δ+Σ^T)T_0)⊗ U=(T_0⊗ I)^*((Δ+Σ^T)⊗ U)(T_0⊗ I). Because the region Θ(Δ,Σ) represents the region Ω, α+β̄=ᾱ+β>0 holds. We get (Δ+Σ^T)⊗ U=[ 0 (α+β̄)U; (ᾱ+β)U 0 ]. Thus (Δ+Σ^T)⊗ U is rank-one separable according to the set ℱ_X in (<ref>). Therefore, according to Lemma <ref>, the set ℱ is rank-one separable. Finally, i) is equivalent to ii) due to Lemma <ref>. This ends the proof.
The following theorem checks the H_∞ norm of an FOS with fractional order 1<ν<2. Consider the FOS with fractional order 1<ν<2 and its transfer function G(s) in (<ref>). Given a prescribed H_∞ performance bound δ>0, ‖ G(s)‖_H_∞<δ holds if and only if there exists a matrix U∈ℋ_n, U>0, such that the following LMI holds:
[ Sym(jUA)sin(π/2ν) jUBsin(π/2ν) C^T; -jB^TUsin(π/2ν) -δ I D^T; C D -δ I ]<0
Let the regions Ω and Ω̄ be symmetric with respect to the real axis on the complex plane and satisfy Ω∪Ω̄={ s^ν| s∈ℂ, Re(s)≥0, 1<ν<2 } and Ω∩Ω̄={s| s∈ℂ, Im(s)=0 }. Then ‖ G(s) ‖_H_∞ = sup_Re(s)≥0 σ_max(G(s)) = sup_s∈Ω σ_max(G(s)) holds; this follows from the maximum modulus principle and the complex conjugate symmetry of G(s). The region Θ(Δ,Σ) defined by (<ref>) represents the region Ω, where Δ=[ 0 α; α 0 ] and Σ=[ 0 β; β 0 ] with α+β=2sin(π/2ν). Proceeding as in the proof of Theorem <ref> with α+β=2sin(π/2ν) and applying Theorem <ref>, we obtain the result.
Liang <cit.> also proves a necessary and sufficient H_∞ condition for fractional order 1<ν<2. The two conditions are equivalent because jUsin(π/2ν) can be regarded as an arbitrary complex matrix.
§ NUMERICAL EXAMPLES
In order to use the LMI tools of Matlab, the following fact is needed: a Hermitian matrix satisfies H<0 if and only if the real LMI [ Re(H) Im(H); Im(H)^T Re(H) ]<0 holds.
The following example checks the L_∞ norm of an FOS over a low frequency range. Consider the transfer function G(s) in (<ref>) with the parameters A = [ -12.1 2.3; 2.37 -16.2 ], B=[ -2; 1.2 ], C =[ 1.5 1.9 ], D=0.8, ν=0.6, δ=0.9, 0≤ω≤100. The maximum singular values are shown in figure <ref>; the L_∞ norm is less than 0.77 in this frequency range. Due to Theorem <ref>, solving the LMI (<ref>) via Matlab, we get U=[ 4.4908 7.3472; 7.3472 13.5229 ], V=[ 0.0772 0.1525; 0.1525 0.4045 ]. This confirms that L_∞<0.9 holds. According to figure <ref>, L_∞<0.77<0.9, consistent with Theorem <ref>. However, when we set δ=0.6, the LMI (<ref>) cannot be solved, because 0.6 is less than the maximum value shown in figure <ref>. This again verifies Theorem <ref>.
The following gives an example of Theorem <ref>. Consider the transfer function G(s) in (<ref>) with the parameters A=[ -1.9 1.3; 0.6 -1.5 ], B=[ -1.8; 2.7 ], C=[ 2.2 3.1 ], D=0.2, ν=0.7, δ=9.2. The eigenvalues of this system are shown in figure <ref>; the system is stable. Solving the LMI (<ref>) via Matlab, we get U=[ 1.6211 0.8928; 0.8928 2.2315 ]. This confirms that H_∞<9.2 holds. However, when we set δ=1.6, the LMI (<ref>) cannot be solved, indicating that H_∞<1.6 does not hold.
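The checks above can also be reproduced outside of Matlab. The following sketch is our own illustration, not the authors' script: it assumes numpy and cvxpy are available, reads Sym(X) as X+X^T, and uses the data of the second example. It first estimates the H_∞ norm by sweeping the images of the right-half-plane boundary rays on the principal branch, and then tests feasibility of the LMI of the H_∞ theorem for 0<ν≤1 with δ=9.2 (the real-embedding fact above would only be needed for complex data):

```python
import numpy as np
import cvxpy as cp

A = np.array([[-1.9, 1.3], [0.6, -1.5]])
B = np.array([[-1.8], [2.7]])
C = np.array([[2.2, 3.1]])
D = np.array([[0.2]])
nu, delta = 0.7, 9.2

# Brute-force H_inf estimate: s = w*exp(+-j*pi/2) on the principal branch,
# so s^nu = w^nu * exp(+-j*nu*pi/2).
sig = 0.0
for w in np.logspace(-3, 3, 2000):
    for phase in (np.exp(1j*nu*np.pi/2), np.exp(-1j*nu*np.pi/2)):
        G = C @ np.linalg.inv(w**nu * phase * np.eye(2) - A) @ B + D
        sig = max(sig, np.linalg.svd(G, compute_uv=False)[0])
print("sup sigma_max(G) ~", round(sig, 3))   # should lie below delta = 9.2

# LMI of the H_inf theorem for 0 < nu <= 1; with real data a real symmetric
# U suffices. Strict inequalities are approximated by a small margin eps.
sn, eps = np.sin(np.pi*nu/2), 1e-6
U = cp.Variable((2, 2), symmetric=True)
M = cp.bmat([[(A.T @ U + U @ A)*sn, (U @ B)*sn,       C.T],
             [(B.T @ U)*sn,         -delta*np.eye(1), D.T],
             [C,                    D,                -delta*np.eye(1)]])
M = (M + M.T) / 2                      # enforce symmetry for the solver
prob = cp.Problem(cp.Minimize(0), [U >> eps*np.eye(2), M << -eps*np.eye(4)])
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)   # expected for delta = 9.2
```

Re-running the feasibility test with δ below the swept maximum should render the LMI infeasible, mirroring the δ=1.6 case discussed above.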
§ CONCLUSION
In this paper, the FGKYP is proved, which extends the GKYP to fractional order systems. The S-procedure is used to bridge the matrix inequalities and the frequency ranges. We prove the FGKYP for the L_∞ and H_∞ norms of FOS, respectively. Based on the FGKYP, LMI conditions for the L_∞ norm of FOS over different frequency ranges are derived, as well as for the H_∞ norm, where the FGKYP differs between fractional orders 0<ν≤1 and 1<ν<2. Numerical examples are given to verify the theorems.
http://arxiv.org/abs/1704.08425v1
{ "authors": [ "Xiaogang Zhu", "Junguo Lu" ], "categories": [ "math.OC", "93C05" ], "primary_category": "math.OC", "published": "20170427040049", "title": "Fractional Generalized KYP Lemma for Fractional Order System within Finite Frequency Range" }
[email protected] Heinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München, 85747 Garching, Germany Department of Materials Science, Tohoku University, Sendai 980-8579, JapanHeinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München, 85747 Garching, Germany Faculty of Physics and Center for Nanointegration, CENIDE, University of Duisburg-Essen, 47048 Duisburg, Germany Department of Materials Science, Tohoku University, Sendai 980-8579, JapanHeinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München, 85747 Garching, GermanyFaculty of Physics and Center for Nanointegration, CENIDE, University of Duisburg-Essen, 47048 Duisburg, [email protected] Heinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München, 85747 Garching, Germany The phase stabilities and ordering tendencies in the quaternary full-Heusler alloys NiCoMnAl and NiCoMnGa have been investigated by in-situ neutron diffraction, calorimetry and magnetization measurements. NiCoMnGa was found to adopt the L2_1 structure, with distinct Mn and Ga sublattices but a common Ni-Co sublattice. A second-order phase transition to the B2 phase with disorder also between Mn and Ga was observed at 1160K. In contrast, in NiCoMnAl slow cooling or low-temperature annealing treatments are required to induce incipient L2_1 ordering, otherwise the system displays only B2 order. Linked to this L2_1 ordering, a drastic increase in the magnetic transition temperature was observed in NiCoMnAl, while annealing affected the magnetic behavior of NiCoMnGa only weakly due to the low degree of quenched-in disorder. First principles calculations were employed to study the thermodynamics as well as order-dependent electronic properties of both compounds. It was found that a near half-metallic pseudo-gap emerges in the minority spin channel only for the completely ordered Y structure, which however is energetically unstable compared to the predicted ground state of a tetragonal structure with alternating layers of Ni and Co. The experimental inaccessibility of the totally ordered structures is explained by kinetic limitations due to the low ordering energies. Ordering tendencies and electronic properties in quaternary Heusler derivatives Michael Leitner December 30, 2023 ===============================================================================§ INTRODUCTION §.§ Motivation and Scope The class of Heusler alloys, with the ternary system Cu_2MnAl as the prototypical representative,<cit.> hosts a variety of systems displaying intriguing properties.<cit.> For instance, the latent structural instability in the magnetic Ni_2Mn-based compounds gives rise to significant magnetic shape memory<cit.> and magnetocaloric<cit.> effects. On the other hand, they can also display attractive properties that are directly related to their electronic configuration, with the proposal of spintronics, which relies on the detection and manipulation of spin currents, as an example. In a magnetic tunnel junction for instance, the achievable tunneling magnetoresistive effect and thereby the miniaturization of components depends on the spin polarization of the conduction electrons in the electrodes<cit.>. 
As a consequence, half-metallic materials, which have a 100% spin polarization due to a band gap at the Fermi level in one spin channel, are highly sought after and currently the focus of both theoretical and experimental investigations. The first half-metal, identified by theoretical calculations in 1983 by de Groot et al.,<cit.> was the half-Heusler compound NiMnSb of C1_b structure; half-metallic properties have since also been predicted<cit.> and experimentally observed<cit.> in full-Heusler alloys with L2_1 structure. Recently, a large number of quaternary Heusler derivatives, among them NiCoMnAl<cit.> and NiCoMnGa<cit.>, have also been suggested by ab initio calculations to be half-metals in their fully ordered Y structure <cit.>. Half-metallic properties in the Ni_2-xCo_xMnAl<cit.> and Ni_2-xCo_xMnGa<cit.> systems have additionally been proposed for the Co-rich side of the respective phase diagrams on the basis of magnetization measurements via the Generalized Slater-Pauling rule. It is obvious that the degree of chemical order will have direct consequences for the half-metallic properties of these systems. However, the connection between atomic order, segregation and functional properties has also been established for the magnetocaloric and metamagnetic shape memory effects,<cit.> ferroic glasses,<cit.> and the recently reported shell ferromagnetism in off-stoichiometric Heusler compounds.<cit.>
In assessing the potential of a given material for application following from its electronic structure, theoretical and experimental investigations have contrasting characteristics: in ab initio calculations, the distribution of electronic charge is the fundamental quantity that is considered, which depends in principle only on the positions of the ions and their atomic numbers. From this, other important properties can be derived, like total energies, magnetic moments and forces on the ions. Different structures can be compared in terms of their total energies, but chemical disorder must be taken into account appropriately. This can be handled very efficiently in terms of the coherent potential approximation (see Ref. <cit.> for a recent review), which, however, does not provide an easy way to account for ionic relaxations. Explicit calculations of disordered structures with randomly distributed atoms in larger supercells are much more involved and numerically intensive; thus, for practical reasons an ordered configuration is often assumed to be representative. In experiments, on the other hand, the state of order in the sample is directly relevant for the potential application, while determining aspects of the electronic structure, especially the spin polarization, is often quite hard. Thus, it suggests itself to combine the respective strengths of experiment and theory, which is what we set out to do in this paper. Specifically, in the systems of NiCoMnAl and NiCoMnGa we study the degrees of equilibrium long-range order and the associated order/disorder phase transitions by in situ neutron diffraction, and the kinetics of order relaxation during isothermal annealing by way of its effect on magnetization and Curie temperature. Further, we perform ab initio calculations on different ordered and disordered structures to determine the associated electronic structures as well as ordering energies.
As we will show, these calculations imply that among the realistic candidates only the hitherto assumed Y ordering displays near-half-metallicity, but that it does not correspond to the actual ground state. In addition, the associated ordering energies are small, which explains the experimentally observed stability of disorder among Ni and Co.
§.§ States of order in quaternary Heusler derivatives
To facilitate the discussion of the different ordered quaternary structures and their relations later in this article, we enumerate here the structures, define the nomenclature and summarize the pertinent knowledge on their ternary parent compounds. Heusler alloys in the strict sense of the word are ternary systems of composition X_2YZ displaying L2_1 order, which is defined by the space group 225 (Fm3̅m) with inequivalent occupations of the Wyckoff positions 4a, 4b and 8c. Typically, X is a late transition metal occupying preferentially 8c, while an early transition metal Y and a main-group metal Z occupy the other two sites,<cit.> with Cu_2MnAl the prototypical representative. Ni_2MnGa conforms to the above definition and displays a stable L2_1 phase at intermediate temperatures.[We neglect here the martensitic transitions below room temperature.] Around 1053K it shows a second-order disordering transition to the B2 (CsCl) structure,<cit.> corresponding to a mixing of Mn and Ga, that is, it acquires space group 221 (Pm3̅m) with Wyckoff position 1a occupied preferentially by Ni, while Mn and Ga share position 1b. This partial disordering can be understood by the observation that B2 order, i.e., the distinction between Ni on the one hand and Mn and Ga on the other hand, is stabilized by nearest-neighbor interactions on the common bcc lattice, while the ordering between Mn and Ga corresponding to full L2_1 order can only be effected by the presumably weaker next-nearest-neighbor interactions. Indeed, in Ni_2MnAl only the B2 state or at most very weak L2_1 order can experimentally be observed.<cit.> In both systems the B2 state is stable up to the melting point, that is, there is no transition to the fully disordered bcc state. In the Co-based systems, the situation is remarkably similar, with well-developed L2_1 order in Co_2MnGa and only B2 order in Co_2MnAl.<cit.>
It seems probable, and is indeed corroborated by our experimental observations to be reported below, that the behavior of the quaternary systems NiCoMnGa and NiCoMnAl can be traced back to their ternary parent compounds. The most plausible candidates of ordered structures following this reasoning are illustrated in Fig. <ref>. Given that both Ni- and Co-based ternary parents display the B2 structure at high temperatures, it is natural to assume this to be also the case for NiCoMnZ, with site 1a shared by Ni and Co and site 1b by Mn and Z. We will denote this as (NiCo)(MnZ), where the parentheses denote mixing between the enclosed elements.[We do not consider the other two B2 possibilities (NiMn)(CoZ) and (NiZ)(CoMn).] As temperature is decreased, transitions to states of higher order can appear. For the Mn-Z sublattice, a NaCl-like ordering of Mn and Z is most likely by analogy with the ternary parents.
Assuming the same kind of interaction favoring unlike pairs also between Ni and Co, the realized structures depend on the relative strengths: for dominating Mn-Z interactions, the B2 phase would transform to an L2_1 structure of type (NiCo)MnZ, where Ni and Co are randomly arranged over the 8c sites, and in the converse case to L2_1 NiCo(MnZ) with Mn and Z on 8c. In either case, the ordering of the other sublattice at some lower temperature would transform the system to the so-called Y structure<cit.> of prototype LiMgPdSn<cit.> with space group 216 (F4̅3m) and Wyckoff positions 4a, 4b, 4c, and 4d being occupied by Ni, Co, Mn and Z, respectively. However, as the kind of chemical interaction within the Ni-Co sublattice is as yet unknown, also other possibilities have to be considered. In principle, there is an unlimited number of superstructures on the L2_1 (NiCo)MnZ structure, corresponding to different Ni/Co orderings. In particular, apart from the above-mentioned cubic Y structure (with NaCl-type Ni/Co ordering) there are two other structures with a four-atom primitive cell, making them appear a priori equally likely to be realized as the Y structure. These are tetragonal structures characterized by either alternating columns or planes of Ni and Co atoms, which we denote by T_c and T_p, respectively. Specifically, the T_c structure has space group 131 (P4_2/mmc), with Ni on Wyckoff position 2e, Co on 2f, Mn on 2c, and Z on 2d, while T_p has space group 129 (P4/nmm) with Ni on 2a and Co on 2b, while Mn and Z reside on two inequivalent 2c positions, with prototype ZrCuSiAs<cit.>. Note that the Ni/Co ordering in these three fully-ordered structures can equally be understood as alternating planes in different crystallographic orientations, with T_p corresponding to (1,0,0) planes, T_c to (1,1,0), and Y to (1,1,1) planes. Finally, of course the possibility of phase separation into L2_1 Ni_2MnZ and Co_2MnZ has to be considered.
§ MACROSCOPIC PROPERTIES
§.§ Sample preparation and thermal treatments
Nominally stoichiometric NiCoMnAl and NiCoMnGa alloys have been prepared by induction melting and tilt casting of high-purity elements under argon atmosphere. After casting, the samples have been subjected to a solution-annealing treatment at 1273K followed by quenching in room-temperature water. In this state, the samples have been checked for their actual composition using wavelength-dispersive X-ray spectroscopy (WDS). For each alloy, eight independent positions have been measured. The averages over the retrieved values are given in Table <ref>, showing satisfactory agreement with the nominal compositions. Additionally, sample homogeneity was confirmed by microstructure observation using backscattered electrons. In order to track the ordering processes of the alloys upon low-temperature isothermal aging, samples have been annealed at 623K for different times and water-quenched. Thus, for both systems we consider four states, corresponding to the as-quenched state and after annealings for 6h, 24h, and 72h, respectively, denoted in the following by aq, ann6h, ann24h, and ann72h. Previous results <cit.> have proven this low-temperature annealing protocol to be successful for increasing the achievable state of order in structurally similar alloys of the Ni_2MnAl system.
§.§ Magnetization measurements
Magnetization measurements corresponding to the different annealing conditions were performed; specifically, the Curie temperatures T_C and spontaneous magnetization values M_S have been determined.
Temperature-dependent magnetization measurements were done in a TOEI Vibrating Sample Magnetometer (VSM) applying an external magnetic field of 500Oe in a temperature range from room temperature to 693K. The spontaneous magnetization for NiCoMnGa has been determined with a Superconducting Quantum Interference Device (SQUID) based Quantum Design MPMS system at 6K employing external magnetic fields up to 7T. Since for the ductile NiCoMnAl alloy sample preparation turned out to affect the sample properties, presumably due to introduced mechanical stresses, in this alloy system temperature-dependent magnetization measurements have been performed by VSM on samples of larger size in an external field of 1.5T.
Figure <ref> a) and b) show field-dependent magnetization curves (M(H)) of NiCoMnGa in four different annealing conditions measured at 293K and 6K. The spontaneous magnetization M_S has been retrieved by constructing Arrott plots. The obtained values are given in Table <ref>. While, as expected, the spontaneous magnetization increases with decreasing measurement temperature, no apparent effects on M_S due to annealing are visible, with the values of M_S scattering around 4.15 and 4.50 at 293K and 6K, respectively. These values agree reasonably with previous studies, where M_S was stated as 4.67 at a temperature of 5K.<cit.> Figure <ref> c) and d) show M(H) curves of NiCoMnAl in four different annealing conditions measured at 293K and 77K. Additionally, M(T) curves of NiCoMnAl in the four annealing conditions at an external magnetic field of 15kOe that were used to extrapolate the 0K value of the spontaneous magnetization M_S are given in the Supporting Information. The determined values for M_S are listed in Table <ref> and range from 4.68 in the aq state to 4.89 in the ann72h state. This increase of 0.22 during annealing in NiCoMnAl is significantly larger than the corresponding effect in NiCoMnGa. Clearly, the difference in M_S with annealing further increases at higher measurement temperatures due to the lower magnetic transition temperatures in the shorter annealed samples. The values obtained for M_S are in good agreement with two previous studies where M_S in the B2 ordered state was reported as 4.66<cit.> and 4.90.<cit.>
Figure <ref> shows the corresponding temperature-dependent magnetization measurements (M(T)) of NiCoMnGa and NiCoMnAl under an external field of 500Oe. The magnetization curves are normalized, since absolute magnetization values are not meaningful under low external magnetic fields due to sample-shape-specific demagnetization fields. In the following discussion, we define the apparent Curie temperature as the locus of the maximal slope of the M(T) curves.
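Both evaluation steps are easily automated. As a minimal sketch (our own, assuming numpy; the array names, units, and the high-field fit range are placeholder choices, not the actual evaluation script), the Arrott construction for M_S and the slope criterion for the apparent T_C read:

```python
import numpy as np

def arrott_ms(H, M, fit_fraction=0.5):
    """M_S from an Arrott plot: fit the high-field part of M^2 vs. H/M
    linearly and extrapolate to H/M = 0; the intercept is M_S^2."""
    x, y = H / M, M**2
    sel = np.argsort(x)[int(len(x) * (1 - fit_fraction)):]  # high-field points
    slope, intercept = np.polyfit(x[sel], y[sel], 1)
    return np.sqrt(intercept) if intercept > 0 else 0.0     # zero above T_C

def apparent_tc(T, M):
    """Apparent Curie temperature: locus of the steepest decrease of M(T)."""
    return T[np.argmin(np.gradient(M, T))]

# Usage with measured isotherms/curves:
# Ms = arrott_ms(H_Oe, M_emu);  Tc = apparent_tc(T_K, M_normalized)
```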
In NiCoMnAl, the magnetic transition temperature increases from 572.1K to 602.9K with annealing of the samples, reflecting a corresponding increase of L2_1 order. The specific transition temperatures are given in Table <ref>. This compares satisfactorily with the value of 570K quoted by Okubo et al.<cit.> for samples quenched from the B2 region. An important point to note is that, in order to probe the high Curie temperatures in these systems, during the measurements the sample is subjected to temperatures where the ordering kinetics become appreciable. Specifically, with ordering kinetics at 623K on the order of hours, the Curie temperatures below 600K measured on heating at a rate of 2K/min can safely be assumed to correspond to the degree of order imposed by the isothermal annealing treatments. However, at the maximum temperature of 683K the degree of order will relax during the measurement towards the corresponding equilibrium value, leading to an increase of order for the aq sample and a decrease when starting from a high degree of order. This difference between heating and cooling curves is well discernible.
NiCoMnGa shows a magnetic transition from the ferromagnetic to the paramagnetic state between 636.0 and 638.2K. Since the degree of L2_1 order in NiCoMnGa is high in all annealing conditions, annealing has a much smaller effect on T_C than in NiCoMnAl. The determined transition temperatures are given in Table <ref>. Apparently, a small increase of order with annealing still exists in this alloy system. We attribute the constant offset of about 3.5K between heating and cooling to thermal inertia.
§.§ Differential scanning calorimetry
Differential Scanning Calorimetry (DSC) has been employed to analyze both alloys with respect to magnetic and structural phase transitions on a Netzsch DSC 404 C Pegasus. All measurements have been performed at a heating rate of 10K/min over a temperature range from 300 K to 1273 K. Figure <ref> shows DSC results for the NiCoMnGa and NiCoMnAl alloys that have been subjected to solution annealing at 1273K, quenching to room temperature, followed by a low-temperature annealing at 623K applied with the intention to adjust a large degree of L2_1 order. Taking into account the different ordering kinetics, the NiCoMnGa alloy was annealed for 24h, while the NiCoMnAl alloy was annealed for 72h. Both alloys show a clear magnetic transition from the ferromagnetic to the paramagnetic state at 592K and 635K for NiCoMnAl and NiCoMnGa, respectively. Those values agree well with the values obtained by magnetization measurements (Table <ref>), justifying our approach of defining the Curie temperatures via the position of maximal slope in the magnetization under constant field. NiCoMnGa additionally shows an order-disorder phase transition at higher temperatures that can be assigned, according to our neutron diffraction measurements (Sec. <ref>), to the transition from the L2_1-(NiCo)MnGa to the B2-(NiCo)(MnGa) structure, which is in accordance with the behavior of the structurally similar Ni_2MnGa compound<cit.> and with previous results from Kanomata et al.<cit.> The phase transition temperature was determined as 1151K, a value in excellent agreement with the 1152K reported in Ref. Kanomata2009. In contrast, NiCoMnAl does not show any further apparent peaks in the calorimetric signal besides the magnetic transition.
§ NEUTRON DIFFRACTION
Neutron diffraction measurements have been performed at the SPODI<cit.> high-resolution neutron powder diffractometer at the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, Germany. Polycrystalline samples were measured continuously on heating and cooling between room temperature and 1273K, employing rates of approximately 2K/min and a recording frequency of approximately one pattern per 15 minutes. Measurements have been done using Nb sample holders and employing a neutron wavelength of 1.5406Å. Temperature-dependent lattice constants, peak widths and structure factors corresponding to the different degrees of long-range order have been refined. Additionally, for the depiction of the waterfall plots, data treatment as described in Ref. Hoelzel2012 has been applied. Figure <ref> shows waterfall plots of the neutron diffraction patterns of NiCoMnGa/Al upon heating and cooling on a logarithmic pseudocolor scale.
All reflection families, namely L2_1, B2 and A2, as well as the peaks due to the Nb sample holder, are labeled in the figure. Their presence corresponds to the symmetry breaking into inequivalent sublattices as discussed in Sec. <ref>, and their strength indicates the quantitative degree of long-range order. The A2 peaks are not influenced by any disorder in the system, since here all lattice sites contribute in phase. The presence of the B2 peak family indicates different average scattering lengths on the Ni-Co and the Mn-Z sublattices. Finally, L2_1 peaks are due to a further symmetry breaking between either the 4a and 4b and/or 4c and 4d sublattices. Note that such a qualitative reasoning cannot distinguish whether the system has the Y structure or one of the two possible L2_1 structures, which can only be decided by a quantitative analysis (as will be done below). As for the A2 peak family, the intensity of the B2 peak family is not influenced by the degree of L2_1 order.
In the waterfall depiction, the evolution of peak position and, in qualitative terms, peak intensity with temperature can be followed nicely. Initially, the samples correspond to the state quenched from 1273K. Already in this state, NiCoMnGa exhibits L2_1 order as evidenced by the presence of the corresponding diffraction peaks. Upon heating, first of all the thermal expansion of the lattice is observed, with the peak positions shifting to smaller scattering angles. Simultaneously, the peaks stemming from the Nb sample holder can clearly be distinguished from the sample peaks due to their lower rate of thermal expansion. At approximately 1160K, a disordering phase transition from the L2_1 phase to the B2 phase is observed. This is reversed on cooling at nearly the same temperature, which shows that at these high temperatures the equilibrium states of order are followed closely. The observed value of 1160K is in good agreement with the 1151K determined by calorimetry.
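The sensitivity of the individual peak families to the two types of order can be made explicit via the corresponding structure factors. The following sketch is our own illustration, not part of the refinement: it assumes numpy, ideal site occupations, and tabulated bound coherent neutron scattering lengths (approximately b_Ni = 10.3 fm, b_Co = 2.49 fm, b_Mn = -3.73 fm, b_Ga = 7.29 fm), and evaluates the static structure factor over the four fcc sublattices at (0,0,0), (1/4,1/4,1/4), (1/2,1/2,1/2) and (3/4,3/4,3/4) for one representative reflection of each family:

```python
import numpy as np

b = {"Ni": 10.3, "Co": 2.49, "Mn": -3.73, "Ga": 7.288}   # fm

# Average scattering length on the four fcc sublattices for each model.
mix = lambda x, y: (b[x] + b[y]) / 2
models = {
    "Y NiCoMnGa":       [b["Mn"], b["Ni"], b["Ga"], b["Co"]],
    "L2_1 (NiCo)MnGa":  [b["Mn"], mix("Ni", "Co"), b["Ga"], mix("Ni", "Co")],
    "L2_1 NiCo(MnGa)":  [b["Ni"], mix("Mn", "Ga"), b["Co"], mix("Mn", "Ga")],
    "B2 (NiCo)(MnGa)":  [mix("Mn", "Ga"), mix("Ni", "Co"),
                         mix("Mn", "Ga"), mix("Ni", "Co")],
}
pos = np.array([[0, 0, 0], [.25, .25, .25], [.5, .5, .5], [.75, .75, .75]])

for name, bs in models.items():
    for hkl in ([1, 1, 1], [2, 0, 0], [2, 2, 0]):   # L2_1-, B2-, A2-type
        F = sum(bi * np.exp(2j * np.pi * np.dot(hkl, r))
                for bi, r in zip(bs, pos))
        print(f"{name:17s} {hkl}  |F|^2 = {abs(F)**2:6.1f}")
```

Running this reproduces the qualitative statements above: the (220) and (200) intensities are identical for all four models with B2 order, the (111) intensity vanishes for B2 but not for any L2_1-type order, and the (111) intensities of the Y and the two L2_1 variants differ only quantitatively, via the additional Ni/Co (or Mn/Ga) scattering contrast.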
The absence of well-defined L2_1 order as well as the pronouncedly lower B2-L2_1 transition temperature in NiCoMnAl compared to the NiCoMnGa alloy is consistent with the behavior observed in the related Ni_2MnAl and Ni_2MnGa compounds where transition temperatures of, respectively, 775K<cit.> and 1053K<cit.> have been reported.While confirming a state of B2 order, in neutron diffraction no magnetic superstructure peaks are observed. Thus, in contrast to Ni_2MnAl, where Ziebeck et al. <cit.> discovered a helical magnetic structure manifesting itself in form of antiferromagnetic superstructure reflections and satellite peaks at the (200) and (220) reflections, NiCoMnAl is entirely ferromagnetic even under B2 order.This goes along with M(H) measurements (Sec. <ref>) showing prototypical ferromagnetic properties. In contrast, for Ni_2MnAl, antiferromagnetic properties haven been reported.<cit.> Presumably, the drastic difference in magnetic structure results from strong ferromagnetic interactions in the system introduced by Co, overcoming the antiparallel coupling between neighboring Mn atoms.Figure <ref> shows the temperature-dependent lattice constants retrieved from fitting the in-situ neutron diffraction data as well as the corresponding temperature-dependent thermal expansion coefficients. At 340K, the lattice constant of approximately 5.81Å in as quenched NiCoMnGa is only slightly larger than the one of NiCoMnAl with approximately 5.80Å, while the thermal expansion is similar in both alloys with a value of approximately 2×10^-5K^-1. In the case of NiCoMnGa, the heating and cooling curves coincide, indicating little effect of the applied quenching treatment. The B2-L2_1 transition is clearly mirrored in the lattice constant, with a maximum in the thermal expansion coefficient around 1160K, in agreement with the calorimetric transition temperature and the vanishing of L2_1 intensities in neutron diffraction. In the case of NiCoMnAl, the B2–L2_1 ordering transition is neither visible directly in the lattice constant nor in the thermal expansion coefficient. However, this system displays another striking effect with the divergence of the lattice constants on heating and cooling at intermediate temperatures. The absence of an analogous effect in the determined Nb lattice constants proves that this deviation is real as opposed to, e.g., an error in the determination of the sample temperature. We interpret it to be due to a superposition of a lattice expansion due to quenched-in disorder with a lattice contraction due to a quenched-in vacancy supersaturation. On heating, around 650K ordering kinetics become active, leading to a relaxation of the lattice expansion, while only at temperatures above 1100K vacancies become mobile enough to equilibrate their concentrations at vacancy sinks such as surfaces or grain boundaries. Thus, in this interpretation the agreement in the lattice constants of the slow-cooled and quenched states at low temperatures is just a coincidence.Figures <ref> show the temperature-dependent structure factors of NiCoMnGa and NiCoMnAl, i.e., essentially the ratio of the intensities of the B2 and L2_1 peaks to the A2 peak families after taking into account Lorentz factors and Debye-Waller factors. The theoretical structure factors for different kinds of disorder are depicted in the figures as stroked lines. In the case of NiCoMnGa, the second-order B2–L2_1 transition at 1160K is clearly visible. 
Additionally, this evaluation gives credence to the scenario of the observed L2_1 intensity being due to solely Mn/Ga order as opposed to Y ordering or only Ni/Co ordering. Interestingly, with increasing temperature also the degree of B2 order decreases somewhat in both systems. Also, the degrees of order on cooling are always higher than on heating, which is indicative of some amount of disorder after quenching. These qualitative conclusions seem valid even though quantitative interpretations of the data have to be treated with caution considering the limited number of crystallite grains fulfilling the Bragg condition that defines the statistical precision.§ FIRST-PRINCIPLES CALCULATIONSWe performed ab initio calculations for the structures proposed in Sect. <ref>. Specifically, we computed ordering energies and electronic densities of states (DOS) by plane-wave density functional theory as implemented in VASP (Vienna Ab-initio Simulation Package),<cit.> and magnetic interactions in the Liechtenstein approach<cit.> as implemented by Ebert et al. <cit.> in their Korringa-Kohn-Rostoker Green's function code (SPR-KKR). §.§.§ Computational detailsIn the VASP calculations, the disordered structures were realized by 432 atom supercells (corresponding to 6×6×6 bcc cells) with random occupations, taking advantage of the efficient parallelization in VASP for massively parallel computer hardware. Here, the wavefunctions of the valence electrons are described by a plane wave basis set, with the projector augmented wave approach taking care of the interaction with the core electrons.<cit.> Exchange and correlation was treated in the generalized gradient approximation using the formulation of Perdew, Burke and Ernzerhof.<cit.> We converged unit cell dimensions and atomic positions by a conjugate gradient scheme until forces and pressures reached values around 3meV/Åand 1kBar, respectively.For the structural relaxations of the disordered systems, we used a 2×2×2 Monkhorst-Pack k-mesh with the 432 atom supercells in combination with Methfessel-Paxton<cit.> Fermi surface smearing (σ=0.1eV), while total energies and densities of states were calculated by the tetrahedron method with Blöchl corrections<cit.> using a 4×4×4 k-mesh. A 17×17×17 k-mesh was employed for the ordered structures represented in a cubic 16 atom unit cell. In all our calculations we allowed for a spontaneous spin polarization, always resulting in stable ferromagnetism.In the SPR-KKR calculations, the ferromagnetic ground state was chosen as reference and disorder was treated analytically in the framework of the coherent potential approximation. The electronic density of states obtained for the different disordered structures agreed very well with the results obtained from the plane wave calculations, which corroborates our explicit supercell-based description.§.§.§ Formation energies and stable structuresThe results of our total-energy calculations are shown in Fig. <ref> and given in Tab. <ref>. In addition to the quaternary systems, we also computed the ternary full Heusler systems for use as reference energies, specifically cubic L2_1Co_2MnAl, Ni_2MnAl, and Co_2MnGa, as well as tetragonal L1_0 Ni_2MnGa, according to the martensitic transition occurring in the latter case. The energy differences are always specified with respect to the four-atom Heusler formula unit in the fully relaxed states. As expected for isoelectronic systems, the energy differences of the different phases behave similar in NiCoMnAl and NiCoMnGa. 
In both cases, we observe a significant gain in energy by ordering the main group element Z and Mn. As one would expect, the B2 phase is among the least favorable ones in terms of total energy, and thus its observed thermodynamic stability at high temperatures is due to its large configurational entropy. The fully disordered bcc phases turned out to be significantly higher in energy, at 1.00 eV/f.u. for NiCoMnGa and 1.07 eV/f.u. for NiCoMnAl (both without relaxation), and are therefore not included in Fig. <ref>. In contrast, the fully ordered Y structure, which has previously been proposed as a new candidate for a half metal, appears significantly more stable, not only against B2 disorder but also against decomposition into the ternary phases. However, a surprising result of our calculations is that NaCl-type ordering of Ni and Co is always disfavored compared to random disorder: this pertains both to L2_1 NiCo(MnZ), which is about 20meV higher in energy than B2 (NiCo)(MnZ), as well as the energetical gain of about 12meV when Y NiCoMnZ is disordered to L2_1 (NiCo)MnZ. Thus, considering only structural thermodynamics, the Y structure will not be thermodynamically stable at any temperature, as it has both higher internal energy as well as lower configurational entropy compared to L2_1 (NiCo)MnZ.However, the partially disordered L2_1 structure should not be the ground state. Indeed, in both NiCoMnAl and NiCoMnGa the two tetragonal structures with a four-atom unit cell and have lower energies than all structures considered up to now. Thus, our calculations identify with the alternation of Ni and Co (1,0,0) planes as the ground state structure. We are confident that, at least among the superstructures on the bcc lattice, there should be no structures with significantly lower energies, as the NaCl-type Mn/Z order with its large energy gain seems quite stable, while any Ni/Co order different from the three kinds considered here would need to rely on quite long-range interactions.We observe that the relaxation procedure yields a considerable energy gain for the disordered structures. An analysis of the corresponding atomic displacements is given in the supplementary information, evidencing an expansion of ⟨ 1,0,0⟩-coordinated pairs made up of equal atoms due to Pauli repulsion as common characteristic of the relaxations. Specifically for Mn/Z disorder, the mean bond lengths show an asymmetry, reflecting the larger size of the Z atom, particularly in the case for Z = Ga. The relaxation energies of the disordered structures as given in Tab. <ref> can be satisfactorily reproduced by assuming independent contributions of 6meV due Ni/Co disorder, 6meV due Mn/Al disorder, and 18meV due Mn/Ga disorder, with the prominence of the latter value again due to the larger size of Ga.Due to the tetragonal arrangement of the Ni and Co atoms in and , the cubic symmetry is reduced to tetragonal, which is reflected also in the lattice parameters. Specifically, as reported in Tab. <ref>, c, the lattice constant along the fourfold tetragonal axis, is 3–4% larger than a for the structure, while it is about 2% smaller for . Indeed, this behavior is expected due to above-reported tendency of ⟨ 1,0,0⟩-coordinated equal elements in L2_1 (NiCo)MnZ to be pushed apart due to Pauli repulsion, while Ni-Co pairs are contracted. Further, the Wyckoff positions 2c in space group 129, which are occupied by Mn and Z in the structure, have an internal degree of freedom z correspond to a translation along the tetragonal axis. 
For NiCoMnAl, the parameters are z_Mn=-0.2556 and z_Al=0.2512, and for NiCoMnGa z_Mn=-0.2555 and z_Ga=0.2511, being practically the same in both compounds. With Ni in 2a at z=0 and Co in 2b at z=1/2, this means that Mn and Z are slightly shifted away from the Ni planes. Again, this is mirrored in the increased bond lengths of ⟨12,12,12⟩-coordinated Ni-Mn and Ni-Z pairs compared to Co-Mn and Co-Z pairs under disorder as given in the supplementary information. Thus, with these small tetragonal distortions and deviations of the internal degrees of freedom from the ideal values, it is clearly appropriate to consider also the tetragonal phases as superstructures on the bcc lattice.While the tetragonal distortions as mentioned above are on the order of a few percent, the differences in the unit cell volumes between the cubic and the tetragonal structures is much smaller. Indeed, we observe that there is a nearly perfect monotonic decrease of unit cell volume with internal energy of the structures: while the volume contraction with Mn/Z ordering by values of about 0.2% for NiCoMnAl and 0.6% for NiCoMnGa was expected, and also the bigger effect in the latter case can be rationalized by the larger Ga atoms, NaCl-type Ni/Co order, which was already found to be energetically unfavorable, leads to a lattice expansion by about 0.5% in both systems. In contrast, the energy gains with and are reflected in a corresponding volume contraction.Our theoretical results explain the experimental observations: as reported above, experimentally NiCoMnGa displays the B2 phase at high temperatures with a well-defined ordering transition to the L2_1 (NiCo)MnGa phase at lower temperatures, while for NiCoMnAl the transition temperature is reduced and only barely kinetically accessible. This is reproduced by our calculations, with L2_1 (NiCo)MnZ being the lowest-energy cubic phase, while B2 can be stabilized by entropy, with indeed a larger energy gain and thus a higher expected transition temperature for Z = Ga. Ni/Co ordering always increases the internal energy and decreases configurational entropy, thus L2_1 NiCo(MnZ) and Y NiCoMnZ are not predicted to be existing. Also, as the energy cost of disordering B2 to A2 is about five times larger than the gain of ordering to L2_1 (NiCo)MnZ, with the latter happening around 1000K, we do not expect a transition to A2 in the stability range of the solid.On the other hand, also the L2_1 (NiCo)MnZ phase is only stabilized by entropy, and thus should transform at some temperature to . With an argumentation as above, where the B2–L2_1 and the L2_1–transitions have the same entropy difference, but the latter's energy difference of 42meV is about a factor of 4–5 lower, we predict a transition temperature around 250K (see the supplementary material for a more detailed discussion of these issues). As already the B2–L2_1 transition in NiCoMnAl at around 850K is only barely progressing, a bulk transition to is therefore not to be expected on accessible timescales. Extrapolating the lattice constants measured on cooling to T=0K gives 5.777Å for NiCoMnGa and 5.766Å for NiCoMnAl. The deviation of about 0.04Å to the calculated values for L2_1 (NiCo)MnZ is quite satisfactory, corresponding to a relative error of 0.7%. Of course, the difference in lattice constants between the two alloys should be predicted even much more accurately, giving 0.015Å to be compared to the value of 0.011Å determined experimentally. 
Further, the predicted contraction at the B2–L2_1 transition in NiCoMnGa of 0.012Å agrees perfectly with the experimental value as obtained by integrating the excess thermal expansion coefficient between 1000 and 1200K, while in NiCoMnAl the contraction with ordering is estimated as 0.003Å by the differences in the heating and cooling curves evaluated at 600K and 900K to be compared with the predicted value of 0.005Å. These two small discrepancies imply that the experimental lattice constant of NiCoMnAl at low temperatures on cooling is increased compared to the theoretical predictions, which is consistent with a reduced degree of L2_1 long-range order in NiCoMnAl due to kinetical reasons. §.§.§ MagnetismFrom the non-integer values of the total magnetic moment per formula unit listed in Tab. <ref>, one can already conclude that neither of the structures yields the desired half-metallic properties. For the fully ordered Y structure, our calculated values for M_S are slightly lower than the values of 5.0<cit.> and 5.07<cit.> previously reported for NiCoMnAl and NiCoMnGa, and the integer moment of 5 μ_ B, which follows from the generalized Slater-Pauling ruleM_S=Z_v-24 for half-metallic full-Heusler compounds with Z_v=29 valence electrons per unit cell. The calculated M_S is even smaller for the other structures. Experimentally, we measured values between 4.47 and 4.58 for L2_1 (NiCo)MnGa, while for Z = Al an increase from 4.68 in the as quenched state to 4.89 after the longest annealing was observed. Thus, it seems that the dependence of M_S on the state of order in the intermediate states is more complicated than the situation captured by our calculation of the respective extremes, corresponding to a decrease from perfect Mn/Z disorder in the B2 case to perfect order in the L2_1 case.The values in Tab. <ref> imply that the magnetic moment per formula unit M_S depends primarily on the order on the Ni/Co sublattice, with values around 4.95 for NaCl-type order, 4.60 for columnar order or disorder, and 4.45 for planar order. The equilibrium unit cell volume V_0 grows with increasing M_S, with an additional lattice expansion in the cases of Mn/Z disorder. The induced Ni moments show the largest variation between the different structures, in absolute and relative numbers. The Co moments follow the behavior of the Ni moments with a smaller variation. This is a consequence of the hybridization of Co and Ni in the minority spin density of states, which is responsible for the formation of a gap-like feature at E_F as discussed in detail in the next section. The Mn moments appear well localized with values slightly above 3.1 and vary only by a tenth of a Bohr magneton.The ferromagnetic ground state of the compounds arises from the strong ferromagnetic coupling between nearest-neighbor Mn-Ni (coupling constant approximately 7meV) and, in particular, Mn-Co (coupling constant approximately 11.7meV) pairs. On the other hand, Mn pairs in ⟨ 1,0,0⟩ coordination, which randomly occur in the B2 case, exhibit large frustrated antiferromagnetic coupling (coupling constant approximately -8meV). This behavior is well known from ternary stoichiometric and off-stoichiometric Mn-based Heusler systems.<cit.>A more detailed account of the coupling constants for NiCoMnZ is given in the supplementary material. Thus for the low-energy structures, which do not exhibit Mn pairs with negative coupling constant, we expect the magnetic ordering temperature T_C to be significantly higher than in the B2 case. 
This agrees nicely with the significant increase under annealing observed for NiCoMnAl, while NiCoMnGa is already L2_1-ordered in the as quenched state and thus has still a higher transition temperature.§.§.§ Electronic Structure The shape of the electronic density of states (DOS) of ternary L2_1 Heusler compounds of the type X_2YZ, including the appearance of a half-metallic gap, has been explained convincingly by Galanakis et al.<cit.> in terms of a molecular orbital picture. First, we consider the formation of molecular orbitals on the simple cubic sublattice occupied by atoms of type X. Here, the d_xy, d_yz and d_zx orbitals hybridize forming a pair of and molecular orbitals, while the d_x^2-y^2 and the d_3z^2-r^2 states form and molecular orbitals. The molecular orbitals of and symmetry can hybridize with the respective orbitals of the nearest neighbor on the Y-position (in the present case Mn), splitting up in pairs of bonding and anti-bonding hybrid orbitals. However, due to their symmetry, no partner for hybridization is available for the and orbitals, which therefore remain sharp. Accordingly, these orbitals are dubbed “non-bonding”.If the band filling is adjusted such that the Fermi level is located between the and states in one spin channel, the compound can become half-metallic. This is for instance the case for Co_2MnGe with Z_v=29, which has according to the generalized Slater–Pauling rule M_S=5 μ_B.<cit.> If additional valence electrons are made available, also the states may become occupied. This is the case for Ni_2MnGa andNi_2MnAl (Z_v=30), which do not possess half-metallic properties. Here, the Ni-states form a sharp peak just below the Fermi energy and gives rise to a band-Jahn-Teller mechanism leading to a martensitic transformation and modulated phases arising from strong electron-phonon coupling due to nesting features of the Fermi surface <cit.>. Consequently, the magnetic moments of these compounds are significantly smaller. First principles calculations report values of 3.97 – 4.22 <cit.> and 4.02 – 4.22<cit.> for Ni_2MnAl and Ni_2MnGa, respectively.Figure <ref> shows the total and element-resolved electronic densities of states (DOS) of NiCoMnGa for the most relevant structures, which have the same valence electron concentration as the half metal Co_2MnGe (NiCoMnAl shows an analogous picture and can be found in the supporting material). Here, the perfectly alternating NaCl-type order of the elements on the Ni-Co sublattice in the Y-structure enforces a complete hybridization of the Ni- and Co-states, since the atoms find only ⟨ 1,0,0⟩ neighbors of the other species. This becomes apparent from the pertinent illustration, as essentially the same features are present in the partial density of states of both elements. The magnitude of a specific peak may, however, be larger for one or the other species. This can be understood from the concept of covalent magnetism<cit.> which has been applied to Heusler alloys recently.<cit.> The molecular orbitals are occupied by each species with a weight scaling inversely with the energy difference to the constituting atomic levels. In the minority spin channel, the bonding molecular orbitals are dominated by Ni-states, while the non-bonding states around -1eV and the anti-bonding orbitals above E_F are dominated by the Co-states. The non-bonding orbitals directly above E_F are equally shared by Co and Ni states. As expected, with decreasing order the features of the DOS smear out and become less sharp. 
Specifically the pseudo-gap at the Fermi level in one spin channel, which corresponds to the near half-metallic behavior suggested first by Entel et al. <cit.>, and subsequently by Alijani et al. <cit.>, Singh et al. <cit.> and Halder et al. <cit.>, is in particular sensitive to ordering on the Ni-Co sublattice, and only encountered for the NaCl-type ordering of the fully ordered Y and the partially ordered L2_1 NiCo(MnZ) structures.In fact, the minority spin gap is not complete. A close inspection of the band structure (see supporting material) of Y NiCoMnGa/Al clearly shows several bands crossing the Fermi level. Since this occurs in the immediate vicinity of the Γ point, the weight of the respective states in the Brillouin zone is small and a gap-like feature appears in the DOS. Thus, in this configuration the compound should be classified as a half-semimetal rather than a half-metal. Nearly perfect gaps are observed if Z is a group IV element with a half-filled sp-shell. In our case, the missing electron of the main group element has to be compensated by the additional valence electron from one of the transition metals. These are only available on parts of the X-sites, which can lead to a distribution of sp-states between the sharp and states of the transition metals below and above E_F.In all other structures, the Ni-Co sublattice contains neighboring pairs of the same element. In this case, the respective d-orbitals can hybridize independently at different levels. As a consequence, the and molecular orbitals split up. This is best seen in the DOS of the structure. Here, the Ni-dominated part of the former peak moves to below the Fermi level (where we expect it in Ni_2MnGa/Al), which creates considerable DOS right at E_F and fully destroys the half-metallic character. An analogous argument can be applied to the cubic L2_1 (NiCo)MnZ and B2 structures with disorder on the Ni-Co sublattice (Fig. <ref>c and d). The disorder on the Mn-Z sublattice in the B2 phase causes only minor changes in the electronic structure and manifests mainly in a larger band width of the valence states and the disappearance of a pronounced peak at -3.2eV in the minority channel, which originates from the hybridization between the Ni-Co and Mn-Z sublattice. In contrast, the distribution of the Ni and Co states near the Fermi level, which are decisive for the functional properties of this compound family, is not significantly changed compared to the L2_1 case.§ CONCLUSIONSEmploying in-situ neutron diffraction, magnetic measurements and calorimetry, we studied the ordering tendencies in the quaternary Heusler derivatives NiCoMnAl and NiCoMnGa. NiCoMnGa was found to display an L2_1 (NiCo)MnGa structure with strong Mn/Ga order and no to minor Ni/Co ordering tendencies, where the degree of order achieved upon slow cooling was higher than in quenched samples. The B2–L2_1 second-order phase transition was observed at 1160K. NiCoMnAl after quenching was found to adopt the B2 structure, while on slow cooling from high temperatures broadened L2_1 reflections were observed to emerge at temperatures below 850K in neutron diffractometry. Yet, kinetics at these temperatures are so slow that the adjustment of large degrees of L2_1 order in this compound is kinetically hindered. 
Still, low-temperature annealing treatments at 623K in samples quenched from 1273K showed a strong effect on the magnetic transition temperatures, proving that this parameter probes sensitively the state of order in the sample.Density functional theory reproduces the experimentally observed trends of the order-dependent magnetic behavior and of the ordering tendencies between the two systems. Our calculations reveal that the fully ordered Y structure with F43m symmetry is thermodynamically not accessible, since the partially disordered L2_1 phase is lower in energy. Instead, we propose as the ground state a tetragonal structure with a planar arrangement of Ni and Co. This structure is stable against decomposition into the ternary Heusler compounds, but we expect the energetic advantage to be too small to compensate for the larger entropy of the L2_1 phase at reasonable annealing conditions. However, the fabrication of this structure by layered epitaxial growth on appropriately matching substrates, which favor the slight tetragonal distortion, could be possible. From the electronic density of states and band structure, we could conclude that neither of the structures is half-metallic in the strict definition. This specifically pertains also to the hypothetical Y structure, which exhibits several bands crossing the Fermi level close to the Γ point in the minority spin channel, and is thus a half-semimetal.Since the first quaternary Heusler derivatives adopting the Y structure have been proposed to possess half-metallic properties<cit.>, the interest in these materials has developed rapidly with numerous publications dealing with the topic.<cit.> Density-functional theory calculations have been used to identify promising systems among the NiCo-,<cit.> NiFe-<cit.> and CoFe-based<cit.> compounds. However, in most cases the phase stability of the Y structure is tested, if at all, only against stacking order variations of this Y structure (see, for instance, Refs. Alijani2011b,Dai2009) but rarely against disorder<cit.> or other states of order. Simultaneously, experimental investigations as a rule either point towards disordered structures <cit.> or, specifically for the case of X-ray diffraction on ordering between transition metal elements, cannot decide these issues.<cit.>Based on our findings, we conclude that at least in the NiCo-based, but probably also in the NiFe- and CoFe-based alloys, the stability of the Y-structure is doubtful and, even if it was thermodynamically stable, might still not be kinetically accessible in most quaternary Heusler derivatives. Indeed, preliminary first-principles results show that also for the NiFeMnGa and the CoFeMnGa alloys, the tetragonal order is lower in energy than the Y structure by 62 meV/f.u and 80 meV/f.u., respectively. This underlines that a detailed analysis of phase stabilities in those systems that have been identified as promising half-metals, especially with respect to the tetragonal structures and/or L2_1 type disorder, is essential in order to evaluate their actual potential. 
More generally, the comparatively small energetical differences between the various possible types of order along with small disordering energies specifically with respect to the late transition metal constituents as obtained here suggest that in these quaternary Heusler derivatives disorder could be the norm rather than the exception in physical reality.§ ACKNOWLEDGMENTSThis work was funded by the Deutsche Forschungsgemeinschaft (DFG) within the Transregional Collaborative Research Center TRR 80 “From electronic correlations to functionality”. P.N. acknowledges additional support from the Japanese Society for the Promotion of Science (JSPS) via a short-term doctoral scholarship for research in Japan. We thank O. Dolotko and A. Senyshyn of the MLZ for facilitating the neutron diffraction measurements. SQUID measurements were performed at the Center for Low Temperature Science, Institute for Materials Research, Tohoku University. Computing resources for the supercell calculations were kindly provided by the Center for Computational Sciences and Simulation (CCSS) at University of Duisburg-Essen on the supercomputer magnitUDE (DFG grants INST 20876/209-1 FUGG and INST 20876/243-1 FUGG).87 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Heusler(1903)]heuslerdpg1903 author author F. Heusler, in @noopbooktitle Verhandlungen der Deutschen Physikalischen Gesellschaft, Vol. volume 5(year 1903) p. pages 219NoStop [Graf et al.(2011)Graf, Felser, and Parkin]Graf2011 author author T. Graf, author C. Felser,andauthor S. S. Parkin, 10.1016/j.progsolidstchem.2011.02.001 journal journal Prog. Solid State Chem. volume 39, pages 1 (year 2011)NoStop [Sozinov et al.(2002)Sozinov, Likhachev, Lanska, andUllakko]Sozinov2002 author author A. Sozinov, author A. A. Likhachev, author N. Lanska, and author K. Ullakko, 10.1063/1.1458075 journal journal Appl. Phys. Lett. volume 80, pages 1746 (year 2002)NoStop [Krenke et al.(2005)Krenke, Duman, Acet, Wassermann, Moya, Mañosa, and Planes]Krenke2005 author author T. Krenke, author E. Duman, author M. Acet, author E. F. Wassermann, author X. Moya, author L. Mañosa,and author A. Planes, 10.1038/nmat1395 journal journal Nature Mater. volume 4, pages 450 (year 2005)NoStop [Jullière(1975)]Julliere1975 author author M. Jullière, 10.1016/0375-9601(75)90174-7 journal journal Phys. Lett. A volume 54, pages 225(year 1975)NoStop [de Groot et al.(1983)de Groot, Mueller, van Engen, andBuschow]Groot1983 author author R. A. de Groot, author F. M. Mueller, author P. G. van Engen,and author K. H. J. Buschow, 10.1103/PhysRevLett.50.2024 journal journal Phys. Rev. Lett. volume 50, pages 2024 (year 1983)NoStop [Ishida et al.(1982)Ishida, Akazawa, Kubo, and Ishida]Ishida1982 author author S. Ishida, author S. Akazawa, author Y. Kubo,and author J. Ishida, 10.1088/0305-4608/12/6/012 journal journal J. Phys. F: Met. Phys. volume 12, pages 1111 (year 1982)NoStop [Fujii et al.(1990)Fujii, Sugimura, Ishida, and Asano]Fujii1990 author author S. Fujii, author S. Sugimura, author Ishida,and author S. Asano, 10.1088/0953-8984/2/43/004 journal journal J. Phys.: Condens. Matter volume 2, pages 8583 (year 1990)NoStop [Jourdan et al.(2014)Jourdan, Minár, Braun, Kronenberg, Chadov, Balke, Gloskovskii, Kolbe, Elmers, Schönhense, Ebert, Felser, andKläui]Jourdan2014 author author M. Jourdan, author J. Minár, author J. 
Braun, author A. Kronenberg, author S. Chadov, author B. Balke, author A. Gloskovskii, author M. Kolbe, author H. J.Elmers, author G. Schönhense, author H. Ebert, author C. Felser, and author M. Kläui, 10.1038/ncomms4974 journal journal Nat. Commun. volume 5 (year 2014),10.1038/ncomms4974NoStop [Halder et al.(2015)Halder, Mukadam, Suresh, and Yusuf]Halder2015 author author M. Halder, author M. D. Mukadam, author K. G. Suresh,andauthor S. M. Yusuf, 10.1016/j.jmmm.2014.10.107 journal journal J. Magn. Magn. Mater. volume 377,pages 220(year 2015)NoStop [Alijani et al.(2011a)Alijani, Winterlik, Fecher, Naghavi, andFelser]Alijani2011 author author V. Alijani, author J. Winterlik, author G. H. Fecher, author S. S. Naghavi,and author C. Felser, 10.1103/PhysRevB.83.184428 journal journal Phys. Rev. B volume 83, pages 184428 (year 2011a)NoStop [Özdoğan et al.(2013)Özdoğan, Şaşıoğlu, andGalanakis]Ozdougan2013 author author K. Özdoğan, author E. Şaşıoğlu,and author I. Galanakis, 10.1063/1.4805063 journal journal J. Appl. Phys. volume 113, pages 193903 (year 2013)NoStop [Okubo et al.(2011)Okubo, Xu, Umetsu, Kanomata, Ishida, and Kainuma]Okubo2011 author author A. Okubo, author X. Xu, author R. Y. Umetsu, author T. Kanomata, author K. Ishida,and author R. Kainuma, 10.1063/1.3559536 journal journal J. Appl. Phys. volume 109, pages 07B114 (year 2011)NoStop [Kanomata et al.(2009)Kanomata, Kitsunai, Sano, Furutani, Nishihara, Umetsu, Kainuma, Miura, and Shirai]Kanomata2009 author author T. Kanomata, author Y. Kitsunai, author K. Sano, author Y. Furutani, author H. Nishihara, author R. Y. Umetsu, author R. Kainuma, author Y. Miura,and author M. Shirai, 10.1103/PhysRevB.80.214402 journal journal Phys. Rev. B volume 80, pages 214402 (year 2009)NoStop [Recarte et al.(2012)Recarte, Pérez-Landazábal, and Sánchez-Alarcos]Recarte2012 author author V. Recarte, author J. I. Pérez-Landazábal,and author V. Sánchez-Alarcos, 10.1016/j.jallcom.2011.11.053 journal journal J. Alloys Comp. volume 536, pages S5308 (year 2012)NoStop [Barandiaran et al.(2013)Barandiaran, Chernenko., Cesari, Salas, Gutierrez, and Lazpitza]Barandiaran2013b author author J. M. Barandiaran, author V. A. Chernenko., author E. Cesari, author D. Salas, author J. Gutierrez,and author P. Lazpitza, 10.1088/0953-8984/25/48/484005 journal journal J. Phys.: Condens. Matter volume 25, pages 484005 (year 2013)NoStop [Monroe et al.(2015)Monroe, Raymond, Xu, Nagasako, Kainuma, Chumlyakov, Arroyave, and Karaman]Monroe2015 author author J. A. Monroe, author J. E. Raymond, author X. Xu, author N. Nagasako, author R. Kainuma, author Y. I. Chumlyakov, author R. Arroyave,and author I. Karaman, 10.1016/j.actamat.2015.08.049 journal journal Acta Mater. volume 101, pages 107 (year 2015)NoStop [Çakır et al.(2016)Çakır, Acet, and Farle]Cakir2016 author author A. Çakır, author M. Acet,and author M. Farle, 10.1038/srep28931 journal journal Sci. Rep. volume 6, pages 28931 (year 2016)NoStop [Ruban and Abrikosov(2008)]Ruban2008 author author A. V. Ruban and author I. A. Abrikosov, 10.1088/0034-4885/71/4/046501 journal journal Rep. Prog. Phys. volume 71, pages 046501 (year 2008)NoStop [Note1()]Note1 note We neglect here the martensitic transitions below room temperature.Stop [Sánchez-Alarcos et al.(2007)Sánchez-Alarcos, Recarte, Pérez-Landazábal, and Cuello]SanchezAlarcos2007 author author V. Sánchez-Alarcos, author V. Recarte, author J. I. Pérez-Landazábal,and author G. J.Cuello, 10.1016/j.actamat.2007.03.001 journal journal Acta Mater. 
volume 55, pages 3883(year 2007)NoStop [Acet et al.(2002)Acet, Duman, Wassermann, Mañosa, and Planes]Acet2002 author author M. Acet, author E. Duman, author E. F. Wassermann, author L. Mañosa,andauthor A. Planes, 10.1063/1.1504498 journal journal J. Appl. Phys. volume 92, pages 3867 (year 2002)NoStop [Webster(1971)]websterjphyschemsolids1971 author author P. J. Webster, 10.1016/S0022-3697(71)80180-4 journal journal J. Phys. Chem. Solids volume 32, pages 1221 (year 1971)NoStop [Note2()]Note2 note We do not consider the other two B2 possibilities (NiMn)(CoZ) and (NiZ)(CoMn).Stop [Pauly et al.(1968)Pauly, Weiss, and Witte]Pauly1968 author author H. Pauly, author A. Weiss,andauthor H. Witte, @noopjournal journal Z. Metallkd. volume 59, pages 47 (year 1968)NoStop [Bacon and Plant(1971)]Bacon1971 author author G. E. Bacon and author J. S. Plant, 10.1088/0305-4608/1/4/325 journal journal J. Phys. F: Met. Phys. volume 1, pages 524 (year 1971)NoStop [Eberz et al.(1980)Eberz, Seelentag, and Schuster]Eberz1980 author author U. Eberz, author W. Seelentag, and author H. Schuster, 10.1515/znb-1980-1103 journal journal Z. Naturforsch. B volume 35, pages 1341 (year 1980)NoStop [Johnson and Jeitschko(1974)]Johnson1974 author author V. Johnson and author W. Jeitschko, http://dx.doi.org/10.1016/0022-4596(74)90111-X journal journal J. Solid State Chem. volume 11, pages 161(year 1974)NoStop [Neibecker et al.(2014)Neibecker, Leitner, Benka, andPetry]Neibecker2014 author author P. Neibecker, author M. Leitner, author G. Benka,and author W. Petry, 10.1063/1.4905223 journal journal Appl. Phys. Lett. volume 105, pages 261904 (year 2014)NoStop [Hoelzel et al.(2015)Hoelzel, Senyshyn, and Dolotko]SPODI2015 author author M. Hoelzel, author A. Senyshyn, and author O. Dolotko, 10.17815/jlsrf-1-24 journal journal J. Large-Scale Res. Facil. volume 1, pages A5 (year 2015)NoStop [Hoelzel et al.(2012)Hoelzel, Senyshyn, Juenke, Boysen, Schmahl, and Fuess]Hoelzel2012 author author M. Hoelzel, author A. Senyshyn, author N. Juenke, author H. Boysen, author W. Schmahl,and author H. Fuess, 10.1016/j.nima.2011.11.070 journal journal Nucl. Instrum. Methods A volume 667, pages 32(year 2012)NoStop [Ishikawa et al.(2008)Ishikawa, Umetsu, Kobayashi, Fujita, Kainuma, and Ishida]Ishikawa2008 author author H. Ishikawa, author R. Y. Umetsu, author K. Kobayashi, author A. Fujita, author R. Kainuma,and author K. Ishida, 10.1016/j.actamat.2008.05.034 journal journal Acta Mater. volume 56, pages 4789 (year 2008)NoStop [Umetsu et al.(2011)Umetsu, Ishikawa, Kobayashi, Fujita, Ishida, and Kainuma]Umetsu2011 author author R. Y. Umetsu, author H. Ishikawa, author K. Kobayashi, author A. Fujita, author K. Ishida,and author R. Kainuma, 10.1016/j.scriptamat.2011.03.014 journal journal Scripta Mater. volume 65, pages 41 (year 2011)NoStop [Kainuma et al.(2000)Kainuma, Gejima, Sutou, Ohnuma, and Ishida]Kainuma2000 author author R. Kainuma, author F. Gejima, author Y. Sutou, author I. Ohnuma,and author K. Ishida, 10.2320/matertrans1989.41.943 journal journal Mat. Trans. JIM volume 41, pages 943 (year 2000)NoStop [Ziebeck and Webster(1975)]Ziebeck1975 author author K. R. A.Ziebeck and author P. J.Webster, 10.1088/0305-4608/5/9/015 journal journal J. Phys. F: Met. Phys. volume 5, pages 1756 (year 1975)NoStop [Kresse and Furthmüller(1996)]VASP1 author author G. Kresse and author J. Furthmüller, 10.1103/PhysRevB.54.11169 journal journal Phys. Rev. 
B volume 54, pages 11169 (year 1996)NoStop [Liechstenstein et al.(1987)Liechstenstein, Katsnelson, Antropov, andGubanov]Liechtenstein1987 author author A. I. Liechstenstein, author M. I. Katsnelson, author V. P. Antropov,and author V. A. Gubanov, 10.1016/0304-8853(87)90721-9 journal journal J. Magn. Magn. Mater. volume 67, pages 65 (year 1987)NoStop [Ebert et al.(2011)Ebert, Ködderitzsch, and Minár]Ebert2011 author author H. Ebert, author D. Ködderitzsch,and author J. Minár, 10.1088/0034-4885/74/9/096501 journal journal Rep. Prog. Phys. volume 74, pages 096501 (year 2011)NoStop [Kresse and Joubert(1999)]VASP2 author author G. Kresse and author D. Joubert, 10.1103/PhysRevB.59.1758 journal journal Phys. Rev. B volume 59,pages 1758 (year 1999)NoStop [Perdew et al.(1996)Perdew, Burke, and Ernzerhof]Perdew1996 author author J. P. Perdew, author K. Burke, and author M. Ernzerhof, 10.1103/PhysRevLett.77.3865 journal journal Phys. Rev. Lett. volume 77, pages 3865 (year 1996)NoStop [Methfessel and Paxton(1989)]Methfessel1989 author author M. Methfessel and author A. T. Paxton, 10.1103/PhysRevB.40.3616 journal journal Phys. Rev. B volume 40,pages 3616 (year 1989)NoStop [Blöchl et al.(1994)Blöchl, Jepsen, and Andersen]Bloechl1994 author author P. E. Blöchl, author O. Jepsen, and author O. K. Andersen,10.1103/PhysRevB.49.16223 journal journal Phys. Rev. B volume 49, pages 16223 (year 1994)NoStop [Şaşıoğlu et al.(2004)Şaşıoğlu, Sandratskii, and Bruno]Sasioglu2004 author author E. Şaşıoğlu, author L. M.Sandratskii,and author P. Bruno, 10.1103/PhysRevB.70.024427 journal journal Phys. Rev. B volume 70, pages 024427 (year 2004)NoStop [Kurtulus et al.(2005)Kurtulus, Dronskowski, Samolyuk, andAntropov]Kurtulus2005 author author Y. Kurtulus, author R. Dronskowski, author G. D. Samolyuk,and author V. P. Antropov, 10.1103/PhysRevB.71.014425 journal journal Phys. Rev. B volume 71, pages 014425 (year 2005)NoStop [Şaşıoğlu et al.(2005)Şaşıoğlu, Sandratskii, Bruno, and Galanakis]Sasioglu2005 author author E. Şaşıoğlu, author L. M.Sandratskii, author P. Bruno,and author I. Galanakis, 10.1103/PhysRevB.72.184415 journal journal Phys. Rev. B volume 72, pages 184415 (year 2005)NoStop [Rusz et al.(2006)Rusz, Bergqvist, Kudrnovský, and Turek]Rusz2006 author author J. Rusz, author L. Bergqvist, author J. Kudrnovský,andauthor I. Turek, 10.1103/PhysRevB.73.214412 journal journal Phys. Rev. B volume 73, pages 214412 (year 2006)NoStop [Buchelnikov et al.(2008)Buchelnikov, Entel, Taskaev, Sokolovskiy, Hucht, Ogura, Akai, Gruner, and Nayak]Buchelnikov2008 author author V. D. Buchelnikov, author P. Entel, author S. V. Taskaev, author V. V. Sokolovskiy, author A. Hucht, author M. Ogura, author H. Akai, author M. E.Gruner,and author S. K.Nayak, 10.1103/PhysRevB.78.184427 journal journal Phys. Rev. B volume 78, pages 184427 (year 2008)NoStop [Şaşıoğlu et al.(2008)Şaşıoğlu, Sandratskii, and Bruno]Sasioglu2008 author author E. Şaşıoğlu, author L. M.Sandratskii,and author P. Bruno, 10.1103/PhysRevB.77.064417 journal journal Phys. Rev. B volume 77, pages 064417 (year 2008)NoStop [Sokolovskiy et al.(2012)Sokolovskiy, Buchelnikov, Zagrebin, Entel, Sahoo, and Ogura]Sokolovskiy2012 author author V. V. Sokolovskiy, author V. D. Buchelnikov, author M. A. Zagrebin, author P. Entel, author S. Sahoo,and author M. Ogura, 10.1103/PhysRevB.86.134418 journal journal Phys. Rev. 
B volume 86, pages 134418 (year 2012)NoStop [Comtesse et al.(2014)Comtesse, Gruner, Ogura, Sokolovskiy, Buchelnikov, Grünebohm, Arróyave, Singh, Gottschall, Gutfleisch, Chernenko, Albertini, Fähler, and Entel]Comtesse2014 author author D. Comtesse, author M. E. Gruner, author M. Ogura, author V. V. Sokolovskiy, author V. D. Buchelnikov, author A. Grünebohm, author R. Arróyave, author N. Singh, author T. Gottschall, author O. Gutfleisch, author V. A.Chernenko, author F. Albertini, author S. Fähler,and author P. Entel, 10.1103/PhysRevB.89.184403 journal journal Phys. Rev. B volume 89,pages 184403 (year 2014)NoStop [Entel et al.(2014)Entel, Gruner, Comtesse, Sokolovskiy, and Buchelnikov]Entel2014 author author P. Entel, author M. E. Gruner, author D. Comtesse, author V. V. Sokolovskiy,and author V. D. Buchelnikov, 10.1002/pssb.201451059 journal journal phys. stat. sol. (b) volume 251, pages 2135 (year 2014)NoStop [Galanakis et al.(2002)Galanakis, Dederichs, and Papanikolaou]Galanakis2002 author author I. Galanakis, author P. H. Dederichs,and author N. Papanikolaou, 10.1103/PhysRevB.66.174429 journal journal Phys. Rev. B volume 66, pages 174429 (year 2002)NoStop [Galanakis et al.(2006)Galanakis, Mavropoulos, and Dederichs]Galanakis2006 author author I. Galanakis, author P. Mavropoulos,and author P. H. Dederichs, 10.1088/0022-3727/39/5/S01 journal journal J. Phys. D: Appl. Phys. volume 39, pages 765 (year 2006)NoStop [Picozzi et al.(2002)Picozzi, Continenza, and Freeman]Picozzi2002 author author S. Picozzi, author A. Continenza,and author A. J. Freeman,10.1103/PhysRevB.66.094421 journal journal Phys. Rev. B volume 66, pages 094421 (year 2002)NoStop [Brown et al.(1999)Brown, Bargawi, Crangle, Neumann,and Ziebeck]Brown1999 author author P. J. Brown, author A. Y. Bargawi, author J. Crangle, author K.-U. Neumann,and author K. R. A. Ziebeck, 10.1088/0953-8984/11/24/312 journal journal J. Phys.: Condens. Matter volume 11,pages 4715 (year 1999)NoStop [Lee et al.(2002)Lee, Rhee, and Harmon]Lee2002 author author Y. Lee, author J. Y. Rhee, and author B. N. Harmon,10.1103/PhysRevB.66.054424 journal journal Phys. Rev. B volume 66, pages 054424 (year 2002)NoStop [Bungaro et al.(2003)Bungaro, Rabe, and Dal Corso]Bungaro2003 author author C. Bungaro, author K. M. Rabe, and author A. Dal Corso, 10.1103/PhysRevB.68.134104 journal journal Phys. Rev. B volume 68, pages 134104 (year 2003)NoStop [Zayak et al.(2003)Zayak, Entel, Enkovaara, Ayuela,and Nieminen]Zayak2003 author author A. T. Zayak, author P. Entel, author J. Enkovaara, author A. Ayuela,and author R. M. Nieminen, 10.1103/PhysRevB.68.132402 journal journal Phys. Rev. B volume 68, pages 132402 (year 2003)NoStop [Opeil et al.(2008)Opeil, Mihaila, Schulze, Mañosa, Planes, Hults, Fisher, Riseborough, Littlewood, Smith, and Lashley]Opeil2008 author author C. P. Opeil, author B. Mihaila, author R. K. Schulze, author L. Mañosa, author A. Planes, author W. L. Hults, author R. A.Fisher, author P. S.Riseborough, author P. B.Littlewood, author J. L.Smith,and author J. C.Lashley, 10.1103/PhysRevLett.100.165703 journal journal Phys. Rev. Lett. volume 100, pages 165703 (year 2008)NoStop [Haynes et al.(2012)Haynes, Watts, Laverock, Major, Alam, Taylor, Duffy, andDugdale]Haynes2012 author author T. D. Haynes, author R. J. Watts, author J. Laverock, author Z. Major, author M. A. Alam, author J. W. Taylor, author J. A. Duffy,and author S. B. Dugdale, 10.1088/1367-2630/14/3/035020 journal journal New J. Phys. 
volume 14, pages 035020 (year 2012)NoStop [Ayuela et al.(1999)Ayuela, Enkovaara, Ullakko, and Nieminen]Ayuela1999 author author A. Ayuela, author J. Enkovaara, author K. Ullakko,andauthor R. Nieminen, 10.1088/0953-8984/11/8/014 journal journal J. Phys.: Condens. Matter volume 11,pages 2017 (year 1999)NoStop [Godlevsky and Rabe(2001)]Godlevsky2001 author author V. V. Godlevsky and author K. M. Rabe, 10.1103/PhysRevB.63.134407 journal journal Phys. Rev. B volume 63,pages 134407 (year 2001)NoStop [Fujii et al.(1989)Fujii, Ishida, and Asano]Fujii1989 author author S. Fujii, author S. Ishida, and author S. Asano, 10.1143/JPSJ.58.3657 journal journal J. Phys. Soc. Japan volume 58, pages 3657 (year 1989)NoStop [Ayuela et al.(2002)Ayuela, Enkovaara, and Nieminen]Ayuela2002 author author A. Ayuela, author J. Enkovaara, and author R. Nieminen, 10.1088/0953-8984/14/21/307 journal journal J. Phys.: Condens. Matter volume 14,pages 5325 (year 2002)NoStop [Gruner et al.(2008)Gruner, Adeagbo, Zayak, Hucht, Buschmann, and Entel]Gruner2008 author author M. E. Gruner, author W. A. Adeagbo, author A. T. Zayak, author A. Hucht, author S. Buschmann,and author P. Entel, 10.1140/epjst/e2008-00675-1 journal journal Eur. Phys. J. Special Topics volume 158, pages 193 (year 2008)NoStop [Williams et al.(1981)Williams, Zeller, Moruzzi, Gelatt, and Kübler]Williams1981 author author A. R. Williams, author R. Zeller, author V. L. Moruzzi, author C. D. Gelatt, Jr.,and author J. Kübler,10.1063/1.329617 journal journal J. Appl. Phys. volume 52, pages 2067 (year 1981)NoStop [Schwarz et al.(1984)Schwarz, Mohn, Blaha, and Kübler]Schwarz1984 author author K. Schwarz, author P. Mohn, author P. Blaha,and author J. Kübler, 10.1088/0305-4608/14/11/021 journal journal J. Phys. F: Met. Phys. volume 14, pages 2659 (year 1984)NoStop [Mohn(2003)]Mohn2006 author author P. Mohn, @nooptitle Magnetism in the Solid State, series Springer Series in Solid-State Sciences, Vol.volume 134 (publisher Springer, address Berlin, Heidelberg, year 2003)NoStop [Dannenberg et al.(2010)Dannenberg, M. Siewert, Wuttig, andEntel]Dannenberg2010 author author A. Dannenberg, author M. E. G. M. Siewert, author M. Wuttig,and author P. Entel, 10.1103/PhysRevB.82.214421 journal journal Phys. Rev. B volume 82, pages 214421 (year 2010)NoStop [Entel et al.(2010)Entel, Gruner, Dannenberg, Siewert, Nayak, Herper, and Buchelnikov]Entel2010 author author P. Entel, author M. E. Gruner, author A. Dannenberg, author M. Siewert, author S. K. Nayak, author H. C. Herper,and author V. D. Buchelnikov, 10.4028/www.scientific.net/MSF.635.3 journal journal Mater. Sci. Forum volume 635, pages 3 (year 2010)NoStop [Singh et al.(2012)Singh, Saini, and Kashyap]Singh2012 author author M. Singh, author H. S. Saini, and author M. K. Kashyap,10.4028/www.scientific.net/AMR.585.270 journal journal Adv. Mater. Res. volume 585, pages 270 (year 2012)NoStop [Dai et al.(2009)Dai, Liu, Fecher, Felser, Li, and Liu]Dai2009 author author X. Dai, author G. Liu, author G. H. Fecher, author C. Felser, author Y. Li,and author H. Liu, 10.1063/1.3062812 journal journal J. Appl. Phys. volume 105, pages 07E901 (year 2009)NoStop [Gökoğlu(2012)]Goekoglu2012 author author G. Gökoğlu, 10.1016/j.solidstatesciences.2012.07.013 journal journal Solid State Sci. volume 14, pages 1273(year 2012)NoStop [Al-zyadi et al.(2015)Al-zyadi, Gao, and Yao]Alzyadi2015 author author J. M. K.Al-zyadi, author G. Y.Gao,and author K.-L.Yao, 10.1016/j.jmmm.2014.11.012 journal journal J. Magn. Magn. Mater. 
volume 378, pages 1 (year 2015)NoStop [Wei et al.(2015)Wei, Zhang, Chu, Sun, Sun, Guo, and Deng]Wei2015 author author X.-P. Wei, author Y.-L. Zhang, author Y.-D. Chu, author X.-W. Sun, author T. Sun, author P. Guo,and author J.-B.Deng, 10.1016/j.jpcs.2015.03.003 journal journal J. Phys. Chem. Solids volume 82, pages 28(year 2015)NoStop [Mukadam et al.(2016)Mukadam, Roy, Meena, Bhatt, and Yusuf]Mukadam2016 author author M. D. Mukadam, author S. Roy, author S. S. Meena, author P. Bhatt,and author S. M. Yusuf, 10.1103/PhysRevB.94.214423 journal journal Phys. Rev. B volume 94, pages 214423 (year 2016)NoStop [Alijani et al.(2011b)Alijani, Ouardi, Fecher, Winterlik, Naghavi, Kozina, Stryganyuk, Felser, Ikenaga, Yamashita, Ueda,and Kobayashi]Alijani2011b author author V. Alijani, author S. Ouardi, author G. H. Fecher, author J. Winterlik, author S. S. Naghavi, author X. Kozina, author G. Stryganyuk, author C. Felser, author E. Ikenaga, author Y. Yamashita, author S. Ueda,and author K. Kobayashi, 10.1103/PhysRevB.84.224416 journal journal Phys. Rev. B volume 84, pages 224416 (year 2011b)NoStop [Gao et al.(2013)Gao, Hu, Yao, Luo, andLiu]Gao2013 author author G. Y. Gao, author L. Hu, author K. L. Yao, author B. Luo,and author N. Liu, 10.1016/j.jallcom.2012.11.077 journal journal J. Alloys Comp. volume 551, pages 539(year 2013)NoStop [Zhang et al.(2014)Zhang, Liu, Li, Ma, andLiu]Zhang2014 author author Y. J. Zhang, author Z. H. Liu, author G. T. Li, author X. Q. Ma,and author G. D. Liu, 10.1016/j.jallcom.2014.07.165 journal journal J. Alloys Comp. volume 616, pages 449(year 2014)NoStop [Bainsla et al.(2014)Bainsla, Suresh, Nigam, Manivel Raja, Varaprasad, Takahashi, andHono]Bainsla2014 author author L. Bainsla, author K. G. Suresh, author A. K. Nigam, author M. Manivel Raja, author B. S. D. C. S. Varaprasad, author Y. K. Takahashi,and author K. Hono, 10.1063/1.4902831 journal journal J. Appl. Phys. volume 116, pages 203902 (year 2014)NoStop [Xiong et al.(2014)Xiong, Yi, and Gao]Xiong2014 author author L. Xiong, author L. Yi,andauthor G. Y. Gao, 10.1016/j.jmmm.2014.02.050 journal journal J. Magn. Magn. Mater. volume 360,pages 98(year 2014)NoStop [Enamullah et al.(2015)Enamullah, Venkateswara, Gupta, Varma, Singh, Suresh, andAlam]Enamullah2015 author author Enamullah, author Y. Venkateswara, author S. Gupta, author M. R. Varma, author P. Singh, author K. G.Suresh,and author A. Alam, 10.1103/PhysRevB.92.224413 journal journal Phys. Rev. B volume 92, pages 224413 (year 2015)NoStop [Feng et al.(2015)Feng, Chen, Yuan, Zhou, andChen]Feng2015 author author Y. Feng, author H. Chen, author H. Yuan, author Y. Zhou,and author X. Chen, 10.1016/j.jmmm.2014.11.028 journal journal J. Magn. Magn. Mater. volume 378, pages 7(year 2015)NoStop [Gao et al.(2015)Gao, Li, Lei, Deng, andHu]Gao2015 author author Q. Gao, author L. Li, author G. Lei, author J.-B. Deng,and author X.-R. Hu, 10.1016/j.jmmm.2014.12.025 journal journal J. Magn. Magn. Mater. volume 379, pages 288(year 2015)NoStop [Elahmar et al.(2015)Elahmar, Rached, Rached, Khenata, Murtaza, Bin Omran, andAhmed]Elahmar2015 author author M. H. Elahmar, author H. Rached, author D. Rached, author R. Khenata, author R. Murtaza, author S. Bin Omran,and author W. K. Ahmed, 10.1016/j.jmmm.2015.05.019 journal journal J. Magn. Magn. Mater. volume 393, pages 165(year 2015)NoStop [Enamullah et al.(2016)Enamullah, Johnson, Suresh, andAlam]Enamullah2016 author author Enamullah, author D. D. Johnson, author K. G. Suresh,and author A. 
Alam, 10.1103/PhysRevB.94.184102 journal journal Phys. Rev. B volume 94, pages 184102 (year 2016)NoStop [Berri et al.(2014)Berri, Maouche, Ibrir, and Zerarga]Berri2014 author author S. Berri, author D. Maouche, author M. Ibrir,and author F. Zerarga, 10.1016/j.jmmm.2013.10.044 journal journal J. Magn. Magn. Mater. volume 354, pages 65(year 2014)NoStop
http://arxiv.org/abs/1704.08100v1
{ "authors": [ "Pascal Neibecker", "Markus E. Gruner", "Xiao Xu", "Ryosuke Kainuma", "Winfried Petry", "Rossitza Pentcheva", "Michael Leitner" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170426133030", "title": "Ordering tendencies and electronic properties in quaternary Heusler derivatives" }
INAF, Osservatorio Astronomico di Bologna, via Gobetti 93/3, 40129 Bologna, Italy Dipartimento di Astronomia, Università degli Studi di Bologna, via Gobetti 93/2, 40129 Bologna, Italy INAF, Osservatorio Astronomico di Brera, via Brera 28, 20121 Milano, Italy Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA Department of Physics, 104 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA

We present a systematic analysis of X-ray archival data for all 29 quasars (QSOs) at z > 5.5 observed so far with Chandra, XMM-Newton and Swift-XRT, including the most distant quasar ever discovered, ULAS J1120+0641 (z = 7.08). This study allows us to place constraints on the mean spectral properties of the primordial population of luminous Type 1 (unobscured) quasars. Eighteen quasars are detected in the X-ray band, and we provide spectral-fitting results for their X-ray properties, while for the others we provide upper limits to their soft (0.5-2.0 keV) X-ray flux. We measured the power-law photon index and derived an upper limit to the column density for the five quasars (J1306+0356, J0100+2802, J1030+0524, J1148+5251, J1120+0641) with the best spectra (> 30 net counts in the 0.5-7.0 keV energy range) and find that they are consistent with values from the literature and with lower-redshift quasars. By stacking the spectra of ten quasars detected by Chandra in the redshift range 5.7 ≤ z ≤ 6.1 we find a mean X-ray power-law photon index of Γ = 1.92_-0.27^+0.28 and a neutral intrinsic absorption column density of N_H ≤ 10^23 cm^-2. These results suggest that the X-ray spectral properties of luminous quasars have not evolved up to z ≈ 6. We also derived the optical-X-ray spectral slopes (α_ox) of our sample and combined them with those of previous works, confirming that α_ox strongly correlates with the UV monochromatic luminosity at 2500 Å. These results strengthen the non-evolutionary scenario for the spectral properties of luminous active galactic nuclei (AGN).

The X-ray properties of z∼6 luminous quasars
R. Nanni 1,2 C. Vignali 1,2 R. Gilli 1 A. Moretti 3 W. N. Brandt 4,5,6
================================================================================================================================================================================

§ INTRODUCTION

Active galactic nuclei (AGN) are among the best probes of the primordial Universe at the end of the dark ages. Studying the properties of z ∼ 6 quasars is important for understanding the formation and early evolution of supermassive black holes (SMBHs) and their interaction with the host galaxy. The presence of SMBHs of 10^8-10^9 M_⊙, observed in quasars (QSOs) up to z = 6-7 (e.g., ; ), and hence formed in less than 1 Gyr, is a challenge for modern astrophysics. In order to explain these SMBH masses, accretion of gas must have proceeded almost continuously close to the Eddington limit, with fairly low radiative efficiency (η < 0.1). The seeds of the observed SMBHs could either be the remnants of PopIII stars (100 M_⊙; e.g., ), or more massive (10^4-10^6 M_⊙) BHs formed from the direct collapse of primordial gas clouds (e.g., ). In the case of lower-mass seeds (PopIII stars), super-Eddington accretion is likely required to form the black-hole masses of z ∼ 6 QSOs (e.g., ; ; ).
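The growth-time argument above can be made quantitative: for continuous Eddington-limited accretion with radiative efficiency η, the black-hole mass grows exponentially with an e-folding (Salpeter) time of ≈ 0.45 η/(1-η) Gyr. The following Python sketch evaluates this standard estimate for the seed and final masses quoted above; the duty-cycle parameter is an illustrative knob of ours, not a quantity used in this paper:

```python
import numpy as np

def growth_time(m_seed, m_final, eta=0.1, duty_cycle=1.0):
    """Time (Gyr) to grow a black hole from m_seed to m_final (solar
    masses) by Eddington-limited accretion with radiative efficiency
    eta; the e-folding time is 0.45 * eta / (1 - eta) Gyr."""
    t_fold = 0.45 * eta / (1.0 - eta)
    return t_fold * np.log(m_final / m_seed) / duty_cycle

# PopIII remnant seed (100 Msun) vs. direct-collapse seed (1e5 Msun)
for m_seed in (1e2, 1e5):
    print(m_seed, growth_time(m_seed, 1e9))
# -> ~0.81 Gyr and ~0.46 Gyr: a 100 Msun seed barely fits in the
#    < 1 Gyr available at z ~ 6, which is why massive seeds or
#    super-Eddington phases are invoked.
```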
As of today, 198 QSOs have been discovered at redshift z > 5.5 with wide-area optical and IR surveys (e.g., ; ; ; ; ; ; ). In particular, wide-area near-IR surveys are now pushing the QSO redshift frontier to z > 6.4. Eight of the 198 QSOs were selected using Spectral Energy Distribution (SED) model fitting to photometric data, and then spectroscopically confirmed (). Only a few of these 198 QSOs have been studied through their X-ray emission (e.g., ; ; ; ; ; ; ). These studies showed that the X-ray spectral properties of high-redshift quasars do not differ significantly from those of AGN at lower redshift. This is generally consistent with observations showing that the broad-band SEDs and the rest-frame IR/optical/UV spectra of quasars have not significantly evolved over cosmic time (e.g., ; ), with a few notable exceptions in the IR band (e.g., ). In this work we provide a systematic analysis of all the X-ray data available for the 29 out of the 198 QSOs that were observed by Chandra, XMM-Newton, and Swift-XRT, in order to derive the general properties of accretion onto SMBHs at very high redshift. While the X-ray spectral properties of z < 5 quasars are now well established, the situation for quasars at the highest redshifts is not so clear. Here we present the most up-to-date and complete X-ray study of the population of quasars in the redshift range 5.5 ≤ z ≤ 7.1, which allows us to place constraints on the X-ray properties of primordial AGN.

The paper is organized as follows. In 2 we describe the X-ray archival data and their reduction procedure. The data analysis is presented in 3, where we also provide a detailed spectral study for those sources with higher photon statistics (> 30 net counts, i.e., background-subtracted counts, in the 0.5-7.0 keV energy band). In 4 we discuss the mean X-ray properties of our sample, and in 5 we provide estimates of the optical-X-ray spectral slope. In 6 we give a summary of our results. Throughout this paper we assume H_0 = 70 km s^-1 Mpc^-1, Ω_Λ = 0.7, and Ω_M = 0.3 ().

§ SAMPLE SELECTION AND DATA REDUCTION

To study the X-ray properties of the population of AGN at high redshift (z > 5.5), we started from the most up-to-date compilation of 198 luminous high-redshift quasars (181 from ; eight from ; nine from ) and cross-correlated it with all the available archival data from Chandra, XMM-Newton, and Swift-XRT. The majority of these 198 AGN were spectroscopically identified in optical and NIR surveys and are classified as Type 1 AGN. From the cross-correlation we found that 29 sources have archival X-ray observations: 21 QSOs have been observed by Chandra, while 12 have XMM-Newton observations; J0100+2802, J1030+0524, J1120+0641, J1148+5253 and J1148+5251 were observed by both telescopes. One additional source has been observed by Swift-XRT with a relatively deep exposure. We also note that a further ten objects fall within Swift-XRT fields observed for only ∼5 ks each. We did not consider them in this work, as no useful constraints can be derived on their X-ray properties. None of these 29 sources comes from either the Chandra Deep Field North () or South (), or from the COSMOS survey (). These three deep fields have no sources with spectroscopic redshift above 5.5 (see  and  for the Chandra Deep Fields, and  for the COSMOS survey). More generally, there are no X-ray selected sources with spectroscopic redshift > 5.5. We reprocessed all the 21 Chandra sources using the Chandra software CIAO v.
4.8, with the faint or vfaint mode for the event telemetry format, according to the corresponding observation. Data analysis was carried out using only the events with ASCA grades 0, 2, 3, 4 and 6. We extracted the number of counts from circular regions centered at the optical position of every source. We used a radius of 2", corresponding to 95% of the encircled energy fraction (EEF) at 1.5 keV, for the on-axis cases (θ < 1'), and of 10" for the off-axis extractions, corresponding to at least 90% of the EEF at 1.5 keV. Fifteen of the 21 Chandra QSOs were the targets of the X-ray observations, while the other six were serendipitously observed at large off-axis angles (θ > 1'). The background spectra were extracted from adjacent circular regions, free of sources, with an area ten times larger. In order to assess whether a source could be considered detected in the X-ray band, we computed the Poisson probability (P_P) of reproducing a number of counts equal to or above the value extracted in the source region (in the 0.5-7.0 keV energy range), given the background counts expected in the source area. We considered as detected those sources showing a detection probability of > 99.7% (> 3σ). We found that the 15 on-axis QSOs are detected (P_P > 3σ) in the 0.5-7.0 keV X-ray band. One of the six off-axis sources (RD J1148+5253) is also detected in the X-ray band, with low statistics (∼3 counts; see 3.2 of  for a detailed investigation of the detection significance), so, in the end, we found that 16 out of 21 sources (including J1148+5253) are detected.

The XMM EPIC data were processed using the Science Analysis Software (SAS v. 15) and filtered for high-background time intervals; for each observation and camera we extracted the 10-12 keV light curves and filtered out the time intervals where the light curve was 3σ above the mean. For the scientific analysis we considered only events corresponding to patterns 0-12 and patterns 0-4 for the MOS1/2 and pn, respectively. Because of the higher background level of XMM, we extracted the counts from circular regions centered at the optical position of the QSOs with a radius of 10" for on-axis sources, corresponding to 55% of the EEF at 1.5 keV, and of 30" for off-axis positions, corresponding to at least 40% of the EEF at 1.5 keV. The background was extracted using the same approach adopted for the Chandra data. We then computed the Poisson detection probability, as we did for the Chandra quasars, for all the sources. In this case we found that the five on-axis sources (the targets of the corresponding observations) were detected in the X-ray band at > 3σ, while seven sources were observed at large off-axis angles and are undetected in the X-ray band (they are serendipitously observed).

For the source observed by Swift-XRT, data reduction and spectrum extraction were performed using the standard software (HEADAS software v. 6.18) and following the procedures described in the instrument user guide.[http://heasarc.nasa.gov/docs/swift/analysis/documentation] Given the limited number of photons, in order to optimize the ratio between signal and background, we restricted our analysis to a circular region of 10" radius, including ∼50% of the flux according to the instrumental point spread function (PSF) full width at half maximum (FWHM) (). The ancillary response file (ARF) was calculated accordingly by the xrtmkarf task. In Table 1 we report all the information relating to the X-ray observations of the 29 QSOs.
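As an illustration of the detection criterion applied above to both the Chandra and the XMM photometry, the Poisson probability P_P of obtaining at least the observed counts from background fluctuations alone can be evaluated with standard library routines. This is a minimal sketch; the count values in the example are hypothetical, not entries from Table 2:

```python
from scipy.stats import poisson

def detection_confidence(src_counts, bkg_counts, area_ratio=10.0):
    """P_P is the Poisson probability of drawing >= src_counts in the
    source aperture when only background is present; area_ratio is
    (background area)/(source area). Returns 1 - P_P, the detection
    confidence to compare with the 99.7% (3 sigma) threshold."""
    mu_bkg = bkg_counts / area_ratio          # expected background counts in the source region
    p_p = poisson.sf(src_counts - 1, mu_bkg)  # P(X >= src_counts | mu_bkg)
    return 1.0 - p_p

# hypothetical example: 8 counts in the source region and 20 counts in
# a background region with ten times the area
print(detection_confidence(8, 20) > 0.997)    # True -> source is detected
```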
We show in Figure 1 the redshift distribution of all the 198 QSOs known at z > 5.5 (black histogram) and the distribution of those observed in the X-rays (red shaded histogram). The blue shaded histogram shows the redshift distribution of the 18 QSOs detected. We display the X-ray cutouts of the 18 detected sources in Figure 2.

§ X-RAY ANALYSIS OF THE SAMPLE

In Table 2 we present the number of counts in the total (0.5-7.0 keV), soft (0.5-2.0 keV), and hard (2.0-7.0 keV) bands for all the sources; for the undetected QSOs we provide upper limits to the number of counts at the 3σ confidence level. For Chandra sources, these upper limits were computed using the srcflux tool of CIAO, which extracts source and background counts from a circular region, centered at the source position, that contains 90% of the PSF at 1 keV. For the XMM undetected sources we used the sosta command of the XIMAGE software, extracting source and background counts from circular regions with radius r = 10" and r = 30", respectively. Table 2 also includes the hardness ratio (HR), computed as HR = (H - S)/(H + S), where H and S are the net counts in the hard (2-7 keV) and soft (0.5-2.0 keV) bands, respectively. In Figure 3 we report the net counts of all sources as a function of redshift; upper limits correspond to undetected QSOs. It is evident that the majority of the detected sources have < 30 net counts. For the 12 sources with > 10 counts we attempted an X-ray spectral fit, while we used the tool PIMMS for those QSOs detected with < 10 net counts and for those undetected, in order to derive their basic X-ray properties. We "grouped" the spectra ensuring a minimum of one count for each bin, and the best fit was calculated using the Cash statistic[With a binning of one count for each bin the empty channels are avoided, and so the C-stat value is independent of the number of counts. Consequently, the distribution of the C-stat/d.o.f. is centered at ∼1. See Appendix A of <cit.>.], except for J1306+0356, for which we used a grouping of 20 counts per bin and the χ^2 statistic because of its large number of net counts (∼125). We modeled these spectra with an absorbed power-law, using XSPEC v. 12.9 (). The absorption term takes into account both the Galactic absorption (shown in Table 1) and the source intrinsic obscuration. In the fit we fixed the value of the photon index to Γ = 1.9, which is a typical value found for Type 1 AGN at lower redshift (e.g., ). We list in Table 3 the basic parameters derived from the spectral fits. Errors are reported at the 90% confidence level unless specified otherwise. We also fit the five spectra of the sources with the highest counting statistics (> 30 net counts) using the same model described above, but with Γ free to vary. We present these results in the next sub-section.

§.§ Analysis of the five QSOs with the best photon statistics

In this section we show the results obtained from our analysis of the five quasars with the best counting statistics (> 30 net counts): J0100+2802, J1030+0524, J1120+0641, J1148+5251 and J1306+0356. In all the fit models we included a Galactic-absorption component, which was kept fixed during the fit.

SDSS J1306+0356 (z = 6.02). This is the only quasar detected by Chandra with more than 100 net counts in the 0.5-7.0 keV band. The target was observed in two different periods and has two different data-sets. In order to improve the fit quality we combined the two data-sets, obtaining a spectrum with ∼125 net counts.
In the fit we used a grouping of 20 counts per bin, in order to use the χ^2 statistic, and we were able to fit the spectrum with a model in which the photon index Γ was left free to vary. We fit the spectrum with a power-law model at the redshift of the quasar. The spectrum and its best-fit model and residuals are shown in Figure 4 (a). Throughout the paper, residuals are in terms of sigmas with error bars of size one. In the case of the Cash statistic, they are defined as (data - model)/error, where the error is calculated as the square root of the model-predicted number of counts. The best-fit photon index is Γ = 1.72_-0.52^+0.53 with χ^2 = 2.2 for 3 degrees of freedom. Such a value of Γ is consistent with those found for luminous AGN at lower redshift (1 ≤ z ≤ 5.5; e.g., ; ; ). We then added an absorption component at the redshift of the quasar and obtained an upper limit of N_H < 2.2 · 10^23 cm^-2.

SDSS J1030+0524 (z = 6.31). This quasar was observed by both Chandra and XMM. The short Chandra exposure detected this source with ∼6 net counts, while the much longer XMM observation (see Table 1) detected this source with ∼148 net counts. We used a grouping of 1 count for each bin for all spectra of the three cameras, and we fit the three EPIC spectra (pn, MOS1 and MOS2) with a power-law model and an intrinsic-absorption component at the redshift of the quasar. The spectrum and its best-fit model and residuals are shown in Figure 4 (b). The best-fit photon index is Γ = 2.39_-0.46^+0.55 with C-stat = 21.6 for 18 degrees of freedom. This value of Γ is consistent with the one found by Farrah et al. (2004; Γ = 2.27_-0.31^+0.31). We also found an upper limit to the column density of N_H < 1.9 · 10^23 cm^-2.

SDSS J1148+5251 (z = 6.42). This quasar was observed by both observatories; Chandra detected this source with ∼37 net counts, thus allowing us to fit its data. We used a grouping of 1 count for each bin, and we fit the spectrum with a simple power-law model. The spectrum and its best-fit model and residuals are shown in Figure 4 (c). The best-fit photon index is Γ = 1.59_-0.57^+0.61 with C-stat = 20.9 for 33 degrees of freedom. This value of Γ is consistent with the one found by Gallerani et al. (2017; Γ = 1.6_-0.49^+0.49).

SDSS J0100+2802 (z = 6.30). This is the latest quasar observed by both Chandra and XMM. The Chandra exposure detected this source with ∼12 net counts, while a total of ∼155 net counts were collected by XMM. Fitting a power-law to the Chandra spectrum, we obtained Γ = 3.0_-0.8^+1.2, which is consistent with the one found by Ai et al. (2016; Γ = 3.03_-0.70^+0.78), but this measurement is uncertain, with very large errors. For the XMM spectrum, we used a grouping of 1 count for each bin for all spectra of the three cameras, and we fit the three EPIC spectra (pn, MOS1 and MOS2) with a power-law model and an intrinsic-absorption component at the redshift of the quasar. The spectrum and its best-fit model and residuals are shown in Figure 4 (d). The best-fit photon index is Γ = 2.33_-0.29^+0.32 with C-stat = 233.5 for 254 degrees of freedom. This value of Γ is consistent with the one found by Ai et al. (2016) and with the one we derived from the Chandra analysis, but is less uncertain. We also found an upper limit to the column density of N_H < 2.1 · 10^23 cm^-2.

ULAS J1120+0641 (z = 7.08). This is another quasar observed by both Chandra and XMM (which observed it in three different orbits).
We summed together the six MOS spectra and the three pn spectra, and then summed the two combined MOS spectra, so as to increase the fit quality, using a binning of 1 count per bin. Chandra detected this source with ∼6 net counts, and we were not able to fit its data, while XMM detected this source with ∼34 net counts. We fit the two EPIC spectra with a power-law model and compared our result with those available in the literature (Moretti et al. 2014; Page et al. 2014). The spectrum and its best-fit model and residuals are shown in Figure 4 (e). The best-fit photon index is Γ = 2.24_-0.48^+0.55 with C-stat = 391.1 for 364 degrees of freedom. Such a value of Γ is halfway between those found by Page et al. (2014; Γ = 2.64_-0.54^+0.61) and Moretti et al. (2014; Γ = 1.98_-0.43^+0.46) and is consistent with both of them within the errors.

§ MEAN X-RAY PROPERTIES OF THE MOST DISTANT QUASARS

Obtaining accurate values of the X-ray spectral properties, such as the power-law photon index and the intrinsic absorption column density, for most individual sources in this work is hindered by the small numbers of detected photons. To date, only five QSOs at z > 5.5 (the five presented in 3.1) have counting statistics sufficient to allow accurate measurements of the X-ray spectral properties (Farrah et al. 2004; Moretti et al. 2014; Page et al. 2014; ). Our knowledge of the X-ray spectral properties of quasars at z > 5.5 therefore relies mainly on the joint spectral fitting of samples of these sources (Vignali et al. 2005; Shemmer et al. 2006; Just et al. 2007). We first selected the 16 quasars detected by Chandra and made a joint spectral-fitting analysis using 15 QSOs, excluding the data-set with ∼100 net counts of J1306+0356 (but keeping its data-set with more limited statistics) and the spectrum of J1148+5251, due to their relatively high statistics (> 30 net counts). In all fits we used the Cash statistic, and the errors are reported at the 90% confidence level. We fit these 15 spectra with a power-law model and associated its value of redshift and Galactic absorption to each source. We found a mean photon index of Γ = 1.93_-0.29^+0.30 (C-stat = 223.1 for 151 d.o.f.), which is consistent with those found in previous works. As a further test, we stacked all the Chandra spectra from the detected sources with similar redshift, obtaining two combined spectra, one from sources with 5.7 ≤ z ≤ 6.1 (10 QSOs) and one from sources with 6.2 ≤ z ≤ 6.5 (5 QSOs), excluding from the sum the spectrum of J1120+0641, because of its very high redshift, and the data-set of J1306+0356 with a high number of counts. This separation into two redshift bins limits errors caused by summing spectral channels that correspond to different rest-frame energies. The lower-redshift stack has an average redshift of z = 5.92 and 130 net counts. The one at higher redshift has an average redshift of z = 6.30 and 66 net counts. We used XSPEC to fit the two spectra with a simple power-law[In these cases we included a Galactic absorption component, which was kept fixed at a mean N_H value during the fit.] and derived a mean photon index of Γ = 1.92_-0.27^+0.28 (C-stat = 48.3 for 91 d.o.f.) for the lower-redshift spectrum (Figure 5, left). This value is consistent with the mean photon indices obtained by jointly fitting spectra of luminous and unobscured quasars at lower redshift (1 ≤ z ≤ 5.5; e.g., Vignali et al. 2005; Shemmer et al. 2006; Just et al.
2007) and is also consistent with the values predicted by theory (the power-law spectrum is produced by inverse-Compton processes caused by the interaction of hot-corona electrons with optical/UV photons from the accretion disk; typical values are Γ ∼ 1.8-2.1; Haardt & Maraschi 1991, 1993). In Figure 5 (right panel) we report the mean photon indices for QSO samples at different redshifts, derived from joint fitting or stacking analyses. We did not find any significant evolution of the AGN photon index with redshift up to z ∼ 6, and the only two values measured at higher redshift (J1030+0524 at z = 6.31 by Farrah et al. 2004 and J1120+0641 at z = 7.08 by Moretti et al. 2014) are consistent with this non-evolutionary trend.

We note that, at z ∼ 6, we are sampling rest-frame energies in the range 3.5-49 keV. In this band, a hardening of AGN spectra is often observed because of the so-called "Compton-reflection hump", that is, radiation from the hot corona that is reprocessed by the accretion disk, which peaks at ∼30 keV. However, the mean photon index we derived does not differ from the typical value of Type 1 AGN, suggesting that the presence of the Compton-reflection component is not significant in our sample, as is indeed observed for luminous QSOs (e.g., ; ). The individual photon indices we derived in 3.1 for the five sources with > 30 net counts are also consistent with typical values of luminous unobscured QSOs, again suggesting negligible Compton reflection. For the higher-redshift spectrum we obtained a photon index with poorer constraints than the previous one, Γ = 1.73_-0.40^+0.43 (C-stat = 51.0 for 55 d.o.f.), because of the smaller number of counts. This spectrum is characterized by a flatter power-law slope due to the presence of J1148+5251, which has a flatter photon index (see Gallerani et al. 2017). However, this value is still consistent, within the errors, with those present in the literature.

We then fit the two spectra with an absorbed power-law model with Γ frozen to 1.9. We found that N_H ≤ 8.9 · 10^22 cm^-2 for the former spectrum and N_H ≤ 5.0 · 10^23 cm^-2 for the latter. The limits on the mean column densities are consistent with the values found in the literature and indicate that the population of z > 5.5 luminous QSOs is not significantly obscured, as expected from their optical and NIR classification. Finally, we combined all the spectra used in the two stacking analyses, excluding J1148+5251, and fit them with a power-law model, obtaining a spectrum with 157 net counts and Γ = 1.83_-0.24^+0.25 (C-stat = 62.7 for 108 d.o.f.), fully consistent with the values previously reported.

§ X-RAY AND OPTICAL PROPERTIES OF THE SAMPLE

In Table 3 we provide all the X-ray properties we derived, as well as all the optical information available in the literature for our sample. The details of the Table columns are provided below.

Column (1). - The name of the quasar, taken from Bañados (2015) and Bañados et al. (2016).

Column (2). - The monochromatic apparent AB magnitude at the rest-frame wavelength λ = 1450 Å, taken from Bañados (2015).

Column (3). - The absolute magnitude at the rest-frame wavelength λ = 1450 Å, computed from m_1450.

Column (4). - The 2500 Å rest-frame luminosity, computed from the magnitudes in column (2), assuming a UV-optical power-law slope of α = -0.5 (e.g., Shemmer et al. 2006; Just et al. 2007).

Column (5). - The Galactic absorption-corrected flux in the observed-frame 0.5-2.0 keV band.
Fluxes were computed using XSPEC for detected sources with > 10 net counts, and using PIMMS[For each Chandra observation we set the response to that of the corresponding observing Cycle, to account for the effective-area degradation.] for QSOs with < 10 net counts and for those undetected (assuming a power-law with Γ = 1.9). Upper limits are at the 3σ level.

Column (6). - The luminosity in the rest-frame 2-10 keV band.

Column (7). - The optical-X-ray power-law slope, defined as

α_ox = log(f_2 keV/f_2500 Å)/log(ν_2 keV/ν_2500 Å),     (1)

where f_2 keV and f_2500 Å are the flux densities at rest-frame 2 keV and 2500 Å, respectively. The errors on α_ox were computed following the numerical method described in 1.7.3 of , taking into account the uncertainties in the X-ray counts and an uncertainty of 10% in the 2500 Å flux, corresponding to a mean z-magnitude error of ∼0.1.

Column (8). - Upper limits on the column density, derived from the spectral fitting for sources with > 10 net counts using a power-law model with Γ frozen to 1.9.

In Figure 6 we report the 0.5-2.0 keV flux versus the apparent magnitude at 1450 Å.

§.§ Source variability

The five sources with the highest statistics (3.1) have been observed with Chandra and XMM in different years, so we checked whether these QSOs varied their X-ray fluxes over time. J0100+2802, J1030+0524, J1120+0641 and J1148+5251 were observed and detected by both X-ray observatories, and we computed the variability significance using the fluxes reported in Table 3, while RD J1148+5253 is detected only by Chandra. For this source the upper limit on the flux derived from the XMM data is above the flux value derived from Chandra (see Table 3), so there is no clear evidence of variability. Also J1306+0356 was observed at two different epochs by Chandra, so, in this case, we computed the variability significance using the fluxes derived from the spectral fits of the two data-sets (f_0.5-2 keV = 2.7_-0.3^+0.4 and f_0.5-2 keV = 4.5_-0.5^+1.0, in units of 10^-15 erg cm^-2 s^-1). All the computed significances are below the 2σ level, so there is no clear evidence of flux variability in these five sources. These results are consistent with those found for lower-redshift sources (4.10 ≤ z ≤ 4.35) with comparable X-ray luminosities, and strengthen the idea that the X-ray variability does not increase with redshift (Shemmer et al., in prep.).

§.§ Multi-band information from the literature

QSOs with peculiar multi-band emission properties could be characterized by different emission or accretion processes that can also affect their X-ray spectra. For example, radio-loud AGN usually have X-ray spectra flatter than those of radio-quiet QSOs, because of the contribution from the jet (e.g., ). Thus, we checked whether there are any peculiar QSOs in our sample that also have peculiar X-ray properties linked to their different nature. First, we checked the VLA FIRST catalog () and the literature to derive information about the radio loudness (RL) of our sources, adopting the definition of <cit.>: RL = f_ν,5 GHz/f_ν,4400 Å, where f_ν,5 GHz is the 5 GHz radio rest-frame flux density and f_ν,4400 Å is the 4400 Å optical rest-frame flux density; a quasar is considered radio-loud if RL > 10. Assuming an average optical spectral index of α = -0.5, we extrapolated the optical rest-frame flux density at 4400 Å from the WISE W_1 (λ ∼ 3.4 μm) magnitude, when available, or from m_1450 otherwise.
Twenty-five sources have upper limits on their radio fluxes, two have not been observed by the VLA (J328.7339-09.5076 and J025.6821-33.4627), and two (J083643.8+005453.2 and J010013.0+280225.9) are detected, with RL ∼ 12 and ∼ 0.3, respectively. The first value is consistent with the one derived by <cit.> and indicates a moderate level of radio emission, which is not expected to significantly affect its X-ray spectrum (but see ). The two VLA-unobserved sources are also not observed by the NVSS. Summarizing, from the values of the derived RL parameters we found no clear indication of the presence of extreme radio-loud QSOs in our sample.

We also checked the literature for the presence of any Broad Absorption Line QSOs (BALQs), Weak-Line QSOs (WLQs) or Weak Infrared QSOs (sources with weak emission at ∼10 μm rest-frame, due to a possible lack of the torus emission component; ). In our sample five QSOs are classified as BALQs, six as WLQs and two as Weak-IR QSOs (see Table 1). WLQs are defined as quasars having rest-frame equivalent widths (EWs) of < 15.4 Å for the Lyα+N V emission-line complex (). This could be due either to an extremely high accretion rate, which may result in a relatively narrow UV-peaked SED () in which prominent high-ionization emission lines are suppressed (the so-called Baldwin effect; ), or to a significant deficit of line-emitting gas in the broad-emission-line region (). In our case the WLQ X-ray properties are consistent with those of non-WLQs (see Table 3).

§.§ Comparison of the optical properties with lower-redshift results

The optical-X-ray power-law slope (α_ox), defined in Equation 1 in 5, is expected to trace the relative importance of the disk versus the corona. Previous works have shown that there is a significant correlation between α_ox and the monochromatic luminosity L_2500 Å (α_ox decreases as L_2500 Å increases; ; ), whereas the apparent dependence of α_ox on redshift can be explained by a selection bias (; Vignali et al. 2003; ; Shemmer et al. 2006; Just et al. 2007; ; but see also ). We further examine the α_ox-L_2500 Å relationship by adding our sample of 29 high-redshift QSOs to previous measurements of α_ox. We have plotted α_ox versus L_2500 Å for all the X-ray quasars of our sample in Figure 7, including 1515 QSOs from lower-redshift analyses (X-ray selected: 529 from Lusso et al. 2010, 174 from ; optically selected: 743 from , 11 from , 13 from Vignali et al. 2005, 13 from Shemmer et al. 2006, 32 from ). We excluded eight sources from the original sample of Shemmer et al. (2006) because they are also present in our sample (our results for these eight sources are consistent with those derived by Shemmer et al. 2006), obtaining a final sample of 1544 QSOs. Our sample follows the correlation between α_ox and UV luminosity, with no detectable dependence on redshift. We performed linear regression on the data (13 of them have upper limits on α_ox) using the ASURV software package (), confirming and strengthening the finding of previous studies that α_ox decreases with increasing rest-frame UV luminosity. We found the best-fit relation between α_ox and L_2500 Å to be:

α_ox = (-0.155 ± 0.003) log(L_2500 Å) + (3.206 ± 0.103).     (2)

Errors are reported at the 1σ confidence level. This correlation is based on the largest number of QSOs available to date. These best-fit parameters are consistent with those derived by Just et al. (2007) and by Lusso et al. (2010). We note that the presence of our and the Shemmer et al.
(2006) samples improves the coverage at z ≈ 5-6, showing that our analysis supports the idea that luminous AGN SEDs have not significantly evolved out to very high redshift. We also obtained a best-fit relation excluding the X-ray selected data ( and ) and found that it is consistent with Equation 2.

§ SUMMARY AND CONCLUSIONS

We made a complete and uniform study of the X-ray properties of the most distant quasars, at z > 5.5. This is the most up-to-date analysis of the X-ray properties of early AGN. Our main results are the following:

* We started from a parent sample of 198 spectroscopically confirmed QSOs at z > 5.5 and considered the 29 objects that have been observed by Chandra, XMM-Newton, and Swift-XRT. Eighteen of them are detected in the X-ray band (0.5-7.0 keV).

* Five sources have counting statistics sufficient (> 30 net counts) to allow us to fit their spectra with a power-law model with Γ free to vary. For these quasars we obtained values of the photon index Γ ∼ 1.6-2.4, consistent with those present in the literature (Farrah et al. 2004; Moretti et al. 2014; Gallerani et al. 2017) and with those expected from theory (Haardt & Maraschi 1993).

* By performing a spectral stacking analysis we derived the mean photon index of the early AGN population. We divided our 15 Chandra-detected sources into two redshift bins: 5.7 ≤ z ≤ 6.1 (10 sources) and 6.2 ≤ z ≤ 6.5 (5 sources). We obtain Γ = 1.92_-0.27^+0.28 for the first stacked sub-sample and Γ = 1.73_-0.40^+0.43 for the second one. We do not find a significant change in Γ with cosmic time over the redshift range z ≈ 1.0-6.4. This means that, similarly to the optical properties (e.g., Mortlock et al. 2011; Barnett et al. 2013), the X-ray spectral properties of luminous QSOs also do not significantly evolve over cosmic time. The upper limits to the mean column density derived from the stacking analysis are N_H < 8.9 · 10^22 cm^-2 for the first sub-sample and N_H < 5.0 · 10^23 cm^-2 for the second one, showing that these luminous high-redshift QSOs are not significantly obscured, as expected from their optical classification as Type 1 AGN.

* Combining our sample with literature works, and thus using a statistically larger sample, we confirmed that the α_ox parameter depends on the UV monochromatic luminosity (a worked sketch of this quantity is given after the acknowledgments). The X-ray-to-optical flux ratios of luminous AGN have not significantly evolved up to z ∼ 6.

We acknowledge financial contribution from the agreement ASI-INAF I/037/12/0. W. N. B. thanks Chandra X-ray Center grant GO5-16089X and the Willaman Endowment for support. We thank G. Risaliti and E. Lusso for useful discussions.
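As a worked illustration of the α_ox values tabulated in Table 3 (see the note in the summary above), the sketch below evaluates Equation 1 from a rest-frame 2 keV flux density and an apparent magnitude m_1450, extrapolating to rest-frame 2500 Å with the α = -0.5 UV slope adopted in 5. This is a minimal sketch, not the paper's pipeline, and the input values in the example are hypothetical:

```python
import numpy as np

NU_2KEV = 2000.0 * 2.418e14   # 2 keV in Hz (1 eV <-> 2.418e14 Hz)
NU_2500 = 3e18 / 2500.0       # 2500 Angstrom in Hz

def f2500_from_m1450(m_1450, alpha_uv=-0.5):
    """Rest-frame 2500 A flux density (erg s^-1 cm^-2 Hz^-1) from the
    AB magnitude at 1450 A, extrapolated with f_nu ~ nu^alpha_uv."""
    f_1450 = 10.0 ** (-(m_1450 + 48.60) / 2.5)      # AB zero point
    return f_1450 * (1450.0 / 2500.0) ** alpha_uv

def alpha_ox(f_nu_2kev, f_nu_2500):
    """Equation 1: optical-X-ray slope from flux densities (same units)
    at rest-frame 2 keV and 2500 A; log10 of the frequency ratio is ~2.605."""
    return np.log10(f_nu_2kev / f_nu_2500) / np.log10(NU_2KEV / NU_2500)

# hypothetical example: m_1450 = 19.8 and f_nu(2 keV) = 5e-32 cgs
print(alpha_ox(5e-32, f2500_from_m1450(19.8)))      # ~ -1.56
```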
http://arxiv.org/abs/1704.08693v1
{ "authors": [ "R. Nanni", "C. Vignali", "R. Gilli", "A. Moretti", "W. N. Brandt" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170427180001", "title": "The X-ray properties of z$\\sim$6 luminous quasars" }
Self-organized critical behavior and marginality in Ising spin glasses

^1 Department of Physics, Indian Institute of Science Education and Research, Bhopal, India ^2 Department of Physics, Konkuk University, Seoul 143-701, Korea ^3 School of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK

We have studied numerically the states reached in a quench from various temperatures in the one-dimensional fully-connected Kotliar, Anderson and Stein Ising spin glass model. This is a model in which there are long-range interactions between the spins which fall off as a power σ of their separation. We have made a detailed study, in particular, of the energies of the states reached in a quench from infinite temperature and of their overlaps, including the spin glass susceptibility. In the regime where σ ≤ 1/2, where the model is similar to the Sherrington-Kirkpatrick model, we find that the spin glass susceptibility diverges logarithmically with increasing N, the number of spins in the system, whereas for σ > 1/2 it remains finite. We attribute the behavior for σ ≤ 1/2 to self-organized critical behavior, where the system after the quench is close to the transition between states which have trivial overlaps and those with the non-trivial overlaps associated with replica symmetry breaking. We have also found, by studying the distribution of local fields, that the states reached in the quench have marginal stability, but only when σ ≤ 1/2.

Keywords: spin glasses, self-organized criticality, marginal stability

Auditya Sharma^1, Joonhyun Yeo^2 and M A Moore^3
====================================================

§ INTRODUCTION

In this paper we have studied three topics related to deterministic quenches in a spin glass system. The spin glass system is that of the one-dimensional long-range Ising Hamiltonian introduced by Kotliar, Anderson and Stein (KAS) <cit.>, which serves as a proxy model for the short-range d-dimensional Edwards-Anderson model <cit.>. The KAS model has long-range interactions between the spins which fall off with a power σ of their separation distance. In a quench we start from an initial state, such as the fully equilibrated state at a temperature T, and then apply a deterministic algorithm, such as the "greedy", "polite" or sequential algorithm <cit.>, until a state is reached in which the energy cannot be lowered further by flipping just a single spin. Much of our investigation has been of the case where the initial state is at infinite temperature, so that spins are randomly ± 1, with the quench being performed with the sequential algorithm.

The first study is of the nature of the state reached in the quench, as revealed by the form of the Parisi overlap function P(q). Our finding here is that its form is determined by the nature of the initial state at temperature T. If T > T_c, where T_c is the equilibrium transition temperature of the spin glass system, then the final state has the trivial overlap of the paramagnetic state, P(q) = δ(q). When T < T_c its form after the quench resembles that of the initial state. We conclude that in a deterministic quench, the form of the initial state is imprinted onto the final quenched state.

The second study which we make is of the distribution of local fields p(h) in the quenched state. We review the argument of Anderson, reported in Ref.
<cit.>, for the form of p(h) at small fields, and find that our numerical data is consistent with the state generated in the quench having marginal stability in the regime where mean-field theory applies, that is for σ ≤ 1/2, but that outside this regime the quenched state is not marginal. The mean-field limit includes the Sherrington-Kirkpatrick (SK) model, which corresponds to the case σ = 0. The form of p(h) after a quench has been much studied for the Ising SK model <cit.>. The third topic studied is that of Self-Organized Criticality (SOC), which is the phenomenon where some large dissipative systems can be in a scale-invariant critical state but without any parameter being tuned to a critical value <cit.>. It is believed that it is behind the fractal features <cit.> associated with many phenomena, such as earthquakes, the meandering of seacoasts and the structure of galactic clusters. For equilibrium systems, scale-invariant behavior is usually only found at critical points where some parameter, e.g. temperature, is at its critical value T_c. However, over the years a number of examples have been found of SOC behavior in models which have very artificial dynamical rules, such as the sandpile model <cit.> and the forest fire model <cit.>. More recently, Andresen et al. <cit.> have found SOC features in the Sherrington-Kirkpatrick (SK) model of Ising spin glasses which were absent in the d-dimensional Edwards-Anderson (EA) spin glass models. The signature of SOC behavior for them was the size of the spin avalanches following a change in the applied field; only when there was a diverging number of neighbors, as in the SK model, were the avalanche sizes limited by the number of spins N in the system. Most studies of SOC behavior focus on dynamical features such as the size of avalanches etc. <cit.>. In this paper we have studied entirely static features: in particular the spin glass susceptibility calculated via the overlaps of the quenched states obtained from different initial states. We have found that it diverges logarithmically with the number of spins N in the system, provided that the exponent σ ≤ 1/2. We thus conclude that when σ ≤ 1/2 the system reaches a set of quenched states which are close to a critical energy. When σ > 1/2 the divergence of the susceptibility goes away, indicating that the quench does not then take the system to a critical state. It is thought <cit.> that systems with σ ≤ 1/2 behave just as the SK model. Our results therefore complement those in Ref. <cit.>, where it was found that SOC behavior was only present for the SK model, but was lacking in the d-dimensional EA models, which correspond to values of σ > 1/2 in the KAS model <cit.>. Furthermore, Gonçalves and Boettcher <cit.> studied avalanche sizes as a function of σ in the KAS model and concluded that σ = 1/2 was indeed the borderline value above which the avalanches changed their behavior as a function of system size N. An extensive study of avalanches in the SK model itself is in Refs. <cit.>. Our interest in the static aspects of SOC behavior was triggered by our previous studies <cit.> of vector spin glasses in the Sherrington-Kirkpatrick (SK) model. We found that the quench in those models reached metastable minima whose energy per spin E_c was very close to that calculated for the energy which separates minima with zero overlap with each other from those which have a full replica symmetry breaking overlap with each other <cit.>. In other words the quench takes one close to the critical energy which separates states with a trivial P(q) from those with a non-trivial P(q).
The same type of mean-field calculation fails in the Ising case, as the states reached in a quench are quite atypical of the set of all the metastable states of energy E. In the thermodynamic limit the energy per spin reached after the quench from a random initial state tends to a well-defined limit, dependent on the method used to flip the spins (e.g. the `polite', `greedy' or `sequential' algorithm etc. <cit.>). These observations are consistent with the rigorous arguments of Newman and Stein <cit.>. The states reached in the quench have a distribution p(h) of their local fields h_i = ∑_j J_ij S_j, with interactions J_ij among the spins S_i, which is linear in h at small fields, whereas for the totality of metastable states of energy E, p(0) is finite <cit.>. Our main finding is that for Ising spin glasses in the SK region σ ≤ 1/2, the energy of the system after the quench is close to the critical energy E_c which separates the metastable states of the kind produced in the quench, which have no overlap with each other, from those which would exist at lower energy and which would have full replica symmetry breaking overlaps. That is, Ising spin glasses with σ ≤ 1/2 behave very similarly to vector spin glasses, except that for Ising spin glasses the definition of E_c is not that for the set of all states of energy E, as for the case of vector spin glasses, but instead it is the critical energy for those states produced in the quench (which have a distribution of local fields p(h) ∼ h at small h). In Ref. <cit.> the nature of the ordering associated with the SOC behavior was not specified. If the quench is close to this critical energy one would expect there to be a divergent spin glass susceptibility; the definition and the study of this susceptibility is one of the main topics of this paper. It is a purely static quantity: the study of avalanches alone does not provide insights into the nature of the incipient ordering associated with the SOC. There is an important distinction between Ising and vector spin glasses. Edwards hypothesized (for a review see <cit.>) that systems like powders or sand piles etc. could be understood not by solving the full dynamics of the system from its initial state to its final resting state (which is hard), but instead by determining for these systems the analogue of the number of states in spin glasses in which the spins are parallel to their local fields (which is easy) <cit.>. For vector spin glasses in the SK limit, his hypothesis has utility. It fails completely for modelling quenches in the Ising SK spin glass, as it is only by a full dynamical treatment that one can obtain a p(h) which is linear in h <cit.>. In Sec. <ref> we introduce the KAS model. In Sec. <ref> we investigate how the quenched state depends on the initial state. In Sec. <ref> we study the distribution of the local fields p(h) of the quenched state, and from its form deduce that marginality only exists when σ ≤ 1/2. The existence of SOC behavior is deduced from a study of the N dependence of the spin glass susceptibility and the energy of the quenched state in Sec. <ref>.

§ THE MODEL

The Kotliar, Anderson and Stein (KAS) <cit.> Hamiltonian is ℋ = -∑_⟨ i, j ⟩ J_ij S_i S_j, where the Ising spins S_i (i = 1, 2, ⋯, N), taking values ±1, are arranged in a circle of perimeter N. The geometric distance between sites i and j is r_ij = (N/π) sin[π(i-j)/N], the length of the chord between the sites i, j. The interactions J_ij are long-ranged and depend on the distance r_ij as J_ij = c(σ,N) ε_ij/r_ij^σ, where ε_ij is a Gaussian random variable of mean zero and unit variance.
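For concreteness, a minimal numpy sketch of how such couplings can be generated is given below (the function name is illustrative only); the coefficient c(σ,N) is fixed numerically by the normalisation introduced next, which sets the mean-field transition temperature to unity.

import numpy as np

def kas_couplings(N, sigma, rng):
    # chord distances from site 0 to sites 1..N-1 on the ring of perimeter N
    d0 = (N / np.pi) * np.sin(np.pi * np.arange(1, N) / N)
    # c(sigma, N) from the normalisation c^2 * sum_{j != i} r_ij^(-2 sigma) = 1
    c = np.sum(d0 ** (-2.0 * sigma)) ** -0.5
    i, j = np.triu_indices(N, k=1)
    r = (N / np.pi) * np.sin(np.pi * (j - i) / N)   # chord length between i and j
    J = np.zeros((N, N))
    J[i, j] = c * rng.standard_normal(i.size) / r ** sigma
    return J + J.T                                  # symmetric couplings, zero diagonal

A sample of disorder is then obtained as, e.g., J = kas_couplings(512, 0.75, np.random.default_rng(1)).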
The coefficient c(σ,N) is chosen to make the mean-field transition temperature T_c^MF equal to unity for all values of σ: [T^MF_SG(c)]^2 = (1/N) ∑_{i ≠ j} [J_ij^2]_av = c(σ,N)^2 ∑_{j=2}^N 1/r_{1j}^{2σ} = 1. The sum over j can be done for large N and gives 1/c(σ,N)^2 = 2ζ[2σ] + (Γ[1/2-σ]^2 / (2^{2σ} Γ[1-2σ])) (N/π)^{1-2σ} + O(1/N^2). Thus when σ < 1/2, c(σ,N) ∼ [1/N^{1/2-σ}](1 + O(1/N^{1-2σ})), while for σ > 1/2, c(σ,N) ∼ (1 + O(1/N^{2σ-1})). We have found, when studying the energy per spin E reached in the quench, that there is a finite size correction for σ > 1/2 of O(1/N^{2σ-1}), whose origin is that E ∝ c(σ,N). Such corrections to scaling are large, especially when σ is close to 1/2. Note that this correction to scaling is associated with the zero temperature fixed point, rather than the critical fixed point. It is only the discovery of this form for the leading correction to scaling that has enabled us to analyze our data. There is a mapping between σ and an effective dimensionality d_eff of the EA model <cit.>. For 1/2 < σ < 2/3, it is d_eff = 2/(2σ-1); thus σ = 2/3 corresponds to an effective dimensionality of 6. We generated one-spin-flip stable states by a quenching procedure that involves repeatedly flipping spins to orient them with their local fields, according to the sequential (as opposed to the `greedy' or `polite') algorithm. Previous work <cit.> suggests that, although a difference in the energy of the final state can be seen depending on the precise algorithm employed, the nature of the final state is independent of the algorithm. In the sequential algorithm used here, sites are scanned sequentially from 1 through N, and at each of them the spin is aligned with its local field, thus monotonically reducing the energy of the system. When a spin is flipped the local fields h_i are immediately updated. The protocol of repeatedly aligning spins is carried out until convergence is obtained. The initial state was either a random spin state, which corresponds to infinite temperature, or one of the spin configurations of an equilibrated system at temperature T.

§ DEPENDENCE OF THE PARISI OVERLAP P(Q) ON THE INITIAL STATE

The overlap between two minima A and B obtained after a quench is defined as q ≡ (1/N) ∑_i S_i^A S_i^B. Its distribution P(q) contains crucial information about the nature of the final state reached by the quench. In Fig. <ref> we have plotted the sample-averaged distribution function P(q) for systems of N = 256 spins obtained in quenches from various temperatures T. For a quench which starts at a temperature T > T_c, the resulting P(q) is trivial, in that it reduces in the large N limit to P(q) = δ(q) <cit.>. Such a form indicates that the states obtained in the quench are completely uncorrelated with each other. However, for quenches which start from a temperature T < T_c a non-trivial P(q) was found. One would expect that the P(q) obtained from a quench starting from T < T_c has the replica symmetry breaking or replica symmetry features expected for that σ value, whatever that might be. For σ > 1, T_c is expected to be zero, and our results at σ = 1.5, shown in the third panel, are consistent with the final state always being that expected from a quench which starts in the paramagnetic region.

§ MARGINALITY AND THE DISTRIBUTION OF LOCAL FIELDS P(H)

In this section we discuss the distribution of local fields h_i after the quench from infinite temperature. The magnitude of h_i after the quench is given by h_i = S_i ∑_j J_ij S_j, where S_i are the spins at the end of the quench.
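A minimal sketch of the sequential quench described above, producing both the final spins and these stability fields, reads as follows (an initial state rng.choice([-1.0, 1.0], N) corresponds to infinite temperature):

def sequential_quench(J, S):
    # sweep sites 0..N-1 in order, aligning each spin with its local field;
    # repeat until a full sweep produces no flip (one-spin-flip stable state)
    h = J @ S                               # local fields sum_j J_ij S_j
    flipped = True
    while flipped:
        flipped = False
        for i in range(S.size):
            if S[i] * h[i] < 0.0:           # spin i is misaligned: flip it
                S[i] = -S[i]
                h += 2.0 * S[i] * J[:, i]   # immediate update of all fields
                flipped = True
    return S, S * h                         # spins and stability fields h_i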
Notice that h_i > 0. What will interest us mostly is whether the form of p(h) provides any evidence for marginality, in the sense that the state which is reached is just on the edge of stability <cit.>. Our conclusion will be that the quenched state has indeed marginal stability if σ ≤ 1/2, but not if σ > 1/2. We shall use the argument of P. W. Anderson (as reported in Ref. <cit.>) to obtain a "bound" on the local field distribution p(h) for small h. For σ < 1/2, it is expected that p(h) = h/H^2 at small fields in the thermodynamic limit (see Fig. <ref>). In the state reached in the quench, relabel the sites in order of their increasing local field h_i and consider the first n of these sites, where 1 ≪ n ≪ N. Suppose one flips all n of the spins at these low-field sites: the consequent energy change is ΔE = 2 ∑_{i=1}^n h_i - 2 ∑_{i=1}^n ∑_{j=1}^n J_ij S_i S_j. ΔE would be non-negative if the initial state were the ground state. Biroli and Monasson <cit.> gave an argument that for the SK model any state (and not just the ground state) is stable against flipping a finite number of spins. Their argument was that ΔE_ij = 2h_i + 2h_j - 2J_ij S_i S_j, which corresponds to flipping just two spins, reduces to 2h_i + 2h_j in the large N limit, as J_ij goes to zero as 1/N^{1/2} when N → ∞ for the SK model. But h_i and h_j could themselves be of order O(1/√N), and excluding this possibility will give us a bound on the value of the coefficient H. The value of h_n can be obtained from solving n = N ∫_0^{h_n} dh h/H^2 = N h_n^2/(2H^2). The first term, ΔE_1 in Eq. (<ref>), is similarly ΔE_1 = 2 ∑_i^n h_i = 2N ∫_0^{h_n} dh h^2/H^2 = 2N h_n^3/(3H^2) = (4√2/3) H n √(n/N). For the case σ > 1/2, it is expected that marginality <cit.> requires p(h) = (1/H)(h/H)^{(1/σ)-1}. Eq. (<ref>) becomes n = σN (h_n/H)^{1/σ}, while Eq. (<ref>) becomes ΔE_1 = 2n (n/N)^σ H/((σ+1)σ^σ). The second term in Eq. (<ref>) can be rewritten as ΔE_2 = -2 ∑_{i=1}^n S_i Δ_i, where Δ_i = ∑_{j=1}^n J_ij S_j. Δ_i is a quantity which on average is zero. Its variance is (1/n) ∑_{i=1}^n Δ_i^2 = (1/n) ∑_{i=1}^n ∑_{j=1}^n J_ij S_j ∑_{k=1}^n J_ik S_k = (1/n) ∑_{i=1}^n ∑_{j=1}^n J_ij^2. Suppose now that the positions of the spins S_i, i = 1, 2, ⋯, n, are equally spaced so that R_i = iN/n; then the variance equals (n^{2σ}/N^{2σ}) c(σ,N)^2/c(σ,n)^2, which reduces to n/N when σ < 1/2 and to (n/N)^{2σ} when σ > 1/2. (c(σ,N) was defined in Eq. (<ref>) and we have used its large N and n form.) Since S_i and Δ_i are correlated in sign, we have for σ < 1/2, ΔE_2 = -2n √(n/N), and for σ > 1/2, ΔE_2 = -2n (n/N)^σ. Then the total energy change ΔE = ΔE_1 + ΔE_2 becomes for σ < 1/2, ΔE = 2n (n/N)^{1/2} [(2√2/3) H - 1], so the quenched state would be just marginal (i.e. has ΔE = 0) if H = 3/(2√2). For σ > 1/2, ΔE = 2n (n/N)^σ [H/((σ+1)σ^σ) - 1]. Thus if the system is just marginal, H = (σ+1)σ^σ. In Fig. <ref> we have plotted p(h) - p(0) (the subtraction of p(0) is to reduce the consequences of the finite size intercept on the y-axis) as a function of h for some σ values less than 1/2. The slope of the red line, which is drawn using the value of H which makes the system just marginal, agrees quite well with the data for σ < 1/2. In Fig. <ref> we have repeated the exercise for σ = 0.7. The green line is the line which would be expected if the system is just marginal. The data points are not close to this expectation at all and indicate that the state reached in the quench is stable (i.e. ΔE > 0) by the Anderson criterion.
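This stability check is straightforward to carry out for a given quenched state; a minimal sketch evaluating ΔE of Eq. (<ref>) for the n lowest-field sites (using the helpers introduced above) is:

def anderson_delta_E(J, S, h, n):
    # Delta E = 2 sum_i h_i - 2 sum_{i,j} J_ij S_i S_j over the n sites
    # with the smallest stability fields h_i = S_i sum_j J_ij S_j
    idx = np.argsort(h)[:n]
    Ssub = S[idx]
    return 2.0 * h[idx].sum() - 2.0 * Ssub @ J[np.ix_(idx, idx)] @ Ssub

If the state is marginal in the sense discussed above, this quantity stays close to zero for 1 ≪ n ≪ N, whereas a clearly positive value signals Anderson stability.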
We have examined other σ values which are greater than 0.5 and have found that the size of the discrepancy increases steadily as σ rises above 0.5. Another confirmation that the quenched state is not marginal for σ > 1/2 is provided by Fig. <ref>. An assumption behind marginality is that Eq. (<ref>) should hold. If that is the case, then in the large N limit the smallest value of h, h_min, should decrease as 1/N^σ (see Eq. (<ref>) with n = 1). The results for σ = 0.75 in Fig. <ref> indicate that a better fit to the data is h_min ∼ 1/N^{0.70}; however, the discrepancy in the exponent is modest. The states generated in the quench for σ > 1/2 seem to be stable according to the Anderson argument, where one examines the stability against flipping the spins at the n sites with the smallest fields (see Fig. <ref>), as H is larger than the just-marginal value (σ+1)σ^σ if one determines it from the slope of the data at small h. However, we suspect that they are unstable by the argument of Biroli and Monasson against flips of two spins, where the fields of the flipped spins are not restricted to be small as in the Anderson argument. In other words, the states generated by the quench are just one-spin-flip stable states. In the SK limit a state generated by the quench will be a pure state <cit.>, stable against flipping an arbitrary number of spins. In the next section we show that the existence of marginality for σ ≤ 1/2 seems to be associated with self-organized criticality, as we can only find that when σ ≤ 1/2.

§ SELF-ORGANIZED CRITICALITY

We have made a finite size scaling study of the spin-glass susceptibility χ_SG: χ_SG = (1/N) ∑_{i,j} [⟨S_i S_j⟩^2]_av, where the angular brackets represent an average over the metastable minima for a given sample of disorder. The minima in this case were obtained from a random initial state, so that we are studying the case where P(q) is trivial. Note that χ_SG = N Variance(q^2). In the regime 0 ≤ σ ≤ 1/2, χ_SG appears to diverge as ln(N), whereas for the region σ > 1/2, χ_SG saturates to a finite value at large N, the form of this dependence being well fitted by χ_SG = a - b/N^{2σ-1} (see Fig. <ref>). The coefficients a, b appear to approach each other and diverge as σ → 0.5^+ (see Fig. <ref>). Note that (1/(2σ-1))[1 - 1/N^{2σ-1}] → ln(N) in the limit σ → 0.5. Thus the divergence of χ_SG for σ ≤ 1/2 as ln N seems natural if one takes the Mori argument <cit.> that all systems with σ ≤ 1/2 behave in the same way, just as in the SK limit of σ = 0. Consistent with this finding for χ_SG, Fig. <ref> shows that the energy reached by the quench, E(N), goes as E_c + const/ln(N) for σ < 0.5, almost independently of σ, at least for σ = 0.0, 0.1, 0.2 and 0.3: only the data points for σ = 0.4 and 0.5 differ significantly, and for them the finite size corrections are very large. The Mori argument says that in the thermodynamic limit quantities such as the energy should be independent of the value of σ when it is less than 0.5. However, for σ > 1/2 the energy E(N) behaves quite differently, and the right panel of Fig. <ref> shows that it goes as c + d/N^{2σ-1}, just as could have been anticipated from the N dependence of c(σ,N).
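In practice, χ_SG for one disorder sample can be estimated from the mutual overlaps of quenched states started from independent random initial states; a minimal sketch (built on the helpers above, and approaching the definition above as the number of states grows) is:

def chi_sg_estimate(J, n_states, rng):
    # chi_SG for one sample of disorder, estimated as N <q^2> over the
    # overlaps q_ab of quenched states from independent random initial
    # conditions; the disorder average [.]_av follows by repeating over J
    N = J.shape[0]
    spins = np.array([sequential_quench(J, rng.choice([-1.0, 1.0], N))[0]
                      for _ in range(n_states)])
    Q = spins @ spins.T / N                    # overlap matrix q_ab
    off_diag = ~np.eye(n_states, dtype=bool)   # distinct pairs a != b
    return N * np.mean(Q[off_diag] ** 2)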
We next explain why these results are consistent with SOC behavior for σ ≤ 1/2. There is an energy E_c in the large N limit which separates minima which are just at the brink of having a non-trivial form for P(q) from those at higher energy which have trivial overlaps. This has been established for the SK model when the average is taken over all one-spin-flip stable states <cit.>. E_c marks the transition to a state with broken replica symmetry. We expect that there will be a similar critical energy for states prepared by quenches from infinite temperature, and that its numerical value will depend on the quench procedure. At E_c massless modes are present: that is, near E_c the system has marginal stability. We learned in Sec. III that for σ < 1/2 the state reached in the quench had marginal stability, so we would expect that the energy reached in the quench is close to E_c. E_c is the analogue of the transition temperature T_c in studies of the thermal spin glass susceptibility <cit.>, as the spin glass susceptibility diverges for σ ≤ 1/2 as χ_SG ∼ 1/τ, the usual mean-field form, where τ = (1 - T_c/T). Our quenches take us close to E_c but miss by an amount of O(1/ln N) due to finite size effects; our analogue of τ is ∼ 1/ln N, so χ_SG ∼ ln N. This result is also consistent with our argument by continuity from σ > 1/2 in Eq. (<ref>). For quenches from a temperature T > T_c one would expect that the extrapolated energy E_c(T) would be slightly different from that obtained in the quench from infinite temperature. Fig. <ref> shows that the quench from T_c for the SK model does indeed go to a somewhat lower value of the energy, by an amount ∼1%, than that from infinite temperature (and one which is incidentally very close to the Parisi-type estimate of the true ground state energy per spin: E_g = -0.763 166 772 65(6)... <cit.>). The quench from T_c is from an initial state where there are long-range correlations which are absent for the initial state at infinite temperature, which is probably why their associated values of E_c differ. For σ > 1/2 we did not see marginal stability, and so the energy reached in the quenched state is probably not close to any critical value below which replica symmetry breaking effects might become visible. For σ > 2/3 we doubt even the existence of any states with broken replica symmetry <cit.>. Because of the absence of marginality it is no surprise that χ_SG shows no sign of diverging with increasing N. Figure <ref> indicates that it approaches a finite value as 1/N^{2σ-1}. We suspect that the existence of SOC behavior and marginality only for σ ≤ 1/2 might be reflected in differences in the nature of spin avalanches for σ above and below 1/2. Horner <cit.> found that in the SK model the number of spin flips per site before the final quenched state was reached increased as ≈ ln N. However, Andresen and others <cit.> found avalanches on the scale of the system size N only when z, the number of neighbors of a given site, increases with N, as happens in the SK model. We would imagine such behavior would extend up to σ = 1/2. For σ > 1/2 the effective number of neighbors is finite (even though the critical behavior remains mean-field-like up to σ = 2/3; the KAS model with 1/2 < σ < 2/3 maps to the nearest-neighbor Edwards-Anderson model in a dimension d > 6). Thus the dynamics would be expected to change at σ = 1/2, along with the disappearance of SOC behavior and marginality.

§ CONCLUSIONS

In the KAS model we have discovered that there is a connection between SOC behavior and marginality. Because the states reached in the quenches are marginal when σ ≤ 1/2, they are near the energy at which the states have massless modes, i.e. are becoming critical.
In this case, the criticality is that associated with the onset of replica symmetry breaking. It would be interesting to know whether in the many systems which are thought to have marginal behavior <cit.> there is a similar connection with self-organized critical behavior. What is striking about the KAS model is that the transition which is self-organized can be identified; it is the transition to states with correlations between them due to the onset of broken replica symmetry.

§ ACKNOWLEDGMENTS

We would like to thank Markus Müller for helpful discussions and Juan Carlos Andresen for an initial study of P(q). AS acknowledges support from the DST-INSPIRE Faculty Award [DST/INSPIRE/04/2014/002461]. Some of the simulations in this project were run on the High Performance Computing (HPC) facility at IISER Bhopal. JY was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A09000527).
http://arxiv.org/abs/1704.08700v3
{ "authors": [ "Auditya Sharma", "Joonhyun Yeo", "M. A. Moore" ], "categories": [ "cond-mat.stat-mech", "cond-mat.dis-nn", "cond-mat.soft" ], "primary_category": "cond-mat.stat-mech", "published": "20170427180009", "title": "Self-organized critical behavior and marginality in Ising spin glasses" }
Isogeometric FEM-BEM coupled structural-acoustic analysis of shells using subdivision surfaces

Zhaowei Liu^1, Musabbir Majeed^2, Fehmi Cirak^2, Robert N. Simpson^1

^1 School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK ^2 Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK. Corresponding author: School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK. E-mail: [email protected]

We introduce a coupled finite and boundary element formulation for acoustic scattering analysis over thin shell structures. A triangular Loop subdivision surface discretisation is used for both geometry and analysis fields. The Kirchhoff-Love shell equation is discretised with the finite element method and the Helmholtz equation for the acoustic field with the boundary element method. The use of the boundary element formulation allows the elegant handling of infinite domains and precludes the need for volumetric meshing. In the present work the subdivision control meshes for the shell displacements and the acoustic pressures have the same resolution. The corresponding smooth subdivision basis functions have the C^1 continuity property required for the Kirchhoff-Love formulation and are highly efficient for the acoustic field computations. We verify the proposed isogeometric formulation through a closed-form solution of acoustic scattering over a thin shell sphere. Furthermore, we demonstrate the ability of the proposed approach to handle complex geometries with arbitrary topology, providing an integrated isogeometric design and analysis workflow for coupled structural-acoustic analysis of shells.

§ INTRODUCTION

Structural-acoustic interaction plays a key role in the design of components and structures in a wide variety of applications where vibrational noise minimisation is a major design requirement, such as in aerospace and automotive engineering, in the design of materials used to control acoustic absorption, or in sound radiation from submerged structures such as submarines and ships. Numerical methods like the finite element method (FEM), finite difference method (FDM) and boundary element method (BEM) play a key role in the design of such structures through simulations of prototype designs. In recent years, where advances in manufacturing allow component designs with ever increasing geometric complexity, the importance of numerical methods is becoming more apparent. Industrial designers are continually striving for designs that can efficiently deliver improved performance. This necessitates design workflows where computer-aided design (CAD) and analysis methods are tightly integrated. In addition, recent developments into novel materials that exhibit unique physical properties, i.e. metamaterials <cit.>, coupled with the increased fidelity of modern manufacturing processes, have opened up new design possibilities, but the lack of design tools that integrate geometric design, analysis and optimisation technologies is widely acknowledged as a key challenge that must be addressed before such materials can be used for practical applications <cit.>. The engineering design workflows possible with most commercial software are based on disparate geometry and analysis models where expensive and error-prone model conversion processes are required.
Considering the iterative nature of design, such conversion processes can dominate and impede the design process, so that alternative solutions are sought. Much research has been presented on the idea of adopting a single common model to represent geometry and analysis fields, with such methods often falling under the umbrella of `isogeometric analysis' (a term initially coined by Hughes et al. <cit.>). A number of isogeometric approaches have been, and are being, developed based on non-uniform rational B-splines (NURBS) <cit.>, subdivision surfaces <cit.> and other geometry representations <cit.>. Locally refinable variants of NURBS include T-splines <cit.> and PHT-splines <cit.>, and the corresponding locally refinable subdivision surfaces include THCCS <cit.> and CHARMS <cit.>. Almost all the mentioned geometry representations are intended for surfaces, so that their application to shell and BEM analysis is straightforward. However, they need to be suitably extended for isogeometric FEM approaches that require volumetric discretisations. In general, CAD software does not provide such volumetric discretisations, but we note that recent research is progressing towards the automatic generation of analysis-suitable trivariate splines, e.g. <cit.>, and towards alternative immersed/embedded methods that do not require a boundary-fitted volume mesh <cit.>. For structural-acoustic analysis problems, where the structure can be modelled as a thin shell and the acoustic pressure field is governed by the time-harmonic Helmholtz equation, a sensible approach is to adopt a coupled FEM/BEM formulation <cit.>. In this way the infinite fluid domain is modelled through a BEM discretisation, which exhibits significant advantages over a traditional FEM approach. For the latter case, a naive truncation of meshes that represent infinite domains will lead to unphysical wave reflections that require appropriate absorbing boundary conditions at the truncated boundaries <cit.>, and an insufficient mesh resolution can lead to significant dispersion error for high frequency problems. In contrast, BEM formulations do not suffer from unphysical reflected waves and dispersion error, but more crucially, only a surface discretisation is required to model infinite acoustic domains. This makes an isogeometric approach for coupled FEM/BEM analysis with shells particularly attractive, where high order discretisations generated through CAD software can be used directly without any requirement for model conversion, meshing or generation of volumetric splines. In addition, high order BEM discretisations are generally accepted as superior to equivalent low order discretisations for acoustic problems <cit.>. Several isogeometric BEM approaches have already been developed based on NURBS <cit.>, T-splines <cit.> and subdivision surfaces <cit.>. In applications where the structure can be approximated as a thin shell, the Kirchhoff-Love model leads to particularly robust and simple finite element formulations with only the nodal displacements as the degrees of freedom. The most challenging aspect of the discretisation of the Kirchhoff-Love formulation is the need for C^1 continuity, prompting the development of high order smooth discretisations.
Several NURBS-based isogeometric shell formulations have been proposed in which a high order CAD discretisation is used as a basis for both geometry and analysis, thus simultaneously satisfying continuity requirements and side-stepping mesh generation <cit.>. An earlier approach made use of subdivision surfaces <cit.>, which demonstrated automatic satisfaction of the C^1 continuity requirement and an ability to handle geometries of arbitrary topology. Since this seminal work, the approach has been extended to shape optimisation <cit.>, non-manifold shell geometries <cit.> and thick shells <cit.>. The present study proposes a method for performing coupled structural-acoustic analysis for shell structures using a common discretisation for geometry and analysis through subdivision basis functions. This facilitates the use of identical basis functions for geometric modelling, structural displacements and acoustic pressures. We adopt the Loop subdivision scheme <cit.>, in which global basis functions are constructed from quartic box-splines defined over a triangular surface control mesh. We note that a somewhat similar coupled isogeometric BEM/FEM approach has been presented for simulating Stokes flow <cit.>; however, we distinguish the novelty of our work through the use of subdivision basis functions. We organise the paper as follows: a brief overview of Loop subdivision surfaces and their use for numerical analysis is given; the formulation of a BEM discretisation for Helmholtz analysis with subdivision basis functions is introduced; an outline of the coupled system of equations that governs structural-acoustic analysis with Kirchhoff-Love shells is presented; implementation details on how to efficiently handle the large dense systems of equations generated through the present approach are outlined; verification of the method through a closed-form solution for acoustic scattering over a thin-shell sphere is shown; and finally, the ability of the approach to analyse arbitrarily complex 3D geometries is demonstrated. We note that the approach is restricted to time-harmonic problems, and in the present work we consider medium-frequency problems with a normalised wavenumber up to 80. In all problems it can be assumed that the fluid domain resides on the outside of the shell structure, with position vectors 𝐱 ∈ ℝ^3.

§ LOOP SUBDIVISION SURFACES

Subdivision surfaces were introduced in the 1970s and are now widely used in computer graphics and animation <cit.>. They are also available in most industrial CAD solid modelling packages, including Autodesk Fusion 360, PTC Creo and CATIA. In the computer graphics literature, subdivision surfaces are usually viewed as a process for generating smooth limit surfaces through repeated refinement and smoothing of a control mesh. Alternatively, they can be viewed as the generalisation of splines to arbitrary connectivity meshes, which is a viewpoint more suitable to finite and boundary element analysis. Subdivision surfaces inherit from splines their refinability property, so that all control meshes generated during subdivision refinement describe exactly the same spline surface. In the present paper we consider the triangular subdivision surfaces proposed by Loop <cit.>, which generalise the quartic box-splines to arbitrary connectivity meshes. Quartic box-splines are defined on shift-invariant three-direction meshes in which each vertex is attached to six triangles.
Note that, for the sake of brevity, the treatment of vertices located on the domain boundaries is omitted in this paper. In common with all subdivision schemes, each step of Loop subdivision consists of a refinement and an averaging step. In the refinement step the mesh is refined by splitting each triangle into four triangles, after bisecting the triangle edges. Subsequently, the coordinates of the vertices on the refined mesh are determined as the weighted average of the coordinates of their neighbouring vertices on the coarse mesh. For mesh regions consisting only of regular vertices with six adjacent triangles, the averaging weights are derived from the knot insertion rules for box-splines. For the remaining irregular, also referred to as extraordinary or star, vertices, the weights are derived through a spectral analysis of the subdivision matrix underlying the refinement process, see <cit.>. The averaging weights depend only on the connectivity of the mesh and not on the actual vertex coordinates. The resulting subdivision surface is C^2 continuous almost everywhere, except at the irregular vertices where it is only C^1. In triangles with three regular vertices, there are twelve non-zero quartic basis functions associated with the vertices of the triangle and its neighbouring triangles, as can be seen in Figure <ref>. Hence, the surface coordinates 𝐱^e within the triangle e can be determined through x⃗^e(ξ_1, ξ_2) = ∑_{a=1}^{12} B_a(ξ_1, ξ_2) P⃗_a, where B_a are the box-spline basis functions, ξ⃗ = (ξ_1, ξ_2) are two of the barycentric local coordinates in the triangle e, and P⃗_a are the coordinates of the twelve control vertices. In triangles with irregular vertices it is necessary first to apply a few steps of subdivision refinement until the considered point lies in a patch of elements such that (<ref>) can again be applied. This is possible since during subdivision refinement all newly created vertices are regular, and repeated refinement leads to more and more patches with only regular vertices. As proposed by Stam <cit.>, this can be used to devise an algorithm for obtaining subdivision basis functions N_a(ξ⃗) for triangle patches that contain irregular vertices, allowing the geometry to be interpolated as x⃗^e(ξ) = ∑_{a=1}^{n_v} N_a(ξ) P⃗_a, where n_v is the number of vertices in the patch containing triangle e and its neighbouring triangles (which share a vertex with e), and P⃗_a are the coordinates of the vertices on the coarse control mesh. There is no closed-form expression for N_a(ξ) available, only an algorithm for evaluating it for almost any given ξ. See <cit.> for details. It is clear that on regular patches N_a ≡ B_a, so that in the following the basis functions are always denoted by N_a. The interpolation equations (<ref>) and (<ref>) can be adapted to provide a discretisation of analysis fields such as the displacements in thin shells and the pressure in the Helmholtz equation. The use of a common high-order basis for geometry and analysis makes such an approach inherently isogeometric. It should also be noted that, in comparison to traditional finite element interpolations, the present Loop subdivision surfaces offer the advantage of providing unique normals and normal derivatives at vertices, which both simplifies implementation and allows for superior accuracy.
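To make the refinement and averaging steps concrete, the following sketch performs a single Loop subdivision step on a closed triangle mesh (boundaries, as noted above, are omitted); it uses Loop's original valence-dependent weight β(n), and the function names are illustrative only.

import numpy as np
from collections import defaultdict

def loop_beta(n):
    # Loop's original averaging weight for a vertex of valence n;
    # for the regular valence n = 6 this reduces to the box-spline mask
    return (0.625 - (0.375 + 0.25 * np.cos(2.0 * np.pi / n)) ** 2) / n

def subdivide(V, F):
    # V: (n_v, 3) vertex coordinates; F: (n_f, 3) triangle vertex indices
    opp = defaultdict(list)            # edge -> vertices opposite to it
    nbr = defaultdict(set)             # vertex -> one-ring neighbours
    for a, b, c in F:
        opp[frozenset((a, b))].append(c)
        opp[frozenset((b, c))].append(a)
        opp[frozenset((c, a))].append(b)
        nbr[a] |= {b, c}; nbr[b] |= {c, a}; nbr[c] |= {a, b}
    # averaging step for existing vertices: weighted average of the one-ring
    Veven = np.empty_like(V)
    for v, ring in nbr.items():
        beta = loop_beta(len(ring))
        Veven[v] = (1 - len(ring) * beta) * V[v] + beta * V[list(ring)].sum(axis=0)
    # new edge vertices: 3/8 of the edge ends + 1/8 of the two opposite vertices
    eidx, epts = {}, []
    for e, (c, d) in opp.items():      # closed mesh: every edge has two opposites
        a, b = tuple(e)
        eidx[e] = len(V) + len(epts)
        epts.append(0.375 * (V[a] + V[b]) + 0.125 * (V[c] + V[d]))
    # refinement step: 1-to-4 split of every triangle
    Fnew = []
    for a, b, c in F:
        ab = eidx[frozenset((a, b))]
        bc = eidx[frozenset((b, c))]
        ca = eidx[frozenset((c, a))]
        Fnew += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.vstack([Veven, epts]), np.array(Fnew)

For a regular vertex, loop_beta(6) = 1/16, recovering the familiar quartic box-spline masks (10/16 for the vertex itself, 1/16 for each neighbour; 3/8, 3/8, 1/8, 1/8 for an edge).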
§ A COUPLED BEM/FEM WITH LOOP SUBDIVISION SURFACE BASIS FUNCTIONS

§.§ Problem setup

We first consider a domain Ω_s that represents an elastic thin-shell structure immersed in an infinite fluid domain Ω_f. Furthermore, we assume that the behaviour of the thin-shell structure is governed by Kirchhoff-Love shell theory and that all variables are time-harmonic with a time dependence of e^{-iωt}, where ω denotes the angular frequency. An acoustic pressure field exists in the fluid domain, governed by the Helmholtz equation, and the system is excited by an incident plane wave impinging on the shell structure, with the entire setup depicted in Figure <ref>. The acoustic pressure field at the fluid-structure interface induces displacements in the shell structure and, likewise, normal velocity components of the shell surface induce acoustic pressure gradients in the fluid domain, forming a coupled system. Our approach represents the fluid-structure interface with a Loop subdivision surface, and in order to construct the system of equations that governs the behaviour of the coupled system, the discretisation of each domain is first considered separately. We then introduce the final discrete coupled system of equations in Section <ref>.

§.§ Infinite fluid domain: collocation boundary element method with Loop subdivision surfaces

In the infinite fluid domain Ω_f we wish to determine the complex-valued acoustic pressure field p(𝐱), given a wavenumber k, that is governed by the Helmholtz equation ∇^2 p(𝐱) + k^2 p(𝐱) = 0 in 𝐱 ∈ Ω_f, where ∇^2 denotes the Laplacian operator. The (total) pressure field is composed of the incident and reflected acoustic pressures through p(𝐱) = p_inc(𝐱) + p_ref(𝐱), and in the case of a plane wave of magnitude P with wavenumber k travelling in direction 𝐝 with |𝐝| = 1, the incident acoustic pressure is prescribed as p_inc(𝐱) = Pe^{i𝐤·𝐱}, with the wavevector 𝐤 = k𝐝. By setting the right-hand side of (<ref>) equal to the Dirac delta forcing function, the solution corresponds to the Helmholtz fundamental solution, which is expressed as G(𝐱,𝐲) = e^{ikr}/(4πr) with r := |𝐱 - 𝐲|, where 𝐱 is the source point and 𝐲 the field point. The corresponding normal derivative of the kernel function of (<ref>) is given by ∂G(𝐱,𝐲)/∂n = e^{ikr}/(4πr^2) (ikr - 1) ∂r/∂n, with ∂(·)/∂n ≡ ∇(·)·𝐧. By integrating (<ref>) over the domain Ω_f using (<ref>) as a weight function, applying the Green-Gauss theorem and then taking the limit as 𝐱 approaches Γ := ∂Ω_f, the acoustic boundary integral equation is expressed as c(𝐱)p(𝐱) + ∫_Γ ∂G(𝐱,𝐲)/∂n p(𝐲) dΓ(𝐲) = ∫_Γ G(𝐱,𝐲) ∂p(𝐲)/∂n dΓ(𝐲) + p_inc(𝐱), where c(𝐱) is a coefficient that depends on the geometry of the surface at the source point. For 𝐱 located at a point where the surface is smooth, c(𝐱) = 1/2.
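The two kernels are cheap to evaluate pointwise; a minimal sketch (the function name is illustrative) reads:

import numpy as np

def helmholtz_kernels(x, y, n_y, k):
    # G(x, y) and its normal derivative dG/dn at the field point y,
    # with unit outward normal n_y; r = |x - y|, dr/dn = (y - x).n_y / r
    d = y - x
    r = np.linalg.norm(d)
    e = np.exp(1j * k * r)
    G = e / (4.0 * np.pi * r)
    dGdn = e / (4.0 * np.pi * r ** 2) * (1j * k * r - 1.0) * (d @ n_y / r)
    return G, dGdn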
Discretisation. The boundary Γ is defined in the usual piecewise manner through the union of elements, expressed as Γ = ⋃_{e=1}^{n_el} Γ_e, where in the present work the set of elements {Γ_e}_{e=1}^{n_el} is defined by the tessellation of the Loop subdivision surface. The acoustic pressure and its normal derivative are discretised with the subdivision basis functions N_b(ξ), cf. (<ref>): p^e(ξ) = ∑_{b=1}^{n_v} N_b(ξ) p_b^e and ∂p^e(ξ)/∂n = ∑_{b=1}^{n_v} N_b(ξ) ∂p_b^e/∂n, where the coefficients p_b^e and ∂p_b^e/∂n represent nodal coefficients of the acoustic pressure and its normal derivative respectively. Using the tessellation (<ref>) and the approximations (<ref>), the boundary integral equation (<ref>) is discretised as c(𝐱)p(𝐱) + ∑_{e=1}^{n_el} ∑_{b=1}^{n_v} p^e_b ∫_{Γ_e} N_b(ξ) ∂G(𝐱,𝐲(ξ))/∂n dΓ_e(ξ) = ∑_{e=1}^{n_el} ∑_{b=1}^{n_v} ∂p^e_b/∂n ∫_{Γ_e} N_b(ξ) G(𝐱,𝐲(ξ)) dΓ_e(ξ) + p_inc(𝐱).

Collocation. In order to generate a system of equations from (<ref>) we adopt a collocation approach, whereby the source point 𝐱 is sampled at a discrete number of points on the boundary. An alternative method is to apply a weighted residual (Galerkin) approach, where (<ref>) is multiplied by a suitable set of test functions and integrated over the boundary. We adopt the former approach in the present work, where significantly faster runtimes are achieved over an equivalent Galerkin implementation. When spline-based basis functions, like B-splines, NURBS, subdivision or T-splines, are employed, different choices are possible for the collocation points. For instance, Greville abscissae, Demko points and the maxima of B-splines have been considered in collocation methods that discretise directly the strong form of the governing equations <cit.>. In the present work we use the maxima of the subdivision basis functions, which are associated with the vertices of the tessellation, as the collocation points. Note that, due to the non-interpolatory nature of the subdivision basis functions, the coordinates of a collocation point located on the surface and the coordinates of the corresponding control vertex are not the same. For a given element Γ_e we define the set of collocation points contained within the element through its three nodal points with the parametric coordinates 𝖢_e := {𝐱^e(0,0), 𝐱^e(1,0), 𝐱^e(0,1)} and construct the global set of collocation points as 𝖢 = ⋃_{e=1}^{n_el} 𝖢_e, with duplicate entries discarded (i.e. each element of (<ref>) is unique). For a regular patch of a Loop subdivision surface the scenario is depicted in Figure <ref>.

System of equations. By sampling over the set of collocation points 𝖢 = {𝐱_1, 𝐱_2, …, 𝐱_{n_cp}} and employing an assembly operator 𝒜 that maps a given element and local basis function index pairing (e,b) to a global basis function index through B = 𝒜(e,b), the fully discrete boundary integral equation is written as (1/2)p(𝐱_A) + ∑_{e=1}^{n_el} ∑_{b=1}^{n_v} p_B ∫_{Γ_e} N_b(ξ) ∂G(𝐱_A,𝐲(ξ))/∂n dΓ_e(ξ) = ∑_{e=1}^{n_el} ∑_{b=1}^{n_v} ∂p_B/∂n ∫_{Γ_e} N_b(ξ) G(𝐱_A,𝐲(ξ)) dΓ_e(ξ) + p_inc(𝐱_A), A = 1, 2, …, n_cp, where p_B = p_{𝒜(e,b)} = p_b^e, and likewise for ∂p_B/∂n; n_cp is the number of collocation points. The final system of equations is then written as Hp = Gq + p_inc, where p and q represent vectors of global acoustic pressure and acoustic pressure normal derivative coefficients respectively, H and G are dense matrices with entries H_AB = (1/2)N_B(𝐱_A) + ∫_Γ N_B(𝐲) ∂G(𝐱_A,𝐲)/∂n dΓ(𝐲) and G_AB = ∫_Γ N_B(𝐲) G(𝐱_A,𝐲) dΓ(𝐲), and p_inc is a vector with components p_inc(𝐱_A). The integral in (<ref>) is found to be weakly singular and can therefore be integrated in a straightforward manner using e.g. polar integration (see <cit.> for details). In the present work we adopt a singularity subtraction technique <cit.>, whereby the integral in (<ref>) is regularised using the identity ∫_Γ ∂G^s(𝐱,𝐲)/∂n dΓ(𝐲) = -1/2, with G^s(𝐱,𝐲) denoting the `static' version (i.e. k = 0) of the Helmholtz kernel.
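Schematically, the collocation assembly of H and G proceeds as sketched below; the subdivision-surface evaluation routines (mesh.surface, mesh.basis, mesh.locate) are assumed helpers and not part of any standard library, and the special quadrature for the weakly singular rows is indicated only in the comments.

def assemble_H_G(mesh, k, quad, coll_pts):
    # mesh.surface(e, xi) -> (point y, unit normal n_y, surface Jacobian);
    # mesh.basis(e, xi) -> (basis values N_b, global indices); the rows where
    # the collocation point lies in the integrated element additionally need
    # polar quadrature / singularity subtraction, omitted here for brevity
    n_cp = len(coll_pts)
    H = np.zeros((n_cp, n_cp), dtype=complex)
    G = np.zeros((n_cp, n_cp), dtype=complex)
    for e in range(mesh.n_el):
        for xi, w in quad:                      # regular Gauss points
            y, n_y, jac = mesh.surface(e, xi)
            Nb, ids = mesh.basis(e, xi)
            for A, x in enumerate(coll_pts):
                Gk, dGk = helmholtz_kernels(x, y, n_y, k)
                H[A, ids] += w * jac * dGk * Nb
                G[A, ids] += w * jac * Gk * Nb
    for A, x in enumerate(coll_pts):            # jump term c(x) = 1/2
        Nb, ids = mesh.basis(*mesh.locate(x))
        H[A, ids] += 0.5 * Nb
    return H, G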
§.§ Structural domain: finite element method for dynamic analysis of Kirchhoff-Love shells

The mechanical response of thin shells is approximated with a Kirchhoff-Love model. In the following we provide a summary of the corresponding weak form of the equilibrium equations and refer to <cit.> for a more detailed presentation. The weak form, i.e. the virtual work expression, for the shell with a mid-surface Γ and displacements u⃗ reads W_mas(u⃗, v⃗) + W_int(u⃗, v⃗) + W_ext(v⃗) = 0. Here, the three terms W_mas(u⃗, v⃗), W_int(u⃗, v⃗) and W_ext(v⃗) denote the virtual work contributions of the inertia, internal and external forces respectively, and v⃗ are the virtual displacements. For a thin shell with areal density ρ_s the inertia contribution is given by W_mas(u⃗, v⃗) = ∫_Γ ρ_s (∂^2 u⃗/∂t^2) · v⃗ μ dΓ, where μ is a Jacobian taking care of the integration across the thickness of the shell. Only the acceleration of the mid-surface is considered; the contribution of the angular acceleration associated with the shell mid-surface normal has been neglected. The internal work W_int(u⃗, v⃗) consists of two parts, namely the membrane and bending parts: W_int(u⃗, v⃗) = ∫_Γ α⃗(u⃗) : E⃗ : α⃗(v⃗) μ dΓ + h^2 ∫_Γ β⃗(u⃗) : E⃗ : β⃗(v⃗) μ dΓ. The membrane part, the first integral, depends on the constitutive tensor E⃗ and the change of the metric tensor α⃗(u⃗) of the mid-surface between the reference and deformed configurations of the shell. The second integral, representing the bending part, is multiplied by the square of the shell thickness h and depends on the change of the curvature tensor β⃗(u⃗) between the reference and deformed configurations. Finally, the external virtual work for a shell loaded with a pressure p, such as considered in this paper, is given by W_ext(v⃗) = -∫_Γ p n⃗_S · v⃗ dΓ, where n⃗_S denotes the normal to the mid-surface. The mid-surface displacements u⃗ are assumed to be time-harmonic, so that applying a Fourier transformation to the weak form (<ref>) leads to its time-harmonic form. With a slight abuse of notation we denote the Fourier transform of the displacements u⃗ with the same symbol; that is, from now on u⃗ denotes the Fourier transform of the displacements. The weak form for the shell in its time-harmonic form reads -ω^2 ∫_Γ ρ_s u⃗ · v⃗ μ dΓ + W_int(u⃗, v⃗) + W_ext(v⃗) = 0, with the angular frequency ω. For discretising the displacements u⃗ and test functions v⃗ we use the same tessellation as for the fluid domain and the Loop subdivision basis functions N_b(ξ). In each element e the two fields u⃗ and v⃗ are approximated with u⃗^e(ξ) = ∑_{b=1}^{n_v} N_b(ξ) u⃗^e_b and v⃗^e(ξ) = ∑_{b=1}^{n_v} N_b(ξ) v⃗^e_b, where the coefficients u⃗^e_b and v⃗^e_b can be interpreted as nodal quantities. Introducing the discretisation into the time-harmonic weak form (<ref>) and evaluating the integrals with numerical quadrature yields, after linearisation, a system of equations that governs the time-harmonic behaviour of the displacements: (-ω^2 M + K)u = f, where K is the stiffness matrix, M the mass matrix and f the global force vector. For explicit expressions for the stiffness matrix and implementation details see <cit.>. In order to consider damping effects, the discrete system of equations (<ref>) can be augmented with a viscous Rayleigh damping term: (-ω^2 M + iω(c_1 K + c_2 M) + K)u = f, with c_1 and c_2 representing two experimentally determined constants. Finally, we abbreviate (<ref>) to the form Au = f.
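Given assembled stiffness and mass matrices, forming the time-harmonic operator is a one-liner; a sketch assuming numpy or scipy.sparse matrices K and M:

def shell_operator(K, M, omega, c1=0.0, c2=0.0):
    # A = -omega^2 M + i omega (c1 K + c2 M) + K, so that A u = f
    return -omega ** 2 * M + 1j * omega * (c1 * K + c2 * M) + K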
§.§ Coupled formulation

The pressure field in the acoustic fluid domain induces a force on the shell surface that is directed along the surface normal 𝐧_f. The array of shell nodal forces f in (<ref>) due to a fluid pressure field interpolated according to (<ref>) is given by f = ñ_f ∫_Γ N^T N dΓ p = C_sf p, with the matrix of vertex normals ñ_f, the matrix of subdivision basis functions N and the array of fluid nodal pressures p: ñ_f = [n⃗_1f·e⃗_1 0 …; n⃗_1f·e⃗_2 0 …; n⃗_1f·e⃗_3 0 …; 0 n⃗_2f·e⃗_1 …; 0 n⃗_2f·e⃗_2 …; 0 n⃗_2f·e⃗_3 …; … … …], N = [N_1(𝐱) N_2(𝐱) … N_{n_cp}(𝐱)], p = [p_1 p_2 … p_{n_cp}], where (e⃗_1, e⃗_2, e⃗_3) are the three orthogonal base vectors, such that each column of ñ_f contains the normal components at one of the n_cp control vertices in the mesh. The transfer matrix C_sf of dimension 3n_cp × n_cp transfers nodal values from the fluid to the shell. The normal components of the fluid and structural velocities, denoted by v^n_f and v^n_s respectively, satisfy the interface condition v^n_f - v^n_s = 0. The acoustic pressure normal derivative q is related to the fluid normal velocity v_f^n as q = -iωρ_f v^n_f, where ρ_f ≡ ρ is the density of the fluid. We consider the problem as a fully coupled fluid-structure interaction model, and as such no velocity loss occurs between the fluid surface and the structural surface, but we remark that such models could easily be incorporated within our approach. The structural normal velocity v_s^n is related to the nodal displacements u through v^n_s = iω C_fs u, where C_fs = ñ_f^T. Substituting (<ref>) and (<ref>) into (<ref>) for v^n_f and v^n_s respectively, the desired relationship between the acoustic pressure normal derivatives and the structural displacements is written as q = ω^2 ρ C_fs u. Using (<ref>) to substitute for q in the boundary element system of equations (<ref>), a coupled system for the acoustic problem is given by Hp = Gω^2 ρ C_fs u + p_inc. Likewise, by substituting (<ref>) into (<ref>), a coupled system for the structural dynamics problem is written as Au = C_sf p + f_s, where f_s contains nodal shell forces due to external loading other than the fluid pressure. Finally, by combining (<ref>) and (<ref>), the global coupled system of equations is expressed as [A, -C_sf; -ω^2 ρ G C_fs, H][u; p] = [f_s; p_inc].

§ IMPLEMENTATION

The system of equations given by (<ref>) is non-symmetric and contains dense partitions that arise from the dense matrices H and G. Application of a direct solver would lead to inordinately long runtimes and excessive memory demands, and we therefore outline a modified version of (<ref>) that is more amenable to computation. We first rewrite (<ref>) using its Schur complement as Hp = ω^2 ρ G C_fs A^{-1}(f_s + C_sf p) + p_inc, where u = A^{-1}(f_s + C_sf p) has been employed. We define the vector q_s, which accounts for the contribution of acoustic velocities from the structural domain, as q_s = ω^2 ρ C_fs A^{-1} f_s, and a global admittance matrix Y_C, which represents the admittance effect caused by the structure, as Y_C = ω^2 ρ C_fs A^{-1} C_sf. The system of equations (<ref>) is then written as [H - G Y_C]p = G q_s + p_inc.
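The structure of this reduced solve is captured by the following sketch; it uses a direct sparse factorisation of A and plain dense algebra in place of the ℋ-matrix compression and preconditioned GMRES described below, so it is a small-model illustration only (all names are ours).

import numpy as np
from scipy.sparse.linalg import splu

def solve_coupled(H, G, A, Csf, Cfs, fs, p_inc, omega, rho):
    # A: scipy sparse shell operator; H, G, Csf, Cfs: dense arrays
    lu = splu(A.tocsc().astype(complex))            # factorise A once
    Yc = omega ** 2 * rho * (Cfs @ lu.solve(np.asarray(Csf, dtype=complex)))
    qs = omega ** 2 * rho * (Cfs @ lu.solve(np.asarray(fs, dtype=complex)))
    p = np.linalg.solve(H - G @ Yc, G @ qs + p_inc)  # surface pressures
    u = lu.solve(np.asarray(fs + Csf @ p, dtype=complex))  # displacements
    return p, u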
When solving (<ref>) two important considerations must be made: the computation of A^{-1} and the efficient representation of the dense matrices H and G. For the former, a sensible strategy is to approximate matrix inverses using a singular value decomposition (SVD) or a modal analysis approach, in much the same manner as <cit.>. Several libraries exist which allow for efficient SVD computations (e.g. <cit.>), but we note that for the examples in the present study no such approximations were required. The representation of the dense matrices requires a more involved implementation, and in the present work we choose to adopt an ℋ-matrix approach, which approximates dense matrices through low-rank approximations using the library HLibPro <cit.>. Without delving into the details of ℋ-matrix theory <cit.>, we simply state that the algorithm computes a low-rank approximation of H and G to a specified tolerance ε by utilising a hierarchical `cluster tree' which separates terms into far-field (admissible) and near-field (non-admissible) sets. The cluster tree is defined through a set of coordinates and bounding boxes related to the underlying basis of the boundary element discretisation. In the present work we use the set of collocation points given by (<ref>) and the set of bounding boxes specified as 𝖡 := {Q_min(𝐱) : 𝐱 ∈ supp(N_A)}_{A=1}^{n_cp}, where Q_min(𝐱) represents the minimum bounding box containing 𝐱. An example bounding box for a Loop subdivision basis function is illustrated in Figure <ref>. Once low-rank approximations are computed for H and G, we solve (<ref>) using a GMRES iterative solver in combination with an approximation of the inverse operator computed through a triangular factorisation. We note that, due to the relatively large support of the Loop subdivision basis, there is a reduction in the number of admissible interactions over traditional discretisations, but this price is outweighed by the superior accuracy of the high-order basis.

§ NUMERICAL TESTS

§.§ Plane wave scattering problem: elastic spherical shell

The problem of a plane wave impinged on an elastic spherical shell immersed in an infinite fluid domain is illustrated in Figure <ref>, with Table <ref> specifying all geometry and material properties adopted in the present study. The same material properties can be assumed in all included numerical examples unless specified otherwise. We choose a plane wave of unit magnitude travelling in the positive x direction, given by p_inc = e^{ikx} (i.e. P = 1, 𝐝 = (1,0,0)^T). The solution to this problem can be determined analytically <cit.>, which we reproduce here for completeness. The total acoustic pressure p_t ≡ p can be decomposed into scattered and elastic components as p_t = p_scat + p_ela, where p_scat is the acoustic pressure that would result from scattering over a rigid sphere and p_ela is the radiated acoustic pressure resulting from elastic shell vibrations.
Defining a polar coordinate system (r,θ) that lies in the xy plane as shown in Figure <ref>, the scattered and elastic pressure components can be expressed as p_scat(r,θ) = p_0 ∑_{n=0}^∞ -i^n (2n+1) [j_n'(kr_s)/h_n'(kr_s)] P_n(cosθ) h_n(kr) and p_ela(r,θ) = p_0 ∑_{n=0}^∞ i^n (2n+1) ρ_f c / {(Z_n + z_n)[kr_s h_n'(kr_s)]^2} P_n(cosθ) h_n(kr), where p_0 ≡ P is the incident wave magnitude, P_n is the nth Legendre function, h_n and h_n' are the nth spherical Hankel function and its derivative respectively, and j_n' is the derivative of the nth spherical Bessel function. Z_n denotes the in-vacuo modal impedance of the spherical shell, calculated as Z_n = -i (ρ_s c_p/Ω)(h/r_s) [Ω^2 - (Ω_n^{(1)})^2][Ω^2 - (Ω_n^{(2)})^2] / [Ω^2 - (1+β^2)(ν + λ_n - 1)], where λ_n = n(n+1), Ω = ωr_s/c_p is a dimensionless driving frequency, β^2 = h^2/(12r_s^2), c_p is the velocity of compressional waves in the structure given by c_p = √(E/((1-ν^2)ρ_s)), and Ω_n^{(1)} and Ω_n^{(2)} are dimensionless natural frequencies of the spherical shell determined from the two positive roots of the polynomial equation Ω_n^4 - [1 + 3ν + λ_n - β^2(1 - ν - λ_n^2 - νλ_n)]Ω_n^2 + (λ_n - 2)(1 - ν^2) + β^2[λ_n^3 - 4λ_n^2 + λ_n(5 - ν^2) - 2(1 - ν^2)] = 0. Finally, z_n is the modal specific acoustic impedance, expressed as z_n = iρ_f c h_n(kr_s)/h_n'(kr_s).

§.§.§ Geometrical error study

The first numerical study we conduct verifies the convergence characteristics of the present method with respect to the analytical solution given by Equations (<ref>) to (<ref>). We construct the system of equations given by (<ref>) using our Loop subdivision discretisation procedure and choose a relatively low normalised wavenumber of ka = 10, where a is the diameter of the spherical shell. Four Loop subdivision control meshes are generated from an initial control mesh shown in Figure <ref> using two strategies:

* Subdivision: the Loop subdivision refinement algorithm of <cit.> is applied to the initial coarse control mesh to generate two successively refined control meshes (a) and (b). The limit surface of the coarse control mesh and of meshes (a) and (b) given in Table <ref> is identical (see Figure <ref>) but exhibits a non-negligible geometry error due to its deviation from a sphere. For analysis, the approximation basis provided by control mesh (b) evidently provides a richer approximation space than (a).

* Least squares fitting: two successively refined control meshes (c) and (d) are generated by performing a least squares fit, or L_2 projection, of the subdivision surface to a sphere with diameter a = 1. With this refinement strategy the geometry error is successively reduced, while the same approximation basis as for the equivalent control meshes (a) and (b) is generated.

For each control mesh we calculate the relative geometry error ε_g of the limit surface as ε_g = ||𝐱^h - 𝐱||_0 / ||𝐱||_0, where 𝐱^h and 𝐱 are physical coordinates on the Loop subdivision surface and the analytical surface respectively, with ||·||_0 := (∫_Γ (·)^2 dΓ)^{1/2}. Figure <ref> illustrates the initial control mesh with its associated limit surface. Similar illustrations are shown in Figure <ref> for control mesh (d). Tables <ref> and <ref> detail each of the control meshes for both refinement strategies, showing that the geometry error remains constant during subdivision refinement (the limit surface is independent of subdivision refinement) and converges to zero when the control vertices are L_2 projected onto the analytical sphere surface.
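For the comparisons in this and the following studies, the analytical reference solution is obtained by truncating the partial-wave series above; a sketch using scipy's spherical Bessel functions is given below (the function names are illustrative, and the elastic term p_ela follows the same pattern with the additional impedance factor involving Z_n and z_n):

import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def sph_hankel1(n, z, derivative=False):
    # spherical Hankel function of the first kind, h_n = j_n + i y_n
    return spherical_jn(n, z, derivative) + 1j * spherical_yn(n, z, derivative)

def p_scat_rigid(theta, r, k, r_s, p0=1.0, n_max=80):
    # truncated series for the rigid-scattering component p_scat(r, theta)
    theta = np.asarray(theta, dtype=float)
    p = np.zeros(theta.shape, dtype=complex)
    for n in range(n_max):
        coeff = -(1j ** n) * (2 * n + 1) \
                * spherical_jn(n, k * r_s, derivative=True) \
                / sph_hankel1(n, k * r_s, derivative=True)
        p += coeff * eval_legendre(n, np.cos(theta)) * sph_hankel1(n, k * r)
    return p0 * p

The series converges rapidly once n exceeds roughly k r_s, so a truncation order somewhat above the largest k r_s considered suffices.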
We compute the magnitude of the complex-valued total acoustic pressure |p_t| = (Re(p_t)^2 + Im(p_t)^2)^{1/2} at sample points located on the xy plane of the sphere surface. Results for control meshes (a) and (b) are shown in Figure <ref>, which illustrates convergence to a solution that is distinct from the analytical solution. In contrast, the results for control meshes (c) and (d) (Figure <ref>) illustrate convergence to the analytical solution and thus demonstrate the importance of controlling geometry error in the context of coupled structural-acoustic problems. The relatively small geometry error of 0.93% leads to sample point solution errors of the order of 20%, and therefore care must be taken to ensure that the limit surface provides an accurate representation of the required model geometry; further application of the Loop subdivision refinement algorithm will not overcome the inherent error induced by the incorrect geometry representation. In the case that the limit surface is an accurate representation of the sphere, our method converges to the analytical coupled solution and thus verifies our implementation. We remark that for scenarios in which an exact sphere representation is required there exist special subdivision schemes, but we do not consider these in the present study.

§.§.§ Medium frequency problems

We now consider the ability of the present method to handle medium frequency problems and determine the upper frequency limits of our Loop subdivision discretisation. It is well known that when traditional low-order discretisations are used for wave problems involving medium to high frequencies, high resolution meshes must be used in order to control the approximation error. A common rule of thumb is to use ten elements per wavelength, which can lead to extremely large systems of equations in the case of complex geometries and high frequencies; it is therefore desirable to use a higher order basis that reduces the required number of elements per wavelength and the size of the associated system of equations. We use the Loop subdivision discretisations generated through control meshes (c) and (d) shown in Section <ref> (see Table <ref> for details) and determine the upper frequency limit of each discretisation by applying a set of increasing normalised wavenumbers ka = 10, 30, 40, 50, 60, 80. Defining a set of sample points on the sphere surface aligned with the xy plane as 𝖲 = {s_1, s_2, …, s_{n_sample}}, we introduce the maximum pointwise error over the sample set as max_{𝐬∈𝖲} ||p_t^h(𝐬)| - |p_t(𝐬)|| / |||p_t(𝐬)|||_∞, where p_t^h and p_t represent the numerical and analytical total acoustic pressures respectively. Pointwise errors calculated through (<ref>) for the discretisations generated through control meshes (c) and (d) are tabulated in Table <ref> for each wavenumber. The approximate number of elements per wavelength is detailed for each case. Plots of |p_t| against the analytical solutions for ka = 30, 40 with control mesh (c) are shown in Figures <ref> and <ref> respectively (Figure <ref> illustrates results for ka = 10), and similar plots are shown for ka = 50, 60, 80 for control mesh (d) in Figures <ref>, <ref> and <ref> respectively. Figure <ref> illustrates the real part of the total acoustic pressure on the surface of the sphere for ka = 80.
An inspection of the total acoustic pressure profiles and maximum pointwise errors for control mesh (c) reveals the expected general trend of increasing errors for increasing wavenumbers. Figure <ref> indicates a discrepancy between the analytical and numerical potential magnitudes at θ = π, which is in fact caused by geometry error (further subdivision refinement of control mesh (c) indicates that the solution has converged). For ka = 40, large errors are encountered (∼8.5% maximum pointwise error) and the discretisation is deemed insufficiently fine. The more refined basis generated through control mesh (d) allows for much higher wavenumbers, and it is found that for wavenumbers up to ka = 60 low errors are obtained, with maximum pointwise errors less than 2.1%. For ka = 80 we find that the maximum wavenumber for control mesh (d) is reached, but remark that even for this relatively high wavenumber a maximum pointwise error of ∼5.5% is seen. Based on the present results we advise a conservative estimate of six elements per wavelength using a Loop subdivision discretisation to obtain maximum pointwise errors less than 5%. The guidance of six elements per wavelength is in agreement with the study of <cit.>.

§.§.§ Coupling effect study

To demonstrate the coupling effect created by elastic displacements of the shell structure, we conduct three studies using the sphere geometry with our Loop subdivision discretisation approach:

* Acoustically hard surface: we specify the surface as acoustically hard, which eliminates the coupling effect. The normal derivative of the acoustic potential is set to zero on the sphere surface (i.e. ∂p/∂n = 0 for all 𝐱∈Γ) and no displacement of the shell occurs.

* Elastic shell, h = 0.1 m: a fully coupled system is formed where shell vibrations result in a radiated acoustic pressure.

* Elastic shell, h = 0.05 m: the same study as for Item <ref> but with a reduced shell thickness.

We specify a low wavenumber of ka = 6, in keeping with the acoustic scattering study of <cit.>, and use the initial control mesh to generate a Loop subdivision discretisation with approximately thirteen elements per wavelength. Results for scenarios <ref>, <ref> and <ref> are illustrated in Figures <ref>, <ref> and <ref> respectively. A comparison of the far-field acoustic potential magnitude for the hard sphere case against the elastic cases (Figures <ref>, <ref> and <ref>) reveals the noticeable effect of the radiated acoustic pressure caused by shell vibrations. It is also clear that a decrease in shell thickness has a substantial effect on the scattered profile, with a region of low pressure created in the shadow region (see Figure <ref>) that is not observed in the other cases. In addition, an inspection of Figures <ref>, <ref>, <ref> and <ref> reveals that the maximum value of Re(p) shifts from the illuminated region to the shadow region when the shell thickness is reduced. We finally remark that, as expected, shell displacements increase when shell thickness is reduced (Figures <ref>, <ref>, <ref> and <ref>), with maximum values consistently obtained at a position opposite to the incident point (the first point of contact on the surface by the incident wave).

§.§ Submarine model

We now consider the ability of the present method to provide high-order discretisations of arbitrarily complex geometries. The first model we consider is a submarine model, with a control mesh illustrated in Figure <ref> and limit surface as shown in Figure <ref>.
The minimum bounding box for this model is defined by [x^min, x^max] × [y^min, y^max] × [z^min, z^max] = [-51.3, 41.0] × [-58.4, 17.8] × [-11.8, 11.8]. The model is symmetric about the xy plane. The material properties specified in Table <ref> are applied with a shell thickness of h = 0.5 m. We construct a forcing function through an incident plane wave with unit magnitude travelling in the negative x-direction and a normalised wavenumber of ka = 46.15. A comparison between profiles of the acoustic pressure magnitude is illustrated in Figure <ref> for both an acoustically hard surface and the elastic shell formulation; the effect of shell vibrations on the radiated acoustic pressure is apparent. We primarily use this example to illustrate the ability of our approach to handle large models of industrial relevance, making use of smooth geometry representations that are generated by CAD software.

§.§ Complex topology example

The final example we consider is that of a Loop subdivision surface with complex topology, as illustrated in Figure <ref>. Such topologies are often challenging for parametric surfaces based on tensor product formulations, but present no issues for subdivision surfaces. The material properties and shell thickness detailed in Table <ref> are prescribed, and a plane wave of unit magnitude travelling in the positive x direction is specified with a normalised wavenumber of ka = 3. Plots of acoustic pressure magnitude sampled over the model surface and the xy plane are illustrated in Figures <ref> and <ref> respectively, where a localised region of high acoustic pressure is created in the model interior. We remark that models with complex topology such as the present problem are often encountered in electromagnetic and acoustic scattering over metamaterial structures that exhibit non-intuitive scattered profiles, and we envisage that our approach will have key benefits for such applications, particularly when applied to topology and shape optimisation.

§ CONCLUSION

We have presented a novel BEM/FEM coupled method for structural acoustic analysis of shell geometries using Loop subdivision discretisations. Our approach utilises a collocation approach to generate a system of equations for the fluid domain using a boundary element formulation, and a traditional Galerkin finite element method with Kirchhoff-Love shell theory to discretise the structural domain. ℋ-matrices are employed to construct efficient low-rank approximations of dense matrices, allowing for boundary element models generated through Loop subdivision discretisations with over 10,000 vertices (40,000 degrees of freedom). We verify our method through a closed-form solution of acoustic scattering over an elastic spherical shell geometry and demonstrate solutions for normalised wavenumbers up to ka = 80. Finally, the ability to model arbitrarily complex geometries with smooth surfaces generated through the Loop subdivision scheme is demonstrated, thus highlighting the benefits of the present method for industrial design scenarios involving structural-acoustic interaction and optimisation of geometries with complex topologies.
http://arxiv.org/abs/1704.08491v2
{ "authors": [ "Zhaowei Liu", "Musabbir Majeed", "Fehmi Cirak", "Robert N. Simpson" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170427095010", "title": "Isogeometric FEM-BEM coupled structural-acoustic analysis of shells using subdivision surfaces" }
Instituto de Ciencias de Materiales de Madrid (CSIC), 28049, Madrid, Spain. [email protected] Institut de Radioastronomie Millimétrique, 38406, Saint Martin d'Hères, France. LERMA, Obs. de Paris, PSL Research University, CNRS, Sorbonne Universités, UPMC Univ. Paris 06, ENS, F-75005, France. Chalmers University of Technology, Onsala Space Observatory, 43992 Onsala, Sweden. OASU/LAB-UMR5804, CNRS, Université Bordeaux, 33615 Pessac, France. Observatorio Astronómico Nacional (IGN). Apartado 112, 28803 Alcalá de Henares, Spain.

We report high angular resolution (4.9”×3.0”) images of the reactive ions SH^+, HOC^+, and SO^+ toward the Orion Bar photodissociation region (PDR). We used ALMA-ACA to map several rotational lines at 0.8 mm, complemented with multi-line observations obtained with the IRAM 30 m telescope. The SH^+ and HOC^+ emission is restricted to a narrow layer (of 2”- to 10”-width, depending on the assumed PDR geometry) that follows the vibrationally excited H_2^* emission. Both ions efficiently form very close to the H/H_2 transition zone, at a depth of A_V≲1 mag into the neutral cloud, where abundant C^+, S^+, and H_2^* coexist. SO^+ peaks slightly deeper into the cloud. The observed ions have low rotational temperatures and narrow line-widths, a factor of ≃2 narrower than those of the lighter reactive ion CH^+. This is consistent with the higher reactivity and faster radiative pumping rates of CH^+ compared to the heavier ions, which are driven relatively faster toward smaller velocity dispersion by elastic collisions and toward lower T_rot by inelastic collisions. We estimate column densities and average physical conditions from an excitation model (n(H_2)≃10^5-10^6 cm^-3 and T_k≳200 K). Regardless of the excitation details, SH^+ and HOC^+ clearly trace the most exposed layers of the UV-irradiated molecular cloud surface, whereas SO^+ arises from slightly more shielded layers.

Spatially resolved images of reactive ions in the Orion Bar. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2012.1.00352.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. Includes IRAM 30 m telescope observations. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).

Javier R. Goicoechea1 Sara Cuadrado1 Jérôme Pety 2,3 Emeric Bron 1,3 John H. Black 4 José Cernicharo 1 Edwige Chapillon 2,5 Asunción Fuente 6 Maryvonne Gerin 3 Received 1 March 2017 / Accepted 21 April 2017

§ INTRODUCTION

Reactive ions are transient species for which the timescale of reactive collisions with H_2, H, or e^- (leading to a chemical reaction, and thus molecule destruction) is comparable to, or shorter than, that of inelastic collisions.
The formation of reactive ions such as CH^+ and SH^+ depends on the availability of C^+ and S^+ (i.e., of UV photons and thus high ionization fractions x_e = n(e^-)/n_H), and on the presence of excited H_2 (either UV-pumped or hot and thermally excited). This allows overcoming the high endothermicity (and sometimes energy barrier) of some of the key initiating chemical reactions <cit.>. The reaction C^+ + H_2(v) → CH^+ + H, for example, is endothermic if v=0, but exothermic and fast for v≥1 <cit.>. Despite their short lifetimes, reactive ions can be detected and used to probe energetic processes in irradiated, circumstellar <cit.>, interstellar <cit.>, or protostellar <cit.> gas. CO^+ and HOC^+ (the metastable isomer of HCO^+) have been detected in low angular resolution observations of clouds near massive stars <cit.>. They are predicted to form close to the H/H_2 transition zone, the dissociation front (DF), by high-temperature reactions of C^+ with OH and H_2O, respectively. HOC^+ also forms by the reaction CO^+ + H_2 → HOC^+ + H; thus, CO^+ and HOC^+ abundances are likely related <cit.>. In photodissociation regions (PDRs), SO^+ is predicted to form primarily via the reaction S^+ + OH → SO^+ + H <cit.>. These ion-neutral reactions leading to CO^+, HOC^+ and SO^+ are highly exothermic. Herschel allowed the detection of OH, CH^+, and SH^+ emission toward dense PDRs <cit.>. Unfortunately, the limited size of the space telescope did not permit us to resolve the ΔA_V≲1 mag extent of the DF (a few arcsec for the closest PDRs). Therefore, the true spatial distribution of the reactive ion emission is mostly unknown. Unlike CH^+, rotational lines of SH^+ can be observed from the ground <cit.>. Here we report the first interferometric images of SH^+, HOC^+, and SO^+. The Orion Bar is a dense PDR <cit.> illuminated by a far-UV (FUV) field of a few 10^4 times the mean interstellar radiation field. Because of its proximity <cit.> and nearly edge-on orientation, the Bar is a template to investigate the dynamics and chemistry in strongly irradiated gas <cit.>.

§ OBSERVATIONS AND DATA REDUCTION

The interferometric images were taken with the 7 m antennas of the Atacama Compact Array (ACA), Chile. The observations consisted of a 10-pointing mosaic; the total field-of-view (FoV) is ∼50”×50”. Target line frequencies lie in the ∼345-358 GHz range (Table <ref>.1). Lines were observed with correlators providing ∼500 kHz resolution over a 937.5 MHz bandwidth. The ALMA-ACA observation time was ∼6 h. In order to recover the extended emission filtered out by the interferometer, we used fully sampled single-dish maps as zero- and short-spacings. The required high-sensitivity maps were obtained using the ALMA total-power 12 m antennas (∼19” resolution). We used the GILDAS/MAPPING software to create the short-spacing visibilities not sampled by ALMA-ACA. These visibilities were merged with the interferometric observations <cit.>. The dirty image was deconvolved using the Högbom CLEAN algorithm. The resulting cubes were scaled from Jy/beam to brightness temperature scale using the synthesized beam size of 4.9”×3.0”. The achieved rms noise is ∼10-20 mK per 0.5 km s^-1 smoothed channel, with an absolute flux accuracy of ∼10%. The resulting images are shown in Fig. <ref>. In addition, we carried out pointed observations toward the DF with the IRAM 30 m telescope (Spain). The observed position lies roughly at the center of the ALMA-ACA field (see circles in Fig. <ref>).
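The Jy/beam-to-Kelvin scaling mentioned above is the standard Rayleigh-Jeans conversion for a Gaussian synthesized beam. A minimal sketch (our own helper, using the widely quoted closed form rather than any ALMA pipeline routine):

def mjy_beam_to_tb(s_mjy, nu_ghz, bmaj_arcsec, bmin_arcsec):
    # Rayleigh-Jeans brightness temperature (K) for a Gaussian beam:
    # T_B = 1.222e3 * S(mJy/beam) / (nu_GHz^2 * theta_maj * theta_min).
    return 1.222e3 * s_mjy / (nu_ghz**2 * bmaj_arcsec * bmin_arcsec)

# 4.9" x 3.0" beam at ~350 GHz: 100 mJy/beam corresponds to ~0.07 K, so the
# quoted ~10-20 mK rms maps to roughly 15-30 mJy/beam per channel.
print(mjy_beam_to_tb(100.0, 350.0, 4.9, 3.0))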
The pointed-observation position is at ΔRA = +3” and ΔDec = -3” from the position of <cit.>. These multi-line observations are part of a complete 80-360 GHz line survey at resolutions between ∼30” and ∼7” <cit.>.

§ RESULTS: MORPHOLOGY AND EMISSION PROPERTIES

In addition to the submillimeter (submm) emission images obtained with ALMA-ACA, panels b), c), and d) in Fig. <ref> show images of the DF traced by the vibrationally excited molecular hydrogen (H_2^*) line <cit.>, of the atomic PDR (hydrogen is predominantly in neutral atomic form) as traced by the Spitzer/IRAC 8 μm emission <cit.>, and of the ionization front (IF), the H/H^+ transition zone. The δ x axis shows the distance in arcsec to the IF. Thus, in each panel, the FUV-photon flux decreases from right to left. The peak of the optically thick line emission provides a good lower limit to the gas temperature in the molecular PDR (Fig. <ref>a). The images show that, except for SO^+, the emission from reactive ions starts very close to the DF, and globally follows that of H_2^*. On small scales (≲3”≈1000 AU), several emission peaks of these ions coincide with the brightest H_2^* peaks (e.g., at δ y ≃ -18”). Although the SH^+ and HOC^+ peaks at δ y ≃ +27” do not exactly match a H_2 v=1-0 peak, observations do show the presence of extended H_2 emission along the SH^+ and HOC^+ zone <cit.>. In fact, H_2^* emission from very high vibrational levels (up to v=10 or E/k≈50,000 K) has recently been reported <cit.>. To investigate the molecular emission stratification, in Fig. <ref> we show averaged emission cuts perpendicular to the Bar. The cuts demonstrate that the SH^+ and HOC^+ lines arise from a narrow emission layer (akin to a filament), with a half-power-width of Δ(δ x)≃10” (≃0.02 pc), that delineates the DF. The H^13CO^+ line displays this emission peak close to the DF, as well as another peak deeper inside the molecular cloud (at δ x≈30”) that is dominated by emission from the colder molecular cloud interior. The SO^+ line arises from these more shielded zones. These spatial emission trends are supported by the different line-widths (averaged over the ACA field of view, see Table <ref>.1). In particular, the SO^+ line is narrower (Δv = 1.8±0.1 km s^-1) than the H^13CO^+ (Δv = 2.1±0.1 km s^-1), HOC^+ (Δv = 2.7±0.1 km s^-1), and SH^+ (Δv = 2.8±0.1 km s^-1) lines that arise from the more FUV-irradiated gas near the molecular cloud edge. We first derived H^13CO^+, HOC^+, SO^+, and SO rotational temperatures (T_rot) and column densities (N) by building rotational population diagrams from our observations <cit.>. Results are shown in Tables <ref> and <ref>. H^13CO^+ and HOC^+ have high dipole moments (and SO^+ to a lesser extent). Hence, the observed submm lines have moderate critical densities (several 10^6 H_2 cm^-3). The low-J transitions toward the line survey position are subthermal (T_rot ≪ T_k). Their column densities are relatively small: from ∼10^11 cm^-2 (assuming uniform beam filling), to ∼10^12 cm^-2 (for a more realistic filamentary emission layer of ∼10” width and correcting for beam dilution). In addition, we estimated the average physical conditions that lead to the H^13CO^+, HOC^+, SO^+ and SH^+ emission toward the line survey position. We used a Monte Carlo model (Appendix <ref>) that includes inelastic collisions with H_2 and e^-, as well as radiative excitation by the far-IR dust radiation field in the region <cit.>.
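For reference, the rotational population diagram analysis used above reduces to a straight-line fit of ln(N_u/g_u) against E_u/k (e.g. Goldsmith & Langer 1999). A minimal Python sketch (our own, assuming optically thin lines and unit beam filling; the inputs are integrated intensities in K km s^-1, and all variable names are illustrative):

import numpy as np

H = 6.62607015e-27; KB = 1.380649e-16; C = 2.99792458e10   # CGS constants

def upper_level_column(w_k_kms, nu_hz, a_ul):
    # N_u (cm^-2) from an optically thin integrated intensity W = int T_mb dv.
    return 8.0 * np.pi * KB * nu_hz**2 * (w_k_kms * 1.0e5) / (H * C**3 * a_ul)

def fit_rotational_diagram(w, nu, a_ul, g_u, e_u_k):
    # Linear fit of ln(N_u/g_u) versus E_u/k: slope = -1/T_rot,
    # intercept = ln(N/Q), with Q the partition function at T_rot.
    y = np.log(upper_level_column(np.asarray(w), np.asarray(nu),
                                  np.asarray(a_ul)) / np.asarray(g_u))
    slope, intercept = np.polyfit(np.asarray(e_u_k), y, 1)
    return -1.0 / slope, intercept

The total column density then follows as N = Q(T_rot) exp(intercept), iterating once or twice so that Q is evaluated at the fitted T_rot.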
This excitation model allowed us to refine the source-averaged column density estimation for a filamentary emission layer (Table <ref>). The observed H^13CO^+, HOC^+, and SO^+ line intensities and T_rot are reproduced with n(H_2)≃10^5 cm^-3, n(e^-)≃10 cm^-3, and T_k≳200 K (thus consistent with previous determinations). However, with the set of assumed SH^+ collisional rates, fitting the SH^+ lines requires denser gas, ∼10^6 cm^-3 <cit.>. As T_k is expected to vary sharply along the DF <cit.>, the results of these single-component models should be taken as average conditions (over the ACA resolution). We note that at the distance to Orion, 4” is equivalent to 1 mag of visual extinction for the gas densities derived above. Low [HCO^+]/[HOC^+] abundance ratios (∼200-400) have previously been inferred from lower angular resolution (10” to 70”, depending on the line and telescope) pointed observations toward the Bar <cit.>. Such ratios are much lower than those predicted in FUV-shielded gas <cit.>. Given the similar A_ul coefficient and upper level energy (E_u/k) of the H^13CO^+ and HOC^+ transitions, their integrated intensity ratio is a good measure of the [HCO^+]/[HOC^+] isomeric ratio (lines are optically thin). The observed line ratio is equivalent to a roughly constant [HCO^+]/[HOC^+] ratio of 145±5 along the δ x = 12-22” layer (assuming [HCO^+]/[H^13CO^+] = 67). The ratio then increases as the FUV-photon flux decreases, with [HCO^+]/[HOC^+] = 180±4 along the δ x = 22-32” layer, until the signal vanishes deeper inside the molecular cloud.

§ DISCUSSION

ALMA HCO^+ 4-3 images at ∼1” resolution suggest the presence of high-pressure structures close to the DF <cit.>. Despite the different resolutions, several of these HCO^+ peaks coincide with the brightest H_2^* and OH 84 μm peaks (Figs. <ref>e and f). Thus, it is reasonable to assume that SH^+ arises from these structures. We used version 1.5.2 of the Meudon PDR code <cit.> to model the formation and destruction of reactive ions in a constant thermal-pressure slab of FUV-irradiated gas. Figure <ref>b shows abundance profiles (with respect to H nuclei) predicted by the high-pressure model. The FUV-photon flux is χ = 10^4 (in Draine units), the expected radiation field close to the DF. Compared to <cit.>, we have included more recent rates for reactions of S^+ with H_2(v) <cit.>. We adopt an undepleted sulfur abundance of [S] = 1.4×10^-5 <cit.>. The predicted abundance stratification in Fig. <ref>b qualitatively agrees with the observational intensity cut in Fig. <ref> (but recall that the spatial scales are not directly comparable, as for the studied range of thermal pressures in these 1D isobaric models, 10 mag of visual extinction corresponds to 1-10”). To be more specific, we compared the inferred column densities with those predicted by the PDR model (Table <ref>). Because the Orion Bar is not a perfect edge-on PDR, this comparison requires a knowledge of the tilt angle (α) with respect to a pure edge-on geometry, and of the line-of-sight cloud-depth (l_depth). Recent studies constrain α and l_depth, the latter to ≃0.28 pc <cit.>. For this geometry, the intrinsic width of the SH^+- and HOC^+-emitting layer would be narrower, from ≃10” (the observed value) to ≃2” (if the Bar is actually tilted). Given these uncertainties, the agreement between the range of observed and predicted columns is satisfactory for HOC^+, H^13CO^+, and SO^+ (Table <ref>). Although PDR models with lower pressures predict qualitatively similar stratification, and a reactive ion abundance peak also at A_V≲1 mag (where C^+, S^+, and H_2^* coexist), a model with ten times lower P_th underestimates N(HOC^+) and N(SO^+) by large factors (≳20).
This is related to the lower predicted OH column densities (by a factor of ∼25), key in the formation of CO^+, HOC^+ and SO^+. Interestingly, SO^+ peaks deeper inside the cloud, while in the PDR models the main SO^+ peak is close to that of SH^+. Compared to the other ions, SO^+ destruction near the DF is dominated by dissociative recombination and photodissociation (not by reactions with H_2 or H). Hence, [SO^+] depends on the [OH]/x_e ratio <cit.> and on the photodissociation rate. The SO^+ line peaks deeper inside the cloud than the OH ^2Π_3/2 line (Fig. <ref>f), suggesting that near the DF either n(e^-) is higher than in the model or, more likely, that the SO^+ photodissociation rate is larger. Indeed, SO^+ can be dissociated by lower-energy photons <cit.>. This process is not taken into account in the PDR model. For SH^+, we infer a column density that lies above the PDR model prediction by a factor that depends on α. Recall that the reaction S^+ + H_2(v) → SH^+ + H (endothermic when v=0) only becomes exothermic when v≥2, but remains slow even then <cit.>. The mismatch between model and observation, if relevant, may suggest an additional source of SH^+ that is not well captured by the model: overabundant H_2(v≥2) or temperature/pressure spikes due to the PDR dynamics <cit.>. Regardless of the excitation details, our observations show that detecting SH^+ and HOC^+ emission is an unambiguous indication of FUV-irradiated gas. Intriguingly, the SH^+ and HOC^+ line-widths are narrower than those of CH^+ <cit.>. The broader CH^+ lines in the Bar have been interpreted as a signature of the high reactivity of CH^+ <cit.>. In this view, the exothermicity (equivalent to an effective formation temperature of about 5360 K) of the dominant formation route, the reaction C^+ + H_2(v≥1) → CH^+ + H, goes into CH^+ excitation and motion. However, reactive collisions of CH^+ with H and H_2 are faster than elastic and rotationally inelastic collisions (see Appendices <ref> and <ref>). In particular, the CH^+ lifetime is so short (a few hours) that the molecule does not have time to thermalize, by elastic collisions, its translational motions to a velocity distribution at T_k <cit.>. Hence, the broad CH^+ lines would be related to the energy excess upon CH^+ formation (thousands of K) and not to the actual T_k <cit.>, nor to enhanced gas turbulence. Detailed models of the CH^+ excitation show that inclusion of formation and destruction rates in the level population determination affects the high-J levels <cit.>. Indeed, using the CH^+ line intensities measured by Herschel/PACS toward the line survey position <cit.>, we derive T_rot^PACS≃150 K. This is significantly warmer than the T_rot of SH^+, HOC^+ and SO^+. CH^+ is a light hydride <cit.>, meaning that its lowest rotational transitions lie at far-IR frequencies. Therefore, its critical densities are very high (at T_k=200 K). Such values are much higher than the gas density in the Orion Bar. Thus, without an excitation mechanism other than collisions, one would expect T_rot(CH^+) ≪ T_rot(HOC^+). However, CH^+ has a shorter lifetime than the other ions, and shorter than the timescale for collisional excitations. Hence, the higher T_rot^PACS(CH^+) compared to the heavier ions must be related to a formation, and perhaps radiative, pumping mechanism. In particular, the dust continuum emission is much stronger in the far-IR than in the (sub)mm (Fig. <ref>). As a consequence, the far-IR CH^+ rotational transitions have larger radiative pumping rates than the (sub)mm transitions of the heavier ions.
As an example, we derive radiative pumping rates B_luU for the far-IR CH^+ transitions that greatly exceed those of the (sub)mm transitions of the heavier ions (where B_lu is the stimulated absorption coefficient and U is the energy density produced by the dust emission and by the cosmic background; see Appendix <ref>). Therefore, CH^+ can be excited by radiation many times during its short lifetime (and during its mean-free-time for elastic collisions), so that it can remain kinetically hot (large velocity dispersion) and rotationally warm while it emits. On the other hand, the heavier ions are driven relatively faster toward smaller velocity dispersion by elastic collisions (narrower lines) and toward lower T_rot by inelastic collisions. For HOC^+, we estimate that several collisional excitations take place during its lifetime (several hours). This subtle but important difference likely explains the narrower lines and lower T_rot of the heavier reactive ions, as well as their slightly different spatial distribution compared to CH^+ (see Fig. <ref>e).

We thank A. Parikka for sharing her Herschel far-IR CH^+ and OH line maps. We thank the ERC for funding support, and the Spanish MINECO for support under grants AYA2012-32032, AYA2016-75066-C2-(1/2)-P and CSD2009-00038.

§ SPECTROSCOPIC PARAMETERS OF THE OBSERVED LINES

§ ROTATIONAL POPULATION DIAGRAMS AND COLUMN DENSITIES

§ CHEMICAL DESTRUCTION TIMESCALES

In this Appendix we use the rates of the chemical reactions (included in our PDR model) that lead to the destruction of reactive ions to compute their characteristic destruction timescales in the Orion Bar. For simplicity we adopt T_k = T_e = 300 K. The destruction timescales of CH^+ by reactive collisions with H_2 and H both scale inversely with the gas density and with the relevant gas fraction. In these relations, f_H_2 = 2n(H_2)/[n(H)+2n(H_2)] is the molecular gas fraction (<1 close to the DF of PDRs). The CH^+ destruction timescale by dissociative recombination is slower. On the other hand, SH^+ reacts only slowly with H_2 (the reaction is very endothermic); its destruction is instead dominated by reactive collisions with H and by dissociative recombination. For HOC^+, destruction is dominated by reactions with H_2 and by dissociative recombination. The SO^+ lifetime is longer. The reaction of SO^+ with H is very endothermic. Close to the DF, SO^+ destruction is dominated by dissociative recombination and by photodissociation (for χ = 10^4). In our PDR models we assume a SO^+ photodissociation rate that is ≃1000, ≃10, ≃5, and ≃4 times faster than the adopted κ_HCO^+(ph), κ_CH^+(ph), κ_OH(ph), and κ_SH^+(ph) photodissociation rates, respectively (at A_V = 1). A summary of the relevant destruction timescales is presented in Table <ref>.

§ COLLISIONAL AND RADIATIVE EXCITATION OF H^13CO^+, HOC^+, SH^+, AND SO^+

To estimate the physical conditions of the reactive-ion-emitting gas, we solved the statistical equilibrium equations and radiative transfer using a non-local-thermodynamic-equilibrium code <cit.>. In PDRs, the high electron density (up to n(e^-) ≃ x(C^+) n_H ≃ 10^-4 n_H for standard cosmic ray ionization rates) plays an important role in the collisional excitation of molecular ions and competes with collisions with H_2 and H. This is because the associated collisional excitation rate coefficients γ_lu(e) (cm^3 s^-1) can be large, up to 10^4 times γ_lu(H_2), and thus the H_2 and e^- collisional rates (s^-1) become comparable if the ionization fraction n(e^-)/n_H is high. Here we used published (or estimated by us) inelastic collisional rate coefficients[– For H^13CO^+ and HOC^+ (^1Σ^- ground electronic states) we used HCO^+-H_2 de-excitation rates of <cit.>, and specific electron-impact de-excitation rates <cit.>. We computed the respective collisional excitation rates through detailed balance.
– For SH^+ (^3Σ^-) there are no published collisional rate coefficients. For SH^+-H_2 we simply scaled radiative rates (J. Black, priv. comm.) over the relevant temperature range. For SH^+-e^- collisions we used rate coefficients calculated in the Coulomb-Born approximation for the 10-1000 K temperature range (J. Black, priv. comm.). – For SO^+ (^2Π) there are no published collisional rates. Here we modeled rotational levels in the ^2Π_1/2 ladder only (the ^2Π_3/2 ladder lies 525 K above the ground, much higher than the T_rot(SO^+) inferred from our observations) and neglected Λ-doubling transitions. For SO^+-H_2, we used CS-He rates of <cit.> (CS has a similar molecular weight and dipole moment: 44 and 2.0 D, compared to SO^+: 48 and 2.3 D; this is the adopted SO^+ dipole moment, after Turner 1996). Rate coefficients were multiplied by 2 to account for the ionic character of SO^+. For SO^+-e^-, we used rate coefficients calculated in the Coulomb-Born approximation for the 10-1000 K range (J. Black, priv. comm.).], and we adopted n(e^-) = 10 cm^-3 and a range of H_2 densities in the models, the expected range of densities in the region. In addition to inelastic collisions, we included radiative excitation by absorption of the 2.7 K cosmic background and by the (external) dust radiation field in the region. The latter is modelled as a modified blackbody emission with an effective dust temperature of T_d = 50 K, a spectral emissivity index of β = 1.6, and a dust opacity (τ_d) of ≃0.03 at a reference wavelength of λ = 160 μm. The resulting continuum levels (Fig. <ref>) reproduce the far-IR photometric measurements toward the Orion Bar; we refer to <cit.>. These authors estimated T_d≃50 K inside the Bar and T_d≃70 K immediately in front. Our calculations include thermal, turbulent, and line opacity broadening. The non-thermal velocity dispersion that reproduces the observed line-widths is σ≃1 km s^-1 (with full width at half maximum of 2.355·σ). By varying N, n_H and T_k, we tried to reproduce the H^13CO^+, HOC^+, SH^+, and SO^+ line intensities observed by ALMA-ACA toward the line survey position as well as the intensities of the other rotational lines detected with the IRAM 30 m telescope (correcting them by a dilution factor that takes into account the beam size at each frequency and an intrinsic Gaussian filament shape of the emission). Our best models fit the intensities to within a factor of 2. They also reproduce (±3 K) the T_rot inferred from the rotational population diagrams for a filamentary source (see Table <ref>). For H^13CO^+, HOC^+, and SO^+, we obtain n(H_2)≃10^5 cm^-3, n(e^-) = 10 cm^-3, and T_k≳200 K. These conditions agree with those inferred by <cit.> for the H^12CO^+ rotationally excited emission (J=6-5 to 11-10) observed by Herschel/HIFI. For SH^+, we reproduce the line intensities observed by ALMA-ACA at ∼345 GHz, and by Herschel/HIFI at ∼526 GHz <cit.> (significantly diluted in the large HIFI beam according to our ALMA-ACA images), if the gas is an order of magnitude denser, ∼10^6 cm^-3 (similar to <cit.>, as they used the same estimated collisional rates).

§.§ Radiative pumping rates (CH^+ vs. HOC^+)

To support our interpretation, here we compare the collisional and radiative pumping rates of CH^+ and HOC^+ rotational lines with their chemical destruction timescales. The inelastic collisional excitation rate (C_lu) is given by

C_lu = n(H_2) γ_lu(H_2) + n(e^-) γ_lu(e^-)   [s^-1],

where we assume that H_2 and e^- are the only collisional partners.
The upward excitation collisional coefficients γ_lu (cm^3 s^-1) are computed from the de-excitation coefficients by detailed balance:

γ_lu = γ_ul (g_u/g_l) e^(-T^*/T_k)   [cm^3 s^-1],

where T^* = hν/k is the equivalent temperature at the frequency ν of the transition. The continuum energy density in the cloud at a given frequency is

U^Dust+CMB = β [U(T_d) + U(T_cmb)],

where U(T_d) is the contribution from the external dust radiation field, and U(T_cmb = 2.73 K) is the cosmic background; β is the photon escape probability, which tends to 1 as the line opacity tends to 0. The radiative pumping rate can be written as

B_lu U^Dust+CMB = β A_ul [ (1 - e^(-τ_d))/(e^(T^*/T_d) - 1) + 1/(e^(T^*/T_cmb) - 1) ]   [s^-1],

where B_lu and A_ul are the Einstein coefficients for stimulated absorption and for spontaneous emission, respectively. As an example of the excitation differences between CH^+ and the heavier reactive ions, we note that the CH^+ 4-3 line lies at a far-IR wavelength (λ≃90 μm) where the intensity of the continuum emission toward the Orion Bar is ≳200 times stronger than that at the submm HOC^+ 4-3 line wavelength (λ≃837 μm; see Fig. <ref>). We adopt n(H_2) = 10^5 cm^-3, n(e^-) = 10 cm^-3, and our model of the continuum emission (Fig. <ref>). With these parameters, using the appropriate collisional rate coefficients <cit.>, and in the β = 1 limit, we compute the following collisional and radiative pumping rates:

C_01(CH^+) = 3.7 × 10^-5 s^-1 (τ ≃ 7.6 h),
B_01 U^Dust+CMB(CH^+) = 1.3 × 10^-4 s^-1 (τ ≃ 2.2 h),
C_34(CH^+) = 5.3 × 10^-6 s^-1 (τ ≃ 52.7 h),
B_34 U^Dust+CMB(CH^+) = 2.1 × 10^-3 s^-1 (τ ≃ 0.1 h),

for CH^+, and

C_01(HOC^+) = 1.2 × 10^-4 s^-1 (τ ≃ 2.3 h),
B_01 U^Dust+CMB(HOC^+) = 1.7 × 10^-5 s^-1 (τ ≃ 16.4 h),
C_34(HOC^+) = 7.9 × 10^-5 s^-1 (τ ≃ 3.5 h),
B_34 U^Dust+CMB(HOC^+) = 1.6 × 10^-5 s^-1 (τ ≃ 16.9 h),

for HOC^+. The quantities in parentheses (τ) represent the corresponding timescale for each excitation process. In the case of reactive molecular ions, these rates compete with the chemical destruction rates. Adopting a gas density of 10^5 cm^-3, f_H_2 = 0.5 and x_e = 10^-4 for the Orion Bar, the total CH^+ and HOC^+ chemical destruction timescales are of the order of a few hours and several hours, respectively (see Appendix <ref>). Comparing τ_D(CH^+) with the timescales for collisional and radiative excitation shows that CH^+ molecules are excited by radiation many times during their short lifetime, but not by collisions. Hence, CH^+ can remain rotationally warm while it emits. Interestingly, the far-IR CH^+ lines observed by PACS <cit.> follow roughly the same functional shape as the warm dust continuum emission (with T_d≃50-70 K; Fig. <ref>). Because for CH^+ radiative processes are faster than collisional processes (Table D.2), CH^+ molecules might have equilibrated with the dust radiation field they absorb. However, <cit.> conclude that the high-J CH^+ lines in the Bar are mostly driven by formation pumping, thus naturally producing warm rotational temperatures. On the other hand, comparing τ_D(HOC^+) with the representative timescales for collisional and radiative excitation shows that HOC^+ molecules (and likely the other heavier ions as well) are excited by collisions several times during their lifetime. Inelastic collisions thus can drive their rotational populations to lower T_rot relatively fast.
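The rate comparison above is simple to reproduce. The sketch below (our own Python, not the code used in the paper) implements the detailed-balance, collisional, and radiative pumping expressions of this Appendix, together with a generic destruction timescale; the Einstein coefficients, frequencies, degeneracies, and the ion-neutral rate coefficient in the example are approximate values that we supply purely for illustration (precise numbers should be taken from the CDMS/JPL catalogues). Note that we include a g_u/g_l degeneracy factor when forming the upward (B_lu) pumping rate.

import numpy as np

H_CGS = 6.62607015e-27    # erg s
K_CGS = 1.380649e-16      # erg / K
C_CGS = 2.99792458e10     # cm / s

def gamma_up(gamma_down, g_u, g_l, t_star, t_kin):
    # Upward collisional coefficient from detailed balance (cm^3 s^-1).
    return gamma_down * (g_u / g_l) * np.exp(-t_star / t_kin)

def c_lu(n_h2, gamma_h2, n_e, gamma_e):
    # Total inelastic collisional excitation rate C_lu (s^-1).
    return n_h2 * gamma_h2 + n_e * gamma_e

def tau_dust(nu_hz, tau_ref=0.03, lam_ref_um=160.0, beta_dust=1.6):
    # Dust opacity extrapolated from the modified-blackbody continuum model.
    lam_um = C_CGS / nu_hz * 1.0e4
    return tau_ref * (lam_ref_um / lam_um) ** beta_dust

def pumping_rate(a_ul, nu_hz, g_u, g_l, t_d=50.0, t_cmb=2.73, beta=1.0):
    # Radiative pumping rate B_lu * U^(Dust+CMB) (s^-1) in the beta = 1 limit.
    t_star = H_CGS * nu_hz / K_CGS
    dust = (1.0 - np.exp(-tau_dust(nu_hz))) / np.expm1(t_star / t_d)
    cmb = 1.0 / np.expm1(t_star / t_cmb)
    return beta * (g_u / g_l) * a_ul * (dust + cmb)

def destruction_timescale(rate_coeff, n_partner):
    # tau_D = 1 / (k n): lifetime against a single destruction channel (s).
    return 1.0 / (rate_coeff * n_partner)

# Lowest-J pumping rates (approximate catalogue values, illustration only):
r_chp = pumping_rate(a_ul=6.4e-3, nu_hz=835.1e9, g_u=3, g_l=1)   # CH+ 1-0
r_hoc = pumping_rate(a_ul=2.1e-5, nu_hz=89.5e9, g_u=3, g_l=1)    # HOC+ 1-0
print(f"B01*U(CH+) ~ {r_chp:.1e} s^-1, B01*U(HOC+) ~ {r_hoc:.1e} s^-1")
# A generic ion-neutral rate coefficient of ~1e-9 cm^3 s^-1 (placeholder)
# and a partner density of 5e4 cm^-3 give tau_D ~ 6 h.
print(destruction_timescale(1.0e-9, 5.0e4) / 3600.0, "h")

With these inputs the two print statements return values close to the B_01U^Dust+CMB rates and the few-hour destruction timescales quoted above, which serves as a useful consistency check on the expressions.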
http://arxiv.org/abs/1704.08621v1
{ "authors": [ "Javier R. Goicoechea", "S. Cuadrado", "J. Pety", "E. Bron", "J. H. Black", "J. Cernicharo", "E. Chapillon", "A. Fuente", "M. Gerin" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170427152702", "title": "Spatially resolved images of reactive ions in the Orion Bar" }
1Jeremiah Horrocks Institute, University of Central Lancashire, Preston PR1 2HE, United Kingdom 2Centre de recherche en astrophysique du Québec & département de physique, Université de Montréal, C.P. 6128, Succ. Centre-ville, Montréal, QC, H3C 3J7, Canada 3Tokushima University, Minami Jousanajima-machi 1-1, Tokushima 770-8502, Japan 4Institute of Liberal Arts and Sciences Tokushima University, Minami Jousanajima-machi 1-1, Tokushima 770-8502, Japan 5Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea 6Korea University of Science and Technology, 217 Gajang-ro, Yuseong-gu, Daejeon 34113, Republic of Korea 7Institute of Astronomy and Department of Physics, National Tsing Hua University, Hsinchu 30013, Taiwan 8Academia Sinica Institute of Astronomy and Astrophysics, P. O. Box 23-141, Taipei 10617, Taiwan 9School of Astronomy and Space Science, Nanjing University, 163 Xianlin Avenue, Nanjing 210023, China 10Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, China 11East Asian Observatory, 660 N. A`ohōkū Place, University Park, Hilo, Hawaii 96720, USA 12NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Rd, Victoria, BC, V9E 2E7, Canada 13Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 1A1, Canada 14Department of Physics and Astronomy, The University of Manitoba, Winnipeg, Manitoba R3T2N2, Canada 15School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, United Kingdom 16Department of Physics and Astronomy, The University of Western Ontario, 1151 Richmond Street, London, N6A 3K7, Canada 17Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan 18National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100012, China 19Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany 20Kagoshima University, 1-21-35 Korimoto, Kagoshima, Kagoshima 890-0065, Japan 21Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA 22School of Physics and Astronomy, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK 23The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan 24Department of Astronomy, Graduate School of Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan 25Institute of Astronomy, National Central University, Chung-Li 32054, Taiwan 26Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon 34134, Republic of Korea 27School of Physics, Astronomy & Mathematics, University of Hertfordshire, College Lane, Hatfield, Hertfordshire, AL10 9AB, United Kingdom 28The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan 29Dunlap Institute for Astronomy & Astrophysics, University of Toronto, Toronto, Ontario, Canada, M5S 3H4 30Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom 31Department of Physics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong 32National Astronomical Observatory, National Institutes of Natural Sciences, Osawa, Mitaka, Tokyo 181-8588, Japan 33Physics and Astronomy, University of Exeter, Stocker Road, Exeter, EX4 4QL, United Kingdom 34National Astronomical Observatory of Japan, 650 N. 
A`ohōkū Place, Hilo, HI 96720, USA 35UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ 36Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ 37Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan 38Department of Physics, Graduate School of Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan 39Department of Environmental Systems Science, Doshisha University, Tatara, Miyakodani 1-3, Kyotanabe, Kyoto 610-0394, Japan 40Hiroshima Astrophysical Science Center, Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima, Hiroshima 739-8526, Japan 41Department of Physics, Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima, Hiroshima 739-8526, Japan 42Core Research for Energetic Universe (CORE-U), Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima, Hiroshima 739-8526, Japan 43Department of Earth Science Education, Kongju National University, 56 Gongjudaehak-ro, Gongju-si 32588, Republic of Korea 44Department of Physics, Institute for Astrophysics, Chungbuk National University, Korea 45Department of Physics and Atmospheric Science, Dalhousie University, Halifax, B3H 4R2, Canada 46School of Space Research, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea 47Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150 Science 1-Street, Urumqi 830011, Xinjiang, China 48Kagawa University, Saiwai-cho 1-1, Takamatsu, Kagawa, 760-8522, Japan 49Faculty of Education, Kagawa University, Saiwai-cho 1-1, Takamatsu, Kagawa, 760-8522, Japan 50Division of Theoretical Astronomy, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan 51SOKENDAI (The Graduate University for Advanced Studies), Hayama, Kanagawa 240-0193, Japan 52Japan Aerospace Exploration Agency, 3-1-1, Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan 53Astrophysics Group, Cavendish Laboratory, J J Thomson Avenue, Cambridge, CB3 0HE, United Kingdom 54Kavli Institute for Cosmology, Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom 55OSL, Physics & Astronomy Dept., University College London, WC1E 6BT, London, UK 56Astrobiology Center of NINS, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan 57National Astronomical Observatory of Japan, NINS, 2-21-1, Osawa, Mitaka, Tokyo, 181-8588, Japan 58Purple Mountain Observatory, Chinese Academy of Sciences, 2 West Beijing Road, 210008 Nanjing, PR China 59European Southern Observatory (ESO), Karl-Schwarzschild-Str. 2, D-85748 Garching, Germany 60Laboratoire AIM CEA/DSM-CNRS-Université Paris Diderot, IRFU/Service d’Astrophysique, CEA Saclay, F-91191 Gif-sur-Yvette, France 61Jet Propulsion Laboratory, M/S 169-506, 4800 Oak Grove Drive, Pasadena, CA 91109 62Department of Applied Mathematics, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK 63RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, [email protected] We present the first results from the B-fields In STar-forming Region Observations (BISTRO) survey, using the Sub-millimetre Common-User Bolometer Array 2 (SCUBA-2) camera, with its associated polarimeter (POL-2), on the James Clerk Maxwell Telescope (JCMT) in Hawaii. We discuss the survey's aims and objectives. We describe the rationale behind the survey, and the questions which the survey will aim to answer.
The most important of these is the role of magnetic fields in the star formation process on the scale of individual filaments and cores in dense regions. We describe the data acquisition and reduction processes for POL-2, demonstrating both repeatability and consistency with previous data. We present a first-look analysis of the first results from the BISTRO survey in the OMC 1 region. We see that the magnetic field lies approximately perpendicular to the famous `integral filament' in the densest regions of that filament. Furthermore, we see an `hour-glass' magnetic field morphology extending beyond the densest region of the integral filament into the less-dense surrounding material, and discuss possible causes for this. We also discuss the more complex morphology seen along the Orion Bar region. We examine the morphology of the field along the lower-density north-eastern filament. We find consistency with previous theoretical models that predict magnetic fields lying parallel to low-density, non-self-gravitating filaments, and perpendicular to higher-density, self-gravitating filaments.

§ INTRODUCTION

Our knowledge of the star formation process has increased dramatically due to the advent of satellites such as Spitzer and Herschel, and sensitive far-infrared and submillimeter detector arrays such as SCUBA-2. Following on from the highly successful first-generation JCMT Legacy Surveys, including the Gould Belt Legacy Survey (GBLS; e.g. Kirk H. et al. 2016), the JCMT is currently undertaking a series of second-generation surveys, using the latest instruments to be commissioned on the telescope. These include POL-2, an imaging polarimeter for SCUBA-2. One of the surveys using POL-2 is the B-fields In STar-forming Region Observations (BISTRO) Survey that we report here. This is extremely timely because magnetic fields (hereafter referred to as B-fields) are still not well understood in star formation, due to a paucity of observational evidence, despite widespread theoretical recognition of the significance of B-fields in the formation of cores and the evolution of proto-stars.

§.§ Observing magnetic fields

The submillimeter continuum emission from dust grains is polarised because the grains tend towards alignment perpendicular to B-field lines. For asymmetric particles with some ability to be magnetized, a series of relaxation processes brings the grains towards their lowest energy rotation state. This is with the longest axis perpendicular to the field <cit.>. Hence, with material along this axis contributing more to the total far-infrared/submillimeter grain emission, linear polarization is seen perpendicular to the field. In the grain alignment process, the radiative torque that spins up irregularly shaped grains is thought to play the most significant role. A few percent polarization is detected astronomically, on scales from proto-stars and jets, up to giant molecular clouds. In some completely symmetric geometries the field lines cancel out so that there is a polarization null. Nevertheless, submillimeter continuum polarization surveys represent a powerful technique for tracing the plane-of-sky B-field orientation. The fractional polarization from dust yields no direct estimate of the B-field strength, since it is dependent on several additional unknowns (e.g., efficiency of grain alignment, grain shape, and composition).
However, a measure of the field strength can be derived from the commonly used Chandrasekhar-Fermi (C-F) method <cit.>, and modern variants thereof, using the dispersion in polarization half-vectors (where high dispersion indicates a highly turbulent velocity field and a weak mean B-field component; `half-vector' refers to the ±180 degree ambiguity in B-field direction), the line widths estimated from spectroscopic data, and the density from the SCUBA-2 flux densities (e.g. Kirk J. et al. 2006). Simulations show that this estimate can be corrected for a statistical ensemble of objects to yield realistic estimates of the field strength. In addition, the effects of multiple eddies along the line of sight have been studied by <cit.>. B-field geometries are generally inferred by preferential emission or absorption by dust or molecules, creating polarized light (e.g. Houde et al. 2013). Polarization measurements with molecules require bright lines and are generally restricted to very dense, small-scale structures. Near-infrared absorption polarimetry requires a large sample of background stars and is generally limited to lower-density, more diffuse cloud material. B-field strengths are typically measured using Zeeman splitting of paramagnetic molecules. While detections of Zeeman splitting in the high-density tracer CN have been made towards extremely bright sources, Zeeman splitting measurements are typically restricted to lower-density regions of molecular clouds, where the OH molecule is relatively highly abundant. In contrast, polarized far-infrared and submillimeter thermal dust emission can trace dense structures on both cloud scales and core scales. The Planck satellite has generated an all-sky submillimeter polarization map <cit.>, allowing us to trace the large-scale B-field over the entire sky. However, it is at too low a resolution (∼4 arcmin at 857 GHz) to study the detailed cloud geometries in star-forming regions on the necessary scale of prestellar cores and proto-stars. At somewhat better resolution (30 arcsec at 250 μm), the BLASTPol balloon-borne polarimeter has mapped a limited number of star-forming regions in great detail.

§.§ Theoretical models

The theoretical role played by B-fields in star formation has been much discussed. However, systematic surveys to measure B-fields in star-forming regions at the necessary resolution scales have proved problematic (see recent reviews). POL-2 with SCUBA-2 on JCMT is a facility that can map the B-field within cold dense cores and filaments on scales of ∼1000-2000 AU in nearby star-forming regions, such as those in the Gould Belt. As such, it can link the B-field measured on arcminute scales by Planck <cit.> and BLASTPol with measurements made on arcsecond scales by interferometers such as the Submillimeter Array (SMA), the Combined Array for Research in Millimeter-wave Astronomy (CARMA; e.g. Hull et al. 2014), and the Atacama Large Millimeter/submillimeter Array (ALMA). This intermediate size scale is crucial to testing theoretical models of star formation. As a result of observations made by the Herschel satellite, it is now widely believed that most low-mass stars form according to the so-called filamentary star formation model <cit.>. This model has been debated for some time. However, Herschel has shown that this appears to be the dominant star-forming mechanism for solar-type stars <cit.>.
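For a concrete sense of the numbers involved in the C-F method, the following sketch (our own Python; the parameter values are illustrative, not survey results) evaluates the basic relation B_pos = Q √(4πρ) σ_v/σ_θ, with the commonly adopted correction factor Q ≈ 0.5 and ρ = μ m_H n(H_2) for a mean molecular weight μ ≈ 2.8 per hydrogen molecule:

import numpy as np

M_H = 1.6735575e-24  # g, mass of the hydrogen atom

def cf_field_strength(n_h2_cm3, sigma_v_kms, sigma_theta_deg, Q=0.5, mu=2.8):
    # Plane-of-sky B-field strength (Gauss) from the Chandrasekhar-Fermi
    # relation B_pos = Q * sqrt(4*pi*rho) * sigma_v / sigma_theta.
    rho = mu * M_H * n_h2_cm3                  # mass density, g cm^-3
    sigma_v = sigma_v_kms * 1.0e5              # velocity dispersion, cm s^-1
    sigma_theta = np.radians(sigma_theta_deg)  # angular dispersion, radians
    return Q * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta

# Illustrative values: n(H2) = 1e5 cm^-3, a 1 km/s velocity dispersion and a
# 10-degree dispersion in position angle give B_pos of order a milligauss.
print(cf_field_strength(1.0e5, 1.0, 10.0) * 1.0e3, "mG")

The basic relation breaks down when the angular dispersion becomes large, which is one motivation for the modern variants referred to above.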
In this filamentary star formation scenario a cloud first breaks up into filaments, and material flows onto the filaments along striations, or sub-filaments. A similar picture of the movement of material along filaments was previously observed and inferred from a combination of spectroscopic data and simulations. However, this was just one region. Herschel appears to show the same mechanism in many star-forming regions. In this model the B-field aligns with the striations (i.e. perpendicular to the filaments), and helps to `funnel' matter onto the filaments. This observationally-informed paradigm has been reproduced by recent simulations of magnetized self-gravitating filaments (e.g. Inoue et al. 2009, 2012). Cores then form on filaments, becoming gravitationally unstable and subsequently collapsing to form protostars <cit.>. We know from large-scale polarization studies, e.g. Planck and BLASTPol amongst others (see above), that large-scale fields typically lie roughly perpendicular to their associated filament direction, but we do not know what happens to the field within the dense gas of the filaments themselves, nor what happens within the cores that form in the filaments (cf. BLASTPol). This is crucial to understanding the physical processes taking place, and to discriminating between the models of the star formation process which properly incorporate B-fields. The current hypotheses are that the field may wrap around the filament in a helical manner; turn to run parallel to the filament in the densest gas (e.g. in purely poloidal field models); or take on a pinched morphology perpendicular to the long axis of the filament, similar to that produced in initially magnetically-supported cores in the classical ambipolar-diffusion paradigm. Theoretical studies have shown that both B-fields and turbulence can significantly affect how dense structures form, collapse, and evolve in the interstellar medium (ISM). For example, one paradigm of low-mass star formation suggests that collapse is guided by B-fields, producing flattened cores and disks. This collapse (and subsequent proto-star formation) can drag and twist the field lines, amplifying the local field strength during the early stages of protostellar evolution. These twisted lines can then have significant consequences for the emerging protostellar outflows, disks, frequency of binarity, and stellar masses. In fact, there is a debate over the relative importance of B-fields and turbulence in regulating the star formation process. The POL-2 observations, combined with our existing kinematics from HARP-B, will allow for an investigation into the balance between gravity, turbulent support, and B-fields, over a statistically meaningful number of star-forming cores in a number of regions across the Gould Belt. Once protostars have formed, there is also a debate about the role that the B-field plays in shaping protostellar evolution, and its effect on bipolar outflows. For example, recent studies of the correlation of B-field direction with outflows, using CARMA polarization observations, found no correlation between outflow and field directions on scales below 1000 AU <cit.>. In contrast, a large-scale correlation between outflow and field directions has been found on scales of ∼10,000 AU and above <cit.>.
One explanation of this apparent conflict in the field morphology uses detailed modelling of toroidally-wrapped B-fields at the centres of clouds <cit.>. This has been used to explain early disk formation in Class 0 proto-stars in a recent model in which early disks are hypothesized to preferentially form in fields misaligned with the outflow directions <cit.>. POL-2 data are crucial to filling in the missing information on intermediate scales between ∼1000 and ∼10,000 AU. The BISTRO survey aims to address this and all of the other questions discussed above. Previously, only a few prestellar and protostellar cores have had their B-fields mapped (e.g. Kirk J. et al. 2006). BISTRO will map hundreds. In this paper we describe the plan for the BISTRO survey and discuss the first results taken on OMC 1.

§ AIMS AND OBJECTIVES OF THE SURVEY

Previous surveys have either been piecemeal, been very restricted in sample size (e.g. Hull et al. 2014), or have had too poor a resolution to detect cores and proto-stars. We here describe a project that aims to produce a large and unbiased survey of the B-fields in star-forming molecular material in the solar vicinity, simultaneously at 850 and 450 μm, and at relatively high resolution – 14.1 and 9.6 arcsec respectively <cit.>, or ∼1000-2000 AU at a typical Gould Belt cloud distance. The BISTRO Survey is a large-scale survey of the Gould Belt clouds that we have previously mapped in continuum and spectral lines at JCMT, and in the far-infrared with Herschel <cit.>. The aims of the project are: to obtain maps of polarization position angle and fractional polarization in a statistically meaningful sample of cores in numerous regions; to characterize the evidence for and relevance of the B-field and turbulence (in conjunction with previous and follow-up spectroscopic line observations) in cores and their surrounding environments; to test the predictions of low-mass star formation theories (core, filament, outflow, field geometry), and grain alignment theories; to generate a large sample of objects that are suitable for follow-up with other instruments, such as ALMA, Nobeyama, SMA and NOEMA (NOrthern Extended Millimeter Array); and to measure the B-field strength using the C-F method in as many clouds as possible within our sample. The survey was granted an initial allocation of 224 hours of telescope time to observe 16 fields in 7 different Gould Belt clouds (Auriga, IC5146, Ophiuchus, Orion, Perseus, Serpens and Taurus). The specific fields were chosen to match those previously mapped by SCUBA-2, HARP and Herschel in the JCMT and Herschel Gould Belt Surveys.

§ OBSERVATIONS

SCUBA-2 is an innovative 10,000-pixel submillimeter camera <cit.> that has revolutionized submillimeter astronomy in terms of its ability to carry out wide-field surveys to previously unprecedented depths. SCUBA-2 uses transition-edge superconducting (TES) bolometer arrays, which come complete with in-focal-plane superconducting quantum interference device (SQUID) amplifiers and multiplexed readouts, and are cooled to 100 mK by a liquid-cryogen-free dilution refrigerator <cit.>. It has two arrays, which operate simultaneously, one with filters centered at 850 μm and one at 450 μm. In this paper we discuss 850-μm data only. The polarimeter POL-2 (Bastien et al.
2005a, 2005b; 2017) has an achromatic, continuously-rotating half-wave plate in order to modulate the signal at a faster rate (2 Hz) than atmospheric transparency fluctuations. Such a modulation significantly improves the reliability and accuracy of submillimeter polarimetric measurements. The signal is analyzed by a wire-grid polarizer. For calibration, a removable polarizer is also available. Figure 1 shows a schematic of a rotating half-wave plate polarimeter, such as the POL-2 instrument. POL-2 has three optical components, which are (in the order that the radiation encounters them): the calibration polarizer (not shown in Figure 1), the rotating half-wave plate, and the polarizer. The components are mounted in a box fixed in front of the entrance window of the main cryostat of SCUBA-2. All components are mounted so that they can be taken in and out of the beam remotely, making it very easy and fast to start polarimetry at the telescope (Bastien et al. 2005a, 2005b; 2017). The BISTRO time was allocated to take place during Band 2 weather (0.05 < τ_225 GHz < 0.08), which is typical of moderately good weather conditions on Mauna Kea. The first data were taken with POL-2 on SCUBA-2 on 2016 January 11. The POL-2 polarimeter fully samples 12-arcmin diameter circular regions at a resolution of 14.1 arcsec in a version of the SCUBA-2 DAISY mapping mode <cit.> optimised for POL-2 observations <cit.>. The POL-2 DAISY scan pattern produces a central 3-arcmin diameter region of approximately even, high signal-to-noise-ratio coverage, with noise increasing to the edge of the map. The POL-2 DAISY scan pattern has a scan speed of 8 arcsec/sec, with a half-wave plate rotation speed of 2 Hz <cit.>. Continuum observations are simultaneously taken at 450 μm with a resolution of 9.6 arcsec, but as the 450-μm observing mode has not yet been fully commissioned, we do not use these data in this paper. The data were reduced in a two-stage process. The raw bolometer timestreams were first converted to separate Stokes Q and Stokes U timestreams using the process calcqu in smurf <cit.>. The Q and U timestreams were then reduced separately using an iterative map-making technique, makemap in smurf <cit.>, and gridded to 4-arcsec pixels. The iterations were halted when the map pixels, on average, changed by ≤5 per cent of the estimated map RMS noise. In order to correct for the instrumental polarization (IP), makemap is supplied with a total intensity image of the source, taken using SCUBA-2 while POL-2 is not in the beam. The IP correction is discussed in detail by Bastien et al.
The total intensity image of OMC 1 presented in this paper was taken using the standard SCUBA-2 DAISY observing mode, and reduced using makemap with the same convergence criterion and pixel size as the POL-2 data. The reduced scans were combined in two stages: (1) the individual Stokes Q observations were co-added to form a mosaic Stokes Q image (the Stokes U maps were co-added similarly); (2) the co-added Stokes Q and U maps were combined using the process pol2stack in smurf <cit.> to produce an output half-vector catalogue. We refer to data produced by this method as BISTRO Internal Release 1 (IR1). The data were calibrated in Jy/beam, using an aperture flux conversion factor (FCF) of 725 Jy pW^-1 beam^-1 at 850 μm. When observing with POL-2, the standard SCUBA-2 850-μm FCF of 537 Jy pW^-1 beam^-1, derived from average values of JCMT calibrators <cit.>, is increased by a factor of 1.35 due to additional losses introduced by POL-2 (<cit.>; Bastien et al. 2017). The OMC 1 region was observed 21 times between 2016 January 11 and 2016 January 25 in a mixture of very dry weather (Band 1; τ_225 GHz ≤ 0.05) and dry weather (Band 2; 0.05 ≤ τ_225 GHz ≤ 0.08) under JCMT project reference numbers M16AL004 (BISTRO) and M15BEC02 (POL-2 commissioning). In order to determine the behaviour of the RMS noise in our observations as a function of integration time, we measured the standard deviation of the Stokes Q and Stokes U values in a region with relatively constant signal in both the Stokes Q and the Stokes U maps, located between OMC 1 and the Orion Bar. This region, centred at approximately 05^h35^m21^s -05^∘23^'36^'', was chosen because it was relatively flat, moderately unpolarised, low in emission, and away from the brightest sources, and because there was no region entirely without signal in the central 3-arcminute-diameter region of the map. Figure 2 shows how the noise integrates down in this 21-repeat (∼14-hour) POL-2 observation. The polarization noise in Figure 2 is seen to integrate down close to t^-0.5, as in the ideal case. The scatter of individual measurements reduces satisfactorily as the data are subsequently combined. We find that there is no evidence of any `noise floor' in long integrations. From this plot we see that this dataset has reached 2.1 mJy/beam RMS noise in 13.5 hours. An RMS noise value of ∼2 mJy/beam was set as the target value for the BISTRO survey. Appendices A & B list a series of tests that we carried out to confirm the repeatability of our measurements and to demonstrate consistency with previous data.

§ FIRST DATA FROM THE SURVEY

Figure 3 shows a polarization map taken with POL-2 of the OMC 1 region of the `integral filament' in the Orion A molecular cloud, with half-vectors rotated by 90 degrees to trace the B-field direction. Only vectors with a signal-to-noise ratio of 3 or greater in polarisation fraction are shown (i.e. P/ΔP ≥ 3). The Orion A molecular cloud is a well-resolved and well-studied region of high-mass star formation (e.g., <cit.>). It is the closest region of high-mass star formation, located at a distance of 388 ± 5 pc <cit.>. The half-vector lengths show the percentage polarization, with a 5% scale bar in the corner to give the calibration. The underlying image is an 850-μm total intensity map of the same region taken using SCUBA-2. The `integral filament' <cit.> can be seen running roughly north-south through the region. The brightest part of the filament lies just south of the centre of the image.
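To make the vector selection concrete, the following short sketch applies the P/ΔP ≥ 3 cut and the 90-degree rotation described above to a hypothetical half-vector catalogue; the arrays are random stand-ins for illustration, not BISTRO data.

import numpy as np

# Hypothetical half-vector catalogue: polarization fraction p [%], its
# uncertainty dp [%], and polarization position angle [deg, N through E].
rng = np.random.default_rng(1)
p = rng.uniform(0.5, 8.0, 500)
dp = rng.uniform(0.3, 1.5, 500)
theta_pol = rng.uniform(-90.0, 90.0, 500)

good = p / dp >= 3.0                  # the P/dP >= 3 cut of Figure 3

# Rotate by 90 deg to trace the plane-of-sky B-field orientation and
# re-wrap into [-90, 90), respecting the 180-deg ambiguity.
theta_B = (theta_pol[good] + 90.0 + 90.0) % 180.0 - 90.0

print(f"{good.sum()} of {p.size} half-vectors pass the S/N cut")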
The two brightest and most massive regions in the filament are the northern Becklin-Neugebauer Kleinmann-Low (BN/KL) object <cit.> and the southern Orion South clump <cit.>. Both are seen in Figure <ref>. In the southeast part of the map the `Orion Bar' photon-dominated region (PDR) extends from the bottom centre of the map in a roughly northeasterly direction. In the brightest central part of the filament, the B-field direction, as indicated by the half-vectors, appears to lie roughly orthogonal to the main axis of the filament. This pattern continues on the main axis line of the filament over most of the length of the filament. More specifically, on the brightest part of the filament the orientation of the long axis of the filament is estimated to be +11.0 ± 1.5 degrees, whilst the calculated B-field direction is -64.2 ± 6.5 degrees (both measured north through east; note that there is a 180-degree ambiguity on the B-field direction), yielding a difference of 75.2 ± 6.7 degrees. The filament direction was estimated by performing a linear regression on the coordinates of 12 bright peaks of submillimeter emission located along the linear portion of the integral filament, as observed in the JCMT GBS 850-μm SCUBA-2 data. The field direction was estimated by taking the mean of the position angles of the B-field half-vectors in the region of uniform field direction in the centre of the OMC 1 region, between the Orion BN/KL and S clumps. However, away from the central axis of the filament the field appears to curve to either side. In the northern half of the filament the field appears to curve northwards, delineating a roughly `U' shape, centred on the filament. In the southern half of the filament the field appears to curve to the south, forming an inverted `U' shape. This so-called `hour-glass' morphology was first noted by <cit.> at much lower resolution and signal-to-noise ratio, observing at 100 and 350 μm with the Kuiper Airborne Observatory (KAO) and the Caltech Submillimeter Observatory (CSO) respectively. However, we note a far higher degree of curvature of the field lines than was seen by <cit.>. There is a slight degree of de-polarisation visible towards the centres of the BN-KL and Orion-S clumps. This is a well-known effect resulting from tangled fields in the centers of very dense regions (e.g., <cit.>). The pattern along the Orion Bar appears somewhat more complex. Furthermore, in the north-eastern section of the map there is a region of half-vectors that appear to follow a different pattern. Here the half-vectors seem to be running along a different filament. All of the above is consistent with the much lower signal-to-noise-ratio data of <cit.> and <cit.>. The interferometry data of <cit.> on the peaks of OMC 1 are also consistent with our data. We now discuss all of these features.

§ DISCUSSION

Herschel has shown that the dominant formation mechanism for prestellar cores is core formation along filaments <cit.>, revealing several examples of large-scale filaments lying perpendicular to the (plane-of-sky) B-field directions, as measured with large-scale absorption polarimetry (e.g., <cit.>). This is consistent with findings from previous emission polarization measurements from SCUPOL on SCUBA (e.g., <cit.>) and more recent large-scale polarization emission data from BLASTPol (e.g., <cit.>). Based on these examples, a model has emerged whereby collapse occurs first along field lines to form filaments, and then along filaments to form cores <cit.>.
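For reference, the filament-to-field offset quoted earlier in this section can be reproduced with a short calculation that respects the 180-degree ambiguity of the half-vectors; the error combination assumes independent uncertainties, and the axial_mean helper is our illustrative implementation of the doubled-angle average commonly used for 180-degree ambiguous position angles, not necessarily the exact procedure adopted for the field-direction estimate above.

import numpy as np

pa_filament, err_filament = 11.0, 1.5     # deg, N through E
pa_field, err_field = -64.2, 6.5          # deg, 180-deg ambiguous

# Smallest offset between two axial (180-deg ambiguous) position angles,
# with independent uncertainties combined in quadrature.
delta = abs(pa_filament - pa_field) % 180.0
delta = min(delta, 180.0 - delta)
err_delta = np.hypot(err_filament, err_field)
print(f"offset = {delta:.1f} +/- {err_delta:.1f} deg")   # 75.2 +/- 6.7

def axial_mean(angles_deg):
    """Mean of 180-deg ambiguous angles via the doubled-angle method,
    e.g. for averaging half-vector position angles between the clumps."""
    a = np.deg2rad(2.0 * np.asarray(angles_deg, dtype=float))
    return 0.5 * np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean()))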
In the lower density regions around the main filament, typically striations (or sub-filaments) are seen parallel to the B-field <cit.>. The polarization pattern we have observed in OMC 1 in Figure 3 follows this theoretical picture on-axis. The main part of the integral filament containing the BN-KL object and Orion South has a B-field direction apparently roughly orthogonal to the main filament direction, as mentioned above. However, our wide-field data also allow us to trace the B-field direction off-axis, and it is here that even more interesting behaviour is seen, as noted above, with a roughly `hour-glass' morphology. If we follow this theoretical picture, then we would predict that the field lines started out roughly orthogonal to the filament in the lower density as well as the higher density material, in a more uniform configuration, and were subsequently distorted into their current configuration. There appear to be two possibilities as to how the hour-glass morphology could have formed. One possibility is that the motion of the denser central material along the filament axis pulled the B-field lines into this configuration as predicted by the model (see Figure 9(a) of <cit.>). Another possibility is that the well-known BN-KL outflow <cit.> caused the field lines in the lower density peripheral material to deviate from their original orientation. The effect of the highly-collimated central part of the BN/KL outflow on the B-field on arcsecond scales is discussed by <cit.>. We note that the outflow has a wide opening angle, and high-velocity wings with multiple ejecta, often referred to as the `bullets of Orion' <cit.>. The central point of the outflow coincides with the position of the BN/KL object, the northern submillimetre-bright region in Figure <ref>. Consequently, the position and opening angle of the outflow roughly match the central part of the hour-glass pattern, as well as the angle between the U-shape and the inverted-U-shape fields, as if the outflow had pushed aside the field. Further work is required to decide which of these scenarios is correct. A close-up of the Orion Bar region is shown in Figure 4. Here we see that the field follows a more complex morphology. At the southern end of the Bar the field appears to be running north-south. In the middle of the Bar the field runs roughly east-west. In the northern part of the Bar the field appears to turn again to run in a north-easterly direction. This complex pattern clearly indicates a complex field structure. One possibility is of a field that is simply twisting along the PDR front. Close examination of the Bar does appear to show the Bar twisting roughly in line with the field direction. Another possibility is that the field is running helically around the Orion Bar. In such complex cases as this it is often difficult to determine which of a number of different three-dimensional scenarios is being projected onto our two-dimensional field of view (see, e.g., <cit.>). However, the simulations produced by <cit.> show that a helical field could produce the polarization pattern that we are seeing. Figure 5 shows a close-up of the north-eastern filament that runs in a roughly east-west direction, and is roughly orthogonal to the main integral filament. This is reminiscent of the sub-filaments, or striations, seen in Taurus <cit.>, which lie perpendicular to the main filament. Figure 5 shows that the B-field lies roughly parallel to this sub-filament, again as seen in Taurus <cit.>.
Similar behaviour is also seen in the low-density striations in the Polaris Flare region <cit.>. Furthermore, the B-field pattern lying along the north-eastern filament appears to lie in the foreground relative to the hour-glass field. Both north and south of the north-eastern filament the field lies in a direction running northeast-southwest, as if it continues behind the north-eastern filament. Hence, we hypothesise that the north-eastern filament is foreground to the rest of the cloud. This behaviour of parallel versus perpendicular field geometries is predicted theoretically. For example, numerous studies of non-self-gravitating (i.e. low density) filaments see B-fields lying parallel to filaments – essentially by running simulations without gravity (e.g., <cit.>). <cit.> include self-gravity and see `elongated condensations [i.e. dense filaments] that are generally perpendicular to the large-scale field'. More recently, <cit.> studied in detail the effects of varying the B-field strength in a filament, as well as varying the density of the filament. They found that field lines are preferentially perpendicular to the filaments above a certain critical density and parallel to the filaments below this density. This is exactly what we see here – the field is running parallel to the low-density north-eastern filament, and perpendicular to the high-density integral filament (c.f. Figure 1 of <cit.>). Incidentally, <cit.> find field lines perpendicular to filaments only in intermediate-strength and high-strength field cases. This would tend to indicate that the field we are observing in Orion is relatively strong.

§ SUMMARY

In this paper we have introduced the BISTRO (B-Fields in STar-Forming Region Observations) survey, which will map the dense regions of many nearby star-forming clouds with the POL-2 polarimeter and SCUBA-2 on the JCMT. We have described the rationale behind the survey, and the scientific questions which the survey will answer. The most important of these is the role of B-fields in the star formation process on small scales and in dense regions, and its importance relative to other processes, such as turbulent or non-thermal motions of the gas. We have described the data acquisition and reduction processes for POL-2, demonstrating that the RMS noise on BISTRO POL-2 observations decreases as t^-0.5, as expected. We presented the first POL-2 polarization map from the BISTRO survey, which is of the OMC 1 region of Orion A, and showed compatibility with previous observations, as well as repeatability of the POL-2 results. We saw that the field lies perpendicular to the integral filament in the densest regions of that filament. Furthermore, we saw an hour-glass B-field morphology extending beyond the densest region of the integral filament into the less-dense surrounding material, and discussed possible causes for this. We observed a more complex morphology along the Orion Bar. We examined the morphology of the field along the lower-density north-eastern filament. We found consistency with previous theoretical models that predict B-fields lying parallel to low-density, non-self-gravitating filaments, and perpendicular to higher-density, self-gravitating filaments.

§ ACKNOWLEDGEMENTS

The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan, Academia Sinica Institute of Astronomy and Astrophysics, the Korea Astronomy and Space Science Institute, the National Astronomical Observatories of China and the Chinese Academy of Sciences (Grant No.
XDB09000000), with additional funding support from the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada.Additional funds for the construction of SCUBA-2 and POL-2 were provided by the Canada Foundation for Innovation.The data taken in this paper were observed under project codes M15BEC02 and M16AL004.DWT and KP acknowledge Science and Technology Facilities Council (STFC) support under grant numbers ST/K002023/1 and ST/M000877/1.WK, MK, CWL and SSL were supported by Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry ofScience, ICT & Future Planning (WK: NRF-2016R1C1B2013642; MK: NRF-2015R1C1A1A01052160, SSL: NRF-2016R1C1B2006697) and the Ministry of Education, Science and Technology (CWL: NRF-2016R1A2B4012593).AP acknowledges the financial support provided by a Canadian Institute for Theoretical Astrophysics (CITA) National Fellowship.JCM acknowledges support from the European Research Council under the European Community's Horizon 2020 framework program (2014-2020) via the ERC Consolidator grant `From Cloud to Star Formation (CSF)' (project number 648505).This research has made use of the NASA Astrophysics Data System. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.Facilities: James Clerk Maxwell Telescope (JCMT)Software: Starlink <cit.>, smurf (; ), Interactive Data Language (IDL) aasjournalnatexlab#1#1[Allen & Burton(1993)]allen1993 Allen, D. A., & Burton, M. G. 1993, , 363, 54[André et al.(2014)André, Di Francesco, Ward-Thompson, Inutsuka, Pudritz, & Pineda]andre2014 André, P., Di Francesco, J., Ward-Thompson, D., et al. 2014, Protostars and Planets VI, 27[André et al.(2010)André, Men'shchikov, Bontemps, Könyves, Motte, Schneider, Didelon, Minier, Saraceno, Ward-Thompson, Di Francesco, White, Molinari, Testi, Abergel, Griffin, Henning, Royer, Merín, Vavrek, Attard, Arzoumanian, Wilson, Ade, Aussel, Baluteau, Benedettini, Bernard, Blommaert, Cambrésy, Cox, di Giorgio, Hargrave, Hennemann, Huang, Kirk, Krause, Launhardt, Leeks, Le Pennec, Li, Martin, Maury, Olofsson, Omont, Peretto, Pezzuto, Prusti, Roussel, Russeil, Sauvage, Sibthorpe, Sicilia-Aguilar, Spinoglio, Waelkens, Woodcraft, & Zavagno]andre2010 André, P., Men'shchikov, A., Bontemps, S., et al. 2010, , 518, L102[Bally(2008)]bally2008 Bally, J. 2008, Overview of the Orion Complex, ed. B. Reipurth, 459[Bally et al.(1987)Bally, Langer, Stark, & Wilson]bally1987 Bally, J., Langer, W. D., Stark, A. A., & Wilson, R. W. 1987, , 312, L45[Balsara et al.(2001)Balsara, Ward-Thompson, & Crutcher]balsara2001 Balsara, D., Ward-Thompson, D., & Crutcher, R. M. 2001, , 327, 715[Bastien et al.(2005a)Bastien, Bissonnette, Ade, Pisano, Savini, Jenness, Johnstone, & Matthews]bastien2005 Bastien, P., Bissonnette, É., Ade, P., et al. 2005a, , 99, 133[Bastien et al.(2005b)Bastien, Jenness, & Molnar]bastien2005a Bastien, P., Jenness, T., & Molnar, J. 2005b, in Astronomical Society of the Pacific Conference Series, Vol. 343, Astronomical Polarimetry: Current Status and Future Directions, ed. A. Adamson, C. Aspin, C. Davis, & T. Fujiyoshi, 69[Basu et al.(2009)Basu, Ciolek, Dapp, & Wurster]basu2009 Basu, S., Ciolek, G. E., Dapp, W. B., & Wurster, J. 
2009, , 14, 483[Batrla et al.(1983)Batrla, Wilson, Ruf, & Bastien]batrla1983 Batrla, W., Wilson, T. L., Ruf, K., & Bastien, P. 1983, , 128, 279[Becklin & Neugebauer(1967)]becklin1967 Becklin, E. E., & Neugebauer, G. 1967, , 147, 799[Berry et al.(2005)Berry, Gledhill, Greaves, & Jenness]berry2005 Berry, D. S., Gledhill, T. M., Greaves, J. S., & Jenness, T. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 343, Astronomical Polarimetry: Current Status and Future Directions, ed. A. Adamson, C. Aspin, C. Davis, & T. Fujiyoshi, 71[Buckle et al.(2010)Buckle, Curtis, Roberts, White, Hatchell, Brunt, Butner, Cavanagh, Chrysostomou, Davis, Duarte-Cabral, Etxaluze, Di Francesco, Friberg, Friesen, Fuller, Graves, Greaves, Hogerheijde, Johnstone, Matthews, Matthews, Nutter, Rawlings, Richer, Sadavoy, Simpson, Tothill, Tsamis, Viti, Ward-Thompson, Wouterloot, & Yates]buckle2010 Buckle, J. V., Curtis, E. I., Roberts, J. F., et al. 2010, , 401, 204[Buckle et al.(2015)Buckle, Drabek-Maunder, Greaves, Richer, Matthews, Johnstone, Kirk, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Hatchell, Jenness, Mottram, Nutter, Pattle, Pineda, Salji, Tisi, Francesco, Hogerheijde, Ward-Thompson, Bastien, Butner, Chen, Chrysostomou, Coude, Davis, Duarte-Cabral, Friberg, Friesen, Fuller, Graves, Gregson, Holland, Joncas, Kirk, Knee, Mairs, Marsh, Moriarty-Schieven, Rawlings, Rosolowsky, Rumble, Sadavoy, Thomas, Tothill, Viti, White, Wilson, Wouterloot, Yates, & Zhu]buckle2015 Buckle, J. V., Drabek-Maunder, E., Greaves, J., et al. 2015, , 449, 2472[Burge et al.(2016)Burge, Van Loo, Falle, & Hartquist]burge2016 Burge, C. A., Van Loo, S., Falle, S. A. E. G., & Hartquist, T. W. 2016, , 596, A28[Chandrasekhar & Fermi(1953)]chandrasekhar1953 Chandrasekhar, S., & Fermi, E. 1953, , 118, 113[Chapin et al.(2013)Chapin, Berry, Gibb, Jenness, Scott, Tilanus, Economou, & Holland]chapin2013 Chapin, E. L., Berry, D. S., Gibb, A. G., et al. 2013, , 430, 2545[Chapman et al.(2013)Chapman, Davidson, Goldsmith, Houde, Kwon, Li, Looney, Matthews, Matthews, Novak, Peng, Vaillancourt, & Volgenau]chapman2013 Chapman, N. L., Davidson, J. A., Goldsmith, P. F., et al. 2013, , 770, 151[Chen et al.(2012)Chen, Rao, Wilner, & Liu]chen2012 Chen, H.-R., Rao, R., Wilner, D. J., & Liu, S.-Y. 2012, , 751, L13[Chen et al.(2016)Chen, Di Francesco, Johnstone, Sadavoy, Hatchell, Mottram, Kirk, Buckle, Berry, Broekhoven-Fiene, Currie, Fich, Jenness, Nutter, Pattle, Pineda, Quinn, Salji, Tisi, Hogerheijde, Ward-Thompson, Bastien, Bresnahan, Butner, Chrysostomou, Coude, Davis, Drabek-Maunder, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Holland, Joncas, Kirk, Knee, Mairs, Marsh, Matthews, Moriarty-Schieven, Mowat, Pezzuto, Rawlings, Richer, Robertson, Rosolowsky, Rumble, Schneider-Bontemps, Thomas, Tothill, Viti, White, Wouterloot, Yates, & Zhu]chen2016 Chen, M. C.-Y., Di Francesco, J., Johnstone, D., et al. 2016, , 826, 95[Cho & Lazarian(2007)]cho2007 Cho, J., & Lazarian, A. 2007, , 669, 1085[Cho & Yoo(2016)]cho2016 Cho, J., & Yoo, H. 2016, , 821, 21[Cortes et al.(2016)Cortes, Girart, Hull, Sridharan, Louvet, Plambeck, Li, Crutcher, & Lai]cortes2016 Cortes, P. C., Girart, J. M., Hull, C. L. H., et al. 2016, , 825, L15[Crutcher(2012)]crutcher2012 Crutcher, R. M. 2012, , 50, 29[Crutcher et al.(2004)Crutcher, Nutter, Ward-Thompson, & Kirk]crutcher2004 Crutcher, R. M., Nutter, D. J., Ward-Thompson, D., & Kirk, J. M. 2004, , 600, 279[Crutcher et al.(1996)Crutcher, Troland, Lazareff, & Kazes]crutcher1996 Crutcher, R. 
M., Troland, T. H., Lazareff, B., & Kazes, I. 1996, , 456, 217[Crutcher et al.(2010)Crutcher, Wandelt, Heiles, Falgarone, & Troland]crutcher2010 Crutcher, R. M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T. H. 2010, , 725, 466[Currie et al.(2014)Currie, Berry, Jenness, Gibb, Bell, & Draper]currie2014 Currie, M. J., Berry, D. S., Jenness, T., et al. 2014, in Astronomical Society of the Pacific Conference Series, Vol. 485, Astronomical Data Analysis Software and Systems XXIII, ed. N. Manset & P. Forshay, 39[Dempsey et al.(2013)Dempsey, Friberg, Jenness, Tilanus, Thomas, Holland, Bintley, Berry, Chapin, Chrysostomou, Davis, Gibb, Parsons, & Robson]dempsey2013 Dempsey, J. T., Friberg, P., Jenness, T., et al. 2013, , 430, 2534[Dotson et al.(2010)Dotson, Vaillancourt, Kirby, Dowell, Hildebrand, & Davidson]dotson2010 Dotson, J. L., Vaillancourt, J. E., Kirby, L., et al. 2010, , 186, 406[Falceta-Gonçalves et al.(2008)Falceta-Gonçalves, Lazarian, & Kowal]falceta-goncalves2008 Falceta-Gonçalves, D., Lazarian, A., & Kowal, G. 2008, , 679, 537[Fiege & Pudritz(2000a)]fiege2000 Fiege, J. D., & Pudritz, R. E. 2000a, , 311, 85[Fiege & Pudritz(2000b)]fiege2000a —. 2000b, , 311, 105[Fissel(2015)]fissel2015 Fissel, L. M. 2015, IAU General Assembly, 22, 2257545[Fissel et al.(2016)Fissel, Ade, Angilè, Ashton, Benton, Devlin, Dober, Fukui, Galitzki, Gandilo, Klein, Korotkov, Li, Martin, Matthews, Moncelsi, Nakamura, Netterfield, Novak, Pascale, Poidevin, Santos, Savini, Scott, Shariff, Diego Soler, Thomas, Tucker, Tucker, & Ward-Thompson]fissel2016 Fissel, L. M., Ade, P. A. R., Angilè, F. E., et al. 2016, , 824, 134[Franzmann & Fiege(2017)]franzmann2017 Franzmann, E. L., & Fiege, J. D. 2017, , doi:10.1093/mnras/stw3380[Friberg et al.(2016)Friberg, Bastien, Berry, Savini, Graves, & Pattle]friberg2016 Friberg, P., Bastien, P., Berry, D., et al. 2016, Proc. SPIE, 9914, 991403[Galli & Shu(1993)]galli1993 Galli, D., & Shu, F. H. 1993, , 417, 220[Girart et al.(2006)Girart, Rao, & Marrone]girart2006 Girart, J. M., Rao, R., & Marrone, D. P. 2006, Science, 313, 812[Goodman et al.(1990)Goodman, Bastien, Menard, & Myers]goodman1990 Goodman, A. A., Bastien, P., Menard, F., & Myers, P. C. 1990, , 359, 363[Graves et al.(2010)Graves, Richer, Buckle, Duarte-Cabral, Fuller, Hogerheijde, Owen, Brunt, Butner, Cavanagh, Chrysostomou, Curtis, Davis, Etxaluze, Francesco, Friberg, Friesen, Greaves, Hatchell, Johnstone, Matthews, Matthews, Matzner, Nutter, Rawlings, Roberts, Sadavoy, Simpson, Tothill, Tsamis, Viti, Ward-Thompson, White, Wouterloot, & Yates]graves2010 Graves, S. F., Richer, J. S., Buckle, J. V., et al. 2010, , 409, 1412[Greaves et al.(2003)Greaves, Holland, Jenness, Chrysostomou, Berry, Murray, Tamura, Robson, Ade, Nartallo, Stevens, Momose, Morino, Moriarty-Schieven, Gannaway, & Haynes]greaves2003a Greaves, J. S., Holland, W. S., Jenness, T., et al. 2003, , 340, 353[Haschick & Baan(1989)]haschick1989 Haschick, A. D., & Baan, W. A. 1989, , 339, 949[Heitsch et al.(2011)Heitsch, Naab, & Walch]heitsch2011 Heitsch, F., Naab, T., & Walch, S. 2011, , 415, 271[Heitsch et al.(2001)Heitsch, Zweibel, Mac Low, Li, & Norman]heitsch2001 Heitsch, F., Zweibel, E. G., Mac Low, M.-M., Li, P., & Norman, M. L. 2001, , 561, 800[Hennebelle & Fromang(2008)]hennebelle2008b Hennebelle, P., & Fromang, S. 2008, , 477, 9[Hennebelle & Teyssier(2008)]hennebelle2008a Hennebelle, P., & Teyssier, R. 2008, , 477, 25[Hildebrand et al.(2009)Hildebrand, Kirby, Dotson, Houde, & Vaillancourt]hildebrand2009 Hildebrand, R. H., Kirby, L., Dotson, J. 
L., Houde, M., & Vaillancourt, J. E. 2009, , 696, 567[Holland et al.(2006)Holland, MacIntosh, Fairley, Kelly, Montgomery, Gostick, Atad-Ettedgui, Ellis, Robson, Hollister, Woodcraft, Ade, Walker, Irwin, Hilton, Duncan, Reintsema, Walton, Parkes, Dunare, Fich, Kycia, Halpern, Scott, Gibb, Molnar, Chapin, Bintley, Craig, Chylek, Jenness, Economou, & Davis]holland2006 Holland, W., MacIntosh, M., Fairley, A., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6275, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series[Holland et al.(1999)Holland, Robson, Gear, Cunningham, Lightfoot, Jenness, Ivison, Stevens, Ade, Griffin, Duncan, Murphy, & Naylor]holland1999 Holland, W. S., Robson, E. I., Gear, W. K., et al. 1999, , 303, 659[Holland et al.(2013)Holland, Bintley, Chapin, Chrysostomou, Davis, Dempsey, Duncan, Fich, Friberg, Halpern, Irwin, Jenness, Kelly, MacIntosh, Robson, Scott, Ade, Atad-Ettedgui, Berry, Craig, Gao, Gibb, Hilton, Hollister, Kycia, Lunney, McGregor, Montgomery, Parkes, Tilanus, Ullom, Walther, Walton, Woodcraft, Amiri, Atkinson, Burger, Chuter, Coulson, Doriese, Dunare, Economou, Niemack, Parsons, Reintsema, Sibthorpe, Smail, Sudiwala, & Thomas]holland2013 Holland, W. S., Bintley, D., Chapin, E. L., et al. 2013, , 430, 2513[Houde et al.(2004)Houde, Dowell, Hildebrand, Dotson, Vaillancourt, Phillips, Peng, & Bastien]houde2004 Houde, M., Dowell, C. D., Hildebrand, R. H., et al. 2004, , 604, 717[Houde et al.(2013)Houde, Hezareh, Jones, & Rajabi]houde2013 Houde, M., Hezareh, T., Jones, S., & Rajabi, F. 2013, , 764, 24[Houde et al.(2009)Houde, Vaillancourt, Hildebrand, Chitsazzadeh, & Kirby]houde2009 Houde, M., Vaillancourt, J. E., Hildebrand, R. H., Chitsazzadeh, S., & Kirby, L. 2009, , 706, 1504[Hull et al.(2013)Hull, Plambeck, Bolatto, Bower, Carpenter, Crutcher, Fiege, Franzmann, Hakobian, Heiles, Houde, Hughes, Jameson, Kwon, Lamb, Looney, Matthews, Mundy, Pillai, Pound, Stephens, Tobin, Vaillancourt, Volgenau, & Wright]hull2013 Hull, C. L. H., Plambeck, R. L., Bolatto, A. D., et al. 2013, , 768, 159[Hull et al.(2014)Hull, Plambeck, Kwon, Bower, Carpenter, Crutcher, Fiege, Franzmann, Hakobian, Heiles, Houde, Hughes, Lamb, Looney, Marrone, Matthews, Pillai, Pound, Rahman, Sandell, Stephens, Tobin, Vaillancourt, Volgenau, & Wright]hull2014 Hull, C. L. H., Plambeck, R. L., Kwon, W., et al. 2014, , 213, 13[Inoue & Inutsuka(2008)]inoue2008 Inoue, T., & Inutsuka, S.-i. 2008, , 687, 303[Inoue & Inutsuka(2009)]inoue2009 —. 2009, , 704, 161[Inoue & Inutsuka(2012)]inoue2012 —. 2012, , 759, 35[Inutsuka et al.(2015)Inutsuka, Inoue, Iwasaki, & Hosokawa]inutsuka2015 Inutsuka, S.-i., Inoue, T., Iwasaki, K., & Hosokawa, T. 2015, , 580, A49[Kirk et al.(2016)Kirk, Di Francesco, Johnstone, Duarte-Cabral, Sadavoy, Hatchell, Mottram, Buckle, Berry, Broekhoven-Fiene, Currie, Fich, Jenness, Nutter, Pattle, Pineda, Quinn, Salji, Tisi, Hogerheijde, Ward-Thompson, Bastien, Bresnahan, Butner, Chen, Chrysostomou, Coude, Davis, Drabek-Maunder, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Holland, Joncas, Kirk, Knee, Mairs, Marsh, Matthews, Moriarty-Schieven, Mowat, Rawlings, Richer, Robertson, Rosolowsky, Rumble, Thomas, Tothill, Viti, White, Wouterloot, Yates, & Zhu]kirk2016 Kirk, H., Di Francesco, J., Johnstone, D., et al. 2016, , 817, 167[Kirk et al.(2006)Kirk, Ward-Thompson, & Crutcher]kirk2006 Kirk, J. M., Ward-Thompson, D., & Crutcher, R. M. 2006, , 369, 1445[Kleinmann & Low(1967)]kleinmann1967 Kleinmann, D. 
E., & Low, F. J. 1967, , 149, L1[Klessen et al.(2000)Klessen, Heitsch, & Mac Low]klessen2000 Klessen, R. S., Heitsch, F., & Mac Low, M.-M. 2000, , 535, 887[Kounkel et al.(2017)Kounkel, Hartmann, Loinard, Ortiz-León, Mioduszewski, Rodríguez, Dzib, Torres, Pech, Galli, Rivera, Boden, Evans, Briceño, & Tobin]kounkel2017 Kounkel, M., Hartmann, L., Loinard, L., et al. 2017, , 834, 142[Kwon et al.(2015)Kwon, Tamura, Hough, Nakajima, Nishiyama, Kusakabe, Nagata, & Kandori]kwon2015 Kwon, J., Tamura, M., Hough, J. H., et al. 2015, , 220, 17[Lazarian & Hoang(2008)]lazarian2008 Lazarian, A., & Hoang, T. 2008, , 676, L25[Li et al.(2014)Li, Goodman, Sridharan, Houde, Li, Novak, & Tang]li2014 Li, H.-B., Goodman, A., Sridharan, T. K., et al. 2014, Protostars and Planets VI, 101[Li et al.(2011)Li, Krasnopolsky, & Shang]li2011 Li, Z.-Y., Krasnopolsky, R., & Shang, H. 2011, , 738, 180[Li & Nakamura(2004)]li2004 Li, Z.-Y., & Nakamura, F. 2004, , 609, L83[Li et al.(2010)Li, Wang, Abel, & Nakamura]li2010 Li, Z.-Y., Wang, P., Abel, T., & Nakamura, F. 2010, , 720, L26[Mac Low & Klessen(2004)]maclow2004 Mac Low, M.-M., & Klessen, R. S. 2004, Reviews of Modern Physics, 76, 125[Machida et al.(2011)Machida, Inutsuka, & Matsumoto]machida2011 Machida, M. N., Inutsuka, S.-I., & Matsumoto, T. 2011, , 63, 555[Machida et al.(2005)Machida, Matsumoto, Tomisaka, & Hanawa]machida2005 Machida, M. N., Matsumoto, T., Tomisaka, K., & Hanawa, T. 2005, , 362, 369[Mairs et al.(2016)Mairs, Johnstone, Kirk, Buckle, Berry, Broekhoven-Fiene, Currie, Fich, Graves, Hatchell, Jenness, Mottram, Nutter, Pattle, Pineda, Salji, Di Francesco, Hogerheijde, Ward-Thompson, Bastien, Bresnahan, Butner, Chen, Chrysostomou, Coudé, Davis, Drabek-Maunder, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Greaves, Gregson, Holland, Joncas, Kirk, Knee, Marsh, Matthews, Moriarty-Schieven, Mowat, Rawlings, Richer, Robertson, Rosolowsky, Rumble, Sadavoy, Thomas, Tothill, Viti, White, Wouterloot, Yates, & Zhu]mairs2016 Mairs, S., Johnstone, D., Kirk, H., et al. 2016, , 461, 4022[Matthews et al.(2009)Matthews, McPhee, Fissel, & Curran]matthews2009 Matthews, B. C., McPhee, C. A., Fissel, L. M., & Curran, R. L. 2009, , 182, 143[Matthews & Wilson(2002)]matthews2002 Matthews, B. C., & Wilson, C. D. 2002, , 574, 822[Matthews et al.(2001)Matthews, Wilson, & Fiege]matthews2001 Matthews, B. C., Wilson, C. D., & Fiege, J. D. 2001, , 562, 400[Matthews et al.(2014)Matthews, Ade, Angilè, Benton, Chapin, Chapman, Devlin, Fissel, Fukui, Gandilo, Gundersen, Hargrave, Klein, Korotkov, Moncelsi, Mroczkowski, Netterfield, Novak, Nutter, Olmi, Pascale, Poidevin, Savini, Scott, Shariff, Soler, Tachihara, Thomas, Truch, Tucker, Tucker, & Ward-Thompson]matthews2014 Matthews, T. G., Ade, P. A. R., Angilè, F. E., et al. 2014, , 784, 116[Mouschovias(1991)]mouschovias1991 Mouschovias, T. C. 1991, , 373, 169[Nagai et al.(2016)Nagai, Nakanishi, Paladino, Hull, Cortes, Moellenbrock, Fomalont, Asada, & Hada]nagai2016 Nagai, H., Nakanishi, K., Paladino, R., et al. 2016, , 824, 132[Nakamura & Li(2005)]nakamura2005 Nakamura, F., & Li, Z.-Y. 2005, , 631, 411[Nakamura & Li(2008)]nakamura2008 —. 2008, , 687, 354[O'Dell et al.(2008)O'Dell, Muench, Smith, & Zapata]odell2008 O'Dell, C. R., Muench, A., Smith, N., & Zapata, L. 2008, Star Formation in the Orion Nebula II: Gas, Dust, Proplyds and Outflows, ed. B. Reipurth, 544[Ostriker et al.(2001)Ostriker, Stone, & Gammie]ostriker2001 Ostriker, E. C., Stone, J. M., & Gammie, C. F. 
2001, , 546, 980[Padoan & Nordlund(1999)]padoan1999 Padoan, P., & Nordlund, Å. 1999, , 526, 279[Padoan & Nordlund(2002)]padoan2002 —. 2002, , 576, 870[Palmeirim et al.(2013)Palmeirim, André, Kirk, Ward-Thompson, Arzoumanian, Könyves, Didelon, Schneider, Benedettini, Bontemps, Di Francesco, Elia, Griffin, Hennemann, Hill, Martin, Men'shchikov, Molinari, Motte, Nguyen Luong, Nutter, Peretto, Pezzuto, Roy, Rygl, Spinoglio, & White]palmeirim2013 Palmeirim, P., André, P., Kirk, J., et al. 2013, , 550, A38[Panopoulou et al.(2016)Panopoulou, Psaradaki, & Tassis]panopoulou2016 Panopoulou, G. V., Psaradaki, I., & Tassis, K. 2016, , 462, 1517[Pascale et al.(2008)Pascale, Ade, Bock, Chapin, Chung, Devlin, Dicker, Griffin, Gundersen, Halpern, Hargrave, Hughes, Klein, MacTavish, Marsden, Martin, Martin, Mauskopf, Netterfield, Olmi, Patanchon, Rex, Scott, Semisch, Thomas, Truch, Tucker, Tucker, Viero, & Wiebe]pascale2008 Pascale, E., Ade, P. A. R., Bock, J. J., et al. 2008, , 681, 400[Pattle et al.(2015)Pattle, Ward-Thompson, Kirk, White, Drabek-Maunder, Buckle, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Hatchell, Kirk, Jenness, Johnstone, Mottram, Nutter, Pineda, Quinn, Salji, Tisi, Walker-Smith, Francesco, Hogerheijde, André, Bastien, Bresnahan, Butner, Chen, Chrysostomou, Coude, Davis, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Griffin, Holland, Joncas, Knee, Könyves, Mairs, Marsh, Matthews, Moriarty-Schieven, Rawlings, Richer, Robertson, Rosolowsky, Rumble, Sadavoy, Spinoglio, Thomas, Tothill, Viti, Wouterloot, Yates, & Zhu]pattle2015 Pattle, K., Ward-Thompson, D., Kirk, J. M., et al. 2015, , 450, 1094[Pattle et al.(2017)Pattle, Ward-Thompson, Kirk, Di Francesco, Kirk, Mottram, Keown, Buckle, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Hatchell, Jenness, Johnstone, Nutter, Pineda, Quinn, Salji, Tisi, Walker-Smith, Hogerheijde, Bastien, Bresnahan, Butner, Chen, Chrysostomou, Coudé, Davis, Drabek-Maunder, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Holland, Joncas, Knee, Mairs, Marsh, Matthews, Moriarty-Schieven, Mowat, Rawlings, Richer, Robertson, Rosolowsky, Rumble, Sadavoy, Thomas, Tothill, Viti, White, Wouterloot, Yates, & Zhu]pattle2017 —. 2017, , 464, 4255[Planck Collaboration et al.(2015)Planck Collaboration, Ade, Aghanim, Alina, Alves, Armitage-Caplan, Arnaud, Arzoumanian, Ashdown, Atrio-Barandela, & et al.]planck2015 Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2015, , 576, A104[Planck HFI Core Team(2011)]planckhfi2011 Planck HFI Core Team. 2011, , 536, A4[Price & Bate(2007)]price2007 Price, D. J., & Bate, M. R. 2007, , 377, 77[Rao et al.(1998)Rao, Crutcher, Plambeck, & Wright]rao1998 Rao, R., Crutcher, R. M., Plambeck, R. L., & Wright, M. C. H. 1998, , 502, L75[Richer et al.(1993)Richer, Padman, Ward-Thompson, Hills, & Harris]richer1993 Richer, J. S., Padman, R., Ward-Thompson, D., Hills, R. E., & Harris, A. I. 
1993, , 262, 839[Rumble et al.(2015)Rumble, Hatchell, Gutermuth, Kirk, Buckle, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Jenness, Johnstone, Mottram, Nutter, Pattle, Pineda, Quinn, Salji, Tisi, Walker-Smith, Francesco, Hogerheijde, Ward-Thompson, Allen, Cieza, Dunham, Harvey, Stapelfeldt, Bastien, Butner, Chen, Chrysostomou, Coude, Davis, Drabek-Maunder, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Holland, Joncas, Kirk, Knee, Mairs, Marsh, Matthews, Moriarty-Schieven, Rawlings, Richer, Robertson, Rosolowsky, Sadavoy, Thomas, Tothill, Viti, White, Wilson, Wouterloot, Yates, & Zhu]rumble2015 Rumble, D., Hatchell, J., Gutermuth, R. A., et al. 2015, , 448, 1551[Sadavoy et al.(2013)Sadavoy, Di Francesco, Johnstone, Currie, Drabek, Hatchell, Nutter, André, Arzoumanian, Benedettini, Bernard, Duarte-Cabral, Fallscheer, Friesen, Greaves, Hennemann, Hill, Jenness, Könyves, Matthews, Mottram, Pezzuto, Roy, Rygl, Schneider-Bontemps, Spinoglio, Testi, Tothill, Ward-Thompson, White, & the JCMT and Herschel Gould Belt Survey Teams]sadavoy2013 Sadavoy, S. I., Di Francesco, J., Johnstone, D., et al. 2013, , 767, 126[Salji et al.(2015)Salji, Richer, Buckle, Hatchell, Kirk, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Jenness, Johnstone, Mottram, Nutter, Pattle, Pineda, Quinn, Tisi, Walker-Smith, Francesco, Hogerheijde, Ward-Thompson, Bastien, Butner, Chen, Chrysostomou, Coude, Davis, Drabek-Maunder, Duarte-Cabral, Fiege, Friberg, Friesen, Fuller, Graves, Greaves, Gregson, Holland, Joncas, Kirk, Knee, Mairs, Marsh, Matthews, Moriarty-Schieven, Rawlings, Robertson, Rosolowsky, Rumble, Sadavoy, Thomas, Tothill, Viti, White, Wilson, Wouterloot, Yates, & Zhu]salji2015 Salji, C. J., Richer, J. S., Buckle, J. V., et al. 2015, , 449, 1769[Schleuning(1998)]schleuning1998 Schleuning, D. A. 1998, , 493, 811[Segura-Cox et al.(2015)Segura-Cox, Looney, Stephens, Fernández-López, Kwon, Tobin, Li, & Crutcher]seguracox2015 Segura-Cox, D. M., Looney, L. W., Stephens, I. W., et al. 2015, , 798, L2[Seifried & Walch(2015)]seifried2015 Seifried, D., & Walch, S. 2015, , 452, 2410[Shibata & Matsumoto(1991)]shibata1991 Shibata, K., & Matsumoto, R. 1991, , 353, 633[Soler et al.(2013)Soler, Hennebelle, Martin, Miville-Deschênes, Netterfield, & Fissel]soler2013 Soler, J. D., Hennebelle, P., Martin, P. G., et al. 2013, , 774, 128[Sugitani et al.(2011)Sugitani, Nakamura, Watanabe, Tamura, Nishiyama, Nagayama, Kandori, Nagata, Sato, Gutermuth, Wilson, & Kawabe]sugitani2011 Sugitani, K., Nakamura, F., Watanabe, M., et al. 2011, , 734, 63[Tamura & Kwon(2015)]tamura2015 Tamura, M., & Kwon, J. 2015, Young Stellar Objects and their Environment, ed. L. Kolokolova, J. Hough, & A.-C. Levasseur-Regourd, 162[Tang et al.(2010)Tang, Ho, Koch, & Rao]tang2010 Tang, Y.-W., Ho, P. T. P., Koch, P. M., & Rao, R. 2010, , 717, 1262[Thaddeus et al.(1972)Thaddeus, Kutner, Penzias, Wilson, & Jefferts]thaddeus1972 Thaddeus, P., Kutner, M. L., Penzias, A. A., Wilson, R. W., & Jefferts, K. B. 1972, , 176, L73[Tomisaka(2015)]tomisaka2015 Tomisaka, K. 2015, , 807, 47[Troland & Crutcher(2008)]troland2008 Troland, T. H., & Crutcher, R. M. 2008, , 680, 457[Vaillancourt & Matthews(2012)]vaillancourt2012 Vaillancourt, J. E., & Matthews, B. C. 2012, , 201, 13[Vázquez-Semadeni et al.(2011)Vázquez-Semadeni, Banerjee, Gómez, Hennebelle, Duffin, & Klessen]vazquezsemadeni2011 Vázquez-Semadeni, E., Banerjee, R., Gómez, G. C., et al. 
2011, , 414, 2511[Ward-Thompson et al.(2000)Ward-Thompson, Kirk, Crutcher, Greaves, Holland, & André]wardthompson2000 Ward-Thompson, D., Kirk, J. M., Crutcher, R. M., et al. 2000, , 537, L135[Ward-Thompson et al.(2009)Ward-Thompson, Sen, Kirk, & Nutter]wardthompson2009 Ward-Thompson, D., Sen, A. K., Kirk, J. M., & Nutter, D. 2009, , 398, 394[Ward-Thompson et al.(2007)Ward-Thompson, Di Francesco, Hatchell, Hogerheijde, Nutter, Bastien, Basu, Bonnell, Bowey, Brunt, Buckle, Butner, Cavanagh, Chrysostomou, Curtis, Davis, Dent, van Dishoeck, Edmunds, Fich, Fiege, Fissel, Friberg, Friesen, Frieswijk, Fuller, Gosling, Graves, Greaves, Helmich, Hills, Holland, Houde, Jayawardhana, Johnstone, Joncas, Kirk, Kirk, Knee, Matthews, Matthews, Matzner, Moriarty-Schieven, Naylor, Padman, Plume, Rawlings, Redman, Reid, Richer, Shipman, Simpson, Spaans, Stamatellos, Tsamis, Viti, Weferling, White, Whitworth, Wouterloot, Yates, & Zhu]wardthompson2007 Ward-Thompson, D., Di Francesco, J., Hatchell, J., et al. 2007, , 119, 855[Ward-Thompson et al.(2010)Ward-Thompson, Kirk, André, Saraceno, Didelon, Könyves, Schneider, Abergel, Baluteau, Bernard, Bontemps, Cambrésy, Cox, Di Francesco, di Giorgio, Griffin, Hargrave, Huang, Li, Martin, Men'shchikov, Minier, Molinari, Motte, Olofsson, Pezzuto, Russeil, Sauvage, Sibthorpe, Spinoglio, Testi, White, Wilson, Woodcraft, & Zavagno]wardthompson2010 Ward-Thompson, D., Kirk, J. M., André, P., et al. 2010, , 518, L92[Ward-Thompson et al.(2016)Ward-Thompson, Pattle, Kirk, Marsh, Buckle, Hatchell, Nutter, Griffin, Di Francesco, André, Beaulieu, Berry, Broekhoven-Fiene, Currie, Fich, Jenness, Johnstone, Kirk, Mottram, Pineda, Quinn, Sadavoy, Salji, Tisi, Walker-Smith, White, Hill, Könyves, Palmeirim, & Pezzuto]wardthompson2016 Ward-Thompson, D., Pattle, K., Kirk, J. M., et al. 2016, , 463, 1008[White et al.(2015)White, Drabek-Maunder, Rosolowsky, Ward-Thompson, Davis, Gregson, Hatchell, Etxaluze, Stickler, Buckle, Johnstone, Friesen, Sadavoy, Natt, Currie, Richer, Pattle, Spaans, Francesco, & Hogerheijde]white2015 White, G. J., Drabek-Maunder, E., Rosolowsky, E., et al. 2015, , 447, 1996

§ APPENDIX A: REPEATABILITY OF POL-2 OBSERVATIONS

In this appendix we present a demonstration of the repeatability of POL-2 observations of extended structure. These results are a subset of a larger study to be presented in the POL-2 commissioning paper (Bastien et al. 2017), to which we refer the reader for further information. In order to test the repeatability of our observations, we performed jack-knife tests on our observations of OMC 1. We divided the data into odd- and even-numbered scans, the half-vector maps produced from which are shown in Figure <ref>. This division of scans is intentionally arbitrary, and is used to show the variation that might be expected between any two samples, uncorrelated in any observational property. We see excellent consistency between the two maps.

§ APPENDIX B: COMPARABILITY OF POL-2 TO PREVIOUS OBSERVATIONS

In this appendix we compare the POL-2 map of OMC 1 to previous observations of OMC 1 made using the previous JCMT polarimeter, SCUPOL. There is no a priori reason to expect identical performance from SCUPOL and POL-2; the two instruments were/are mounted on different cameras (SCUBA and SCUBA-2 respectively; c.f. <cit.>), and take data in different modes (c.f. <cit.>; Bastien et al. 2017). However, the two instruments take data at the same wavelength and resolution, and so the data taken ought to be directly comparable. The SCUPOL observations of OMC 1 were published as part of the SCUPOL Legacy Catalogue <cit.>.
Figure <ref> shows the SCUPOL data superposed on the POL-2 data. It can be seen that the POL-2 and SCUPOL half-vectors show a very similar morphology, but that the polarization fractions seen in the SCUPOL half-vectors are slightly larger than in the POL-2 half-vectors. We believe that this is due to the lower signal-to-noise ratio of the older SCUPOL data. The similarity in the polarization angles of the POL-2 and SCUPOL half-vectors is shown quantitatively in Figure <ref>. The POL-2 and SCUPOL polarization angles are plotted at positions matched to within one JCMT beam (14.1 arcsec). The two half-vector sets show correlated polarization angles, and in fact the POL-2 and SCUPOL polarization angles are consistent with a 1:1 relationship.
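As an illustration of the matching behind this comparison, the sketch below pairs vectors from two catalogues within one beam and folds out the 180-degree ambiguity before comparing position angles; the catalogue arrays, column meanings and function name are assumptions for illustration, not the actual pipeline.

import numpy as np

BEAM = 14.1 / 3600.0    # one JCMT 850-um beam, in degrees

def match_position_angles(ra1, dec1, pa1, ra2, dec2, pa2):
    """Pair each vector of catalogue 1 (e.g. POL-2) with catalogue-2
    (e.g. SCUPOL) vectors lying within one beam, and return (pa1, pa2)
    pairs with the 180-deg ambiguity folded out for a 1:1 comparison."""
    ra2, dec2 = np.asarray(ra2), np.asarray(dec2)
    pairs = []
    for i in range(len(ra1)):
        # Small-angle sky separation; cos(dec) corrects the RA spacing.
        sep = np.hypot((ra2 - ra1[i]) * np.cos(np.deg2rad(dec1[i])),
                       dec2 - dec1[i])
        for j in np.flatnonzero(sep <= BEAM):
            d = (pa2[j] - pa1[i] + 90.0) % 180.0 - 90.0   # in [-90, 90)
            pairs.append((pa1[i], pa1[i] + d))
    return np.array(pairs)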
A Recurrent Neural Model with Attention for the Recognition of Chinese Implicit Discourse Relations
Samuel Rönnqvist, Niko Schenk, Christian Chiarcos
===================================================================================================

We introduce an attention-based Bi-LSTM for Chinese implicit discourse relations and demonstrate that modeling argument pairs as a joint sequence can outperform word order-agnostic approaches. Our model benefits from a partial sampling scheme and is conceptually simple, yet achieves state-of-the-art performance on the Chinese Discourse Treebank. We also visualize its attention activity to illustrate the model's ability to selectively focus on the relevant parts of an input sequence.

§ INTRODUCTION

^∗Both first authors contributed equally to this work. True text understanding is one of the key goals in Natural Language Processing and requires capabilities beyond the lexical semantics of individual words or phrases. Natural language descriptions are typically driven by an inter-sentential coherent structure, exhibiting specific discourse properties, which in turn contribute significantly to the global meaning of a text. Automatically detecting how meaning units are organized benefits practical downstream applications, such as question answering <cit.>, recognizing textual entailment <cit.>, sentiment analysis <cit.>, or text summarization <cit.>. Various formalisms in terms of semantic coherence frameworks have been proposed to account for these contextual assumptions <cit.>. The annotation schemata of the Penn Discourse Treebank <cit.> and the Chinese Discourse Treebank <cit.>, for instance, define discourse units as syntactically motivated character spans in the text, augmented with relations pointing from the second argument (Arg2, prototypically, a discourse unit associated with an explicit discourse marker) to its antecedent, i.e., the discourse unit Arg1. Relations are labeled with a relation type (its sense) and the associated discourse marker. Both the PDTB and the CDTB distinguish explicit from implicit relations depending on the presence of such a marker (e.g., because / 因).[The set of relation types and senses is completed by alternative lexicalizations (AltLex/discourse marker rephrased), and entity relations (EntRel/anaphoric coherence).] Sense classification for implicit relations is by far more challenging because the argument pairs lack the marker as an important feature. Consider, for instance, the following example from the CDTB as implicit Conjunction:

Arg1: 会谈 就 一些 原则 和 具体 问题 进行 了 深入 讨论 , 达成 了 一些 谅解 (In the talks, they discussed some principles and specific questions in depth, and reached some understandings.)

Arg2: 双方 一致 认为 会谈 具有 积极 成果 (Both sides agree that the talks have positive results.)

Motivation: Previous work on implicit sense labeling is heavily feature-rich and requires domain-specific, semantic lexicons <cit.>. Only recently, resource-lean architectures have been proposed. These promising neural methods attempt to infer latent representations appropriate for implicit relation classification <cit.>. So far, unfortunately, these models have been evaluated only on four top-level senses, sometimes even with inconsistent evaluation setups.[E.g., four binary classifiers vs. four-way classification.]
Furthermore, most systems have initially been designed for the English PDTB and involve complex, task-specific architectures <cit.>, while discourse modeling techniques for Chinese have received very little attention in the literature and are still seriously underrepresented in terms of publicly available systems. What is more, over 80% of all discourse relations in Chinese are implicit, compared to only 52% in English <cit.>. Recently, in the context of the CoNLL 2016 shared task <cit.>, a first independent evaluation platform beyond class level has been established. Surprisingly, the best performing neural architectures to date are standard feedforward networks, cf. K16-2004, K16-2005, K16-2010. Even though these specific models completely ignore word order within arguments, such feedforward architectures have been claimed by Rutherford et al. (2016) to generally outperform any thoroughly-tuned recurrent architecture.

Our Contribution: In this work, we release the first attention-based recurrent neural sense classifier, specifically developed for Chinese implicit discourse relations. Inspired by Zhou et al. (2016), our system is a practical adaptation of the recent advances in relation modeling, extended by a novel sampling scheme. Contrary to previous assertions by Rutherford et al. (2016), our model demonstrates superior performance over traditional bag-of-words approaches with feedforward networks by treating discourse arguments as a joint sequence. We evaluate our method within an independent framework and show that it performs very well beyond standard class-level predictions, achieving state-of-the-art accuracy on the CDTB test set. We illustrate how our model's attention mechanism provides means to highlight those parts of an input sequence that are relevant for the classification decision, and thus, it may enable a better understanding of the implicit discourse parsing problem. Our proposed network architecture is flexible and largely language-independent as it operates only on word embeddings. It stands out due to its structural simplicity and builds a solid ground for further development towards other textual domains.

§ APPROACH

We propose the use of an attention-based bidirectional Long Short-Term Memory <cit.> network to predict senses of discourse relations. The model draws upon previous work on LSTMs, in particular their bidirectional mode of operation <cit.>, attention mechanisms for recurrent models <cit.>, and the combined use of these techniques for entity relation recognition in annotated sequences <cit.>. More specifically, our model is a flexible recurrent neural network with capabilities to sequentially inspect tokens and to highlight which parts of the input sequence are most informative for the discourse relation recognition task, using the weighting provided by the attention mechanism. Furthermore, the model benefits from a novel sampling scheme for arguments, as elaborated below. The system is learned in an end-to-end manner and consists of multiple layers, which are illustrated in Figure <ref>. First, token sequences are taken as input and special markers (ARG1, /ARG1, etc.) are inserted into the corresponding positions to inform the model on the start and end points of argument spans. This way, we can ensure a general flexibility in modeling discourse units and could easily extend them with additional context, for instance. In our experiments on implicit arguments, only the tokens in the respective spans are considered.
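As a minimal sketch of this input encoding, and of the partial argument sampling scheme introduced later in this section, the following fragment builds the marker-delimited sequences; the marker strings and function names are illustrative assumptions, not the released implementation.

def encode_pair(arg1_tokens, arg2_tokens):
    """Joint token sequence for one Arg1-Arg2 pair, delimited by the
    span markers that tell the model where each argument starts/ends."""
    return (["<ARG1>"] + arg1_tokens + ["</ARG1>"]
            + ["<ARG2>"] + arg2_tokens + ["</ARG2>"])

def expand_sample(arg1_tokens, arg2_tokens, label):
    """Partial argument sampling: one labelled pair becomes the pair
    (duplicated) plus each argument on its own, all with the same sense."""
    pair = encode_pair(arg1_tokens, arg2_tokens)
    arg1 = ["<ARG1>"] + arg1_tokens + ["</ARG1>"]
    arg2 = ["<ARG2>"] + arg2_tokens + ["</ARG2>"]
    return [(pair, label), (pair, label), (arg1, label), (arg2, label)]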
Note that, unlike previous works, our approach models Arg1-Arg2 pairs as a joint sequence and does not first compute intermediate representations of arguments separately. Second, an input layer encodes tokens using one-hot vector representations (t_i for tokens at positions i ∈ [1,k]), and a subsequent embedding layer provides a dense representation (e_i) to serve as input for the recurrent layers. The embedding layer is initialized using pre-trained word vectors, in our case 300-dimensional Chinese Gigaword vectors <cit.>.[<http://www.cs.brandeis.edu/~clp/conll16st/dataset.html>] These embeddings are further tuned as the network is trained towards the prediction task. Embeddings for unknown tokens, e.g., markers, are trained by backpropagation only. Note that tokens, markers and the pre-trained vectors represent the only source of information for the prediction task. For the recurrent setup, we use a layer of LSTM networks in a bidirectional manner, in order to better capture dependencies between parts of the input sequence by inspection of both left and right-hand-side contexts at each time step. The LSTM holds a state representation as a continuous vector passed to the subsequent time step, and it is capable of modeling long-range dependencies due to its gated memory. The forward (A') and backward (A'') LSTMs traverse the sequence e_i, producing sequences of vectors h'_i and h''_i respectively, which are then summed together (indicated by ⊕ in Figure <ref>). The resulting sequence of vectors h_i is reduced into a single vector and fed to the final softmax output layer in order to classify the sense label y of the discourse relation. This vector may be obtained either as the final vector h produced by an LSTM, or through pooling of all h_i, or by using attention, i.e., as a weighted sum over h_i. While the model may be somewhat more difficult to optimize using attention, it provides the added benefit of interpretability, as the weights highlight to what extent the classifier considers the LSTM state vectors at each token during modeling. This is particularly interesting for discourse parsing, as most previous approaches have provided little support for pinpointing the driving features in each argument span. Finally, the attention layer contains the trainable vector w (of the same dimensionality as the vectors h_i), which is used to dynamically produce a weight vector α over time steps i by: α = softmax(w^T tanh(H)), where H is a matrix consisting of the vectors h_i. The output layer r is the weighted sum of the vectors in H: r = H α^T.

Partial Argument Sampling: For the purpose of enlarging the instance space of training items in the CDTB, and thus, in order to improve the predictive performance of the model, we propose a novel partial sampling scheme of arguments, whereby the model is trained and validated on sequences containing both arguments, as well as single arguments. A data point (a_1,a_2,y), with a_i being the token sequence of argument i, is expanded into {(a_1,a_2,y), (a_1,a_2,y), (a_1,y), (a_2,y)}. We duplicate bi-argument samples (a_1,a_2,y) (in training and development data only) to balance their frequencies against single-argument samples. Two lines of motivation support the inclusion of single argument training examples, grounded in linguistics and machine learning, respectively. First, it has been shown that single arguments in isolation can evoke a strong expectation towards a certain implicit discourse relation, cf.
Asr and Demberg (2015) and, in particular, Rohde and Horton in their psycholinguistic study on implicit causality verbs. Second, the procedure may encourage the model to learn better representations of individual argument spans in support of modeling of arguments in composition, cf. LeCun et al. (2015). Due to these aspects, we believe this data augmentation technique to be effective in reinforcing the overall robustness of our model.

Implementational Details: We train the model using fixed-length sequences of 256 tokens with zero padding at the beginning of shorter sequences and truncate longer ones. Each LSTM has a vector dimensionality of 300, matching the embedding size. The model is regularized by a 0.5 dropout rate between the layers and weight decay (2.5×10^-6) on the LSTM inputs. We employ Adam optimization <cit.> using the cross-entropy loss function with a mini-batch size of 80.[The model is implemented in Keras <https://keras.io/>.]

§ EVALUATION

We evaluate our recurrent model on the CoNLL 2016 shared task data[<http://www.cs.brandeis.edu/~clp/conll16st/>], which include the official training, development and test sets of the CDTB; cf. Table <ref> for an overview of the implicit sense distribution.[Note that, in the CDTB, implicit relations appear almost three times more often than explicit relations. Out of these, 65% appear within the same sentence. Finally, 25 relations in the training set have two labels.] In accordance with previous setups <cit.>, we treat entity relations (EntRel) as implicit and exclude AltLex relations. In the evaluation, we focus on the sense-only track, the subtask for which gold arguments are provided and a system is supposed to label a given argument pair with the correct sense. The results are shown in Table <ref>. With our proposed architecture it is possible to correctly label 257/352 (73.01%) of implicit relations on the test set, outperforming the best feedforward system of K16-2004 and all other word order-agnostic approaches. Development and test set performances suggest the robustness of our approach and its ability to generalize to unseen data.

Ablation Study: We perform an ablation study to quantitatively assess the contribution of two of the characteristic aspects of our model. First, we compare the use of the attention mechanism against the simpler alternative of feeding the final LSTM hidden vectors (h'_k and h''_1) directly to the output layer. When attention is turned off, this yields an absolute decrease in performance of 2.70% on the test set, which is substantial and significant according to a Welch two-sample t-test (p < .001). Second, we independently compare the use of the partial sampling scheme against training on the standard argument pairs in the CDTB. Here, the absence of the partial sampling scheme yields an absolute decrease in accuracy of 5.74% (p < .001), which demonstrates its importance for achieving competitive performance on the task.

Performance on the PDTB: As a side experiment, we investigate the model's language independence by applying it to the implicit argument pairs of the English PDTB. Due to computational time constraints we do not optimize hyperparameters, but instead train the model using identical settings as for Chinese, which is expected to lead to suboptimal performance on the evaluation data.
Implementational Details: We train the model using fixed-length sequences of 256 tokens, with zero padding at the beginning of shorter sequences, and truncate longer ones. Each LSTM has a vector dimensionality of 300, matching the embedding size. The model is regularized by a 0.5 dropout rate between the layers and weight decay (2.5 × 10^-6) on the LSTM inputs. We employ Adam optimization <cit.> using the cross-entropy loss function with a mini-batch size of 80.[The model is implemented in Keras <https://keras.io/>.]

§ EVALUATION

We evaluate our recurrent model on the CoNLL 2016 shared task data,[<http://www.cs.brandeis.edu/~clp/conll16st/>] which include the official training, development and test sets of the CDTB; cf. Table <ref> for an overview of the implicit sense distribution.[Note that, in the CDTB, implicit relations appear almost three times more often than explicit relations. Of these, 65% appear within the same sentence. Finally, 25 relations in the training set have two labels.] In accordance with previous setups <cit.>, we treat entity relations (EntRel) as implicit and exclude AltLex relations. In the evaluation, we focus on the sense-only track, the subtask for which gold arguments are provided and a system is supposed to label a given argument pair with the correct sense.

The results are shown in Table <ref>. With our proposed architecture it is possible to correctly label 257/352 (73.01%) of the implicit relations on the test set, outperforming the best feedforward system of <cit.> and all other word order-agnostic approaches. Development and test set performances suggest the robustness of our approach and its ability to generalize to unseen data.

Ablation Study: We perform an ablation study to quantitatively assess the contribution of two of the characteristic aspects of our model. First, we compare the use of the attention mechanism against the simpler alternative of feeding the final LSTM hidden vectors (h'_k and h''_1) directly to the output layer. When attention is turned off, this yields an absolute decrease in performance of 2.70% on the test set, which is substantial and significant according to a Welch two-sample t-test (p < .001). Second, we independently compare the use of the partial sampling scheme against training on the standard argument pairs in the CDTB. Here, the absence of the partial sampling scheme yields an absolute decrease in accuracy of 5.74% (p < .001), which demonstrates its importance for achieving competitive performance on the task.

Performance on the PDTB: As a side experiment, we investigate the model's language independence by applying it to the implicit argument pairs of the English PDTB. Due to computational time constraints we do not optimize hyperparameters, but instead train the model using settings identical to those for Chinese, which is expected to lead to suboptimal performance on the evaluation data. Nevertheless, we measure 27.09% accuracy on the PDTB test set (surpassing the majority class baseline of 22.01%), which shows that the model has the potential to generalize across implicit discourse relations in a different language.

Visualizing Attention Weights: Finally, in Figure <ref>, we illustrate the learned attention weights, which pinpoint important subcomponents within a given implicit discourse relation. For the implicit relation shown, the weights indicate a peak at the argument boundary, establishing a connection between the semantically related terms understandings–agree. Most EntRels show an opposite trend: here second arguments exhibit larger intensities than Arg1, as most entity relations follow the characteristic writing style of newspapers by adding additional information by reference to the same entity.

§ SUMMARY & OUTLOOK

In this work, we have presented the first attention-based recurrent neural sense labeler specifically developed for Chinese implicit discourse relations. Its ability to model discourse units sequentially and jointly has been shown to be highly beneficial, both in terms of state-of-the-art performance on the CDTB (outperforming word order-agnostic feedforward approaches), and in terms of insightful observations into the inner workings of the model through its attention mechanism. The architecture is structurally simple, benefits from partial argument sampling, and can be easily adapted to similar relation recognition tasks. In future work, we intend to extend our approach to different languages and domains, e.g., to the recent data sets on narrative story understanding or question answering <cit.>. We believe that recurrent modeling of implicit discourse information can be a driving force in successfully handling such complex semantic processing tasks.[The code involved in this study is publicly available at <http://www.acoli.informatik.uni-frankfurt.de/resources/>.]

§ ACKNOWLEDGMENTS

The authors would like to thank Ayah Zirikly, Philip Schulz and Wei Ding for their very helpful suggestions on an early draft version of the paper, and also thank the anonymous reviewers for their valuable feedback and insightful comments. We are grateful to Farrokh Mehryary for technical support with the attention layer implementation. Computational resources were provided by CSC – IT Centre for Science, Finland, and Arcada University of Applied Sciences, Helsinki, Finland. Our research at Goethe University Frankfurt was supported by the project 'Linked Open Dictionaries (LiODi, 2015-2020)', funded by the German Ministry for Education and Research (BMBF).
{ "authors": [ "Samuel Rönnqvist", "Niko Schenk", "Christian Chiarcos" ], "categories": [ "cs.CL", "cs.AI", "cs.LG", "cs.NE" ], "primary_category": "cs.CL", "published": "20170426131012", "title": "A Recurrent Neural Model with Attention for the Recognition of Chinese Implicit Discourse Relations" }
Solar wind collisional heating
O. Pezzi
====================================

To properly describe heating in weakly collisional turbulent plasmas such as the solar wind, inter-particle collisions should be taken into account. Collisions can convert ordered energy into heat by means of irreversible relaxation towards thermal equilibrium. Recently, Pezzi et al. (Phys. Rev. Lett., vol. 116, 2016, p. 145001) showed that the plasma collisionality is enhanced by the presence of fine structures in velocity space. Here, the analysis is extended by directly comparing the effects of the fully nonlinear Landau operator and a linearized Landau operator. By focusing on the relaxation towards equilibrium of an out-of-equilibrium distribution function in a homogeneous force-free plasma, we point out that retaining nonlinearities in the collisional operator is significant for quantifying the importance of collisional effects. Although the presence of several characteristic times associated with the dissipation of different phase space structures is recovered with both the nonlinear and the linearized operators, the influence of these times differs in the two cases. With the linearized operator, the recovered characteristic times are systematically larger than with the fully nonlinear operator, suggesting that fine velocity structures are dissipated more slowly if nonlinearities are neglected in the collisional operator.

§ INTRODUCTION

Since the beginning of the last century, many theoretical efforts have been devoted to modeling natural and laboratory plasmas. One of the first attempts to describe the interplanetary medium and its interaction with the planetary magnetospheres was conducted by S. Chapman and V.C.A. Ferraro <cit.>, widely considered the fathers of magnetohydrodynamic (MHD) theory. Their main intuition was to treat plasmas, approximated by neutral conducting fluids, as self-consistent media. One of the basic assumptions of this framework is that inter-particle collisions are sufficiently strong to maintain a local thermodynamical equilibrium, i.e. the particle velocity distribution function (VDF) is close to the equilibrium Maxwellian shape. This approach is still widely adopted to analyze plasma dynamics at large scales, and many models have been developed to study the features of MHD turbulence <cit.>.

One of the most studied natural plasmas is the solar wind, the high-temperature, low-density, supersonic flow emitted from the solar atmosphere. The solar wind is a strongly turbulent flow: the typical Reynolds number is Re ≈ 10^5 <cit.>; fluctuations are broadband and often exhibit power-law spectra; several indicators of intermittency are routinely observed <cit.>. Although the solar wind is usually approached in terms of MHD turbulence, spacecraft in-situ measurements reveal much more complex features, which go beyond the fluid MHD approach. Once energy is transferred by turbulence towards smaller scales, close to the ion inertial scales, kinetic physics signatures are often observed <cit.>.
The particle VDF often displays a distorted, out-of-equilibrium shape characterized by the presence of non-Maxwellian features such as temperature anisotropies, particle beams along the local magnetic field direction and ring-like structures <cit.>. The principal models that take into account kinetic effects are based on the assumption that the plasma is collisionless, i.e. that collisions are far too weak to produce any significant effect on the plasma dynamics <cit.>.

We would point out that, in order to comprehend the heating mechanisms of the solar wind, collisional effects should be considered. Indeed, collisions are the only mechanism able to produce irreversible heating from a thermodynamic point of view. Furthermore, to show that collisions can be neglected, the shape of the particle VDF is usually assumed to be close to the equilibrium Maxwellian <cit.>. This approximation may prove problematic for weakly collisional turbulent plasmas, where kinetic physics strongly distorts the particle VDFs and produces fine structure in velocity space. Collisional effects, which explicitly depend on gradients in velocity space, may be enhanced by the presence of these small-scale structures in velocity space <cit.> (hereafter Paper I). Indeed, in Paper I we showed that the collisional thermalization of fine velocity structures occurs on much smaller times than the usual Spitzer-Harm time <cit.> ν_SH^-1 (being ν_SH ≃ 8 × (0.714 π n e^4 lnΛ)/(m^1/2 (3 k_B T)^3/2), where n, e, lnΛ, m, k_B and T are respectively the particle number density, the unit electric charge, the Coulomb logarithm, the particle mass, the Boltzmann constant and the plasma temperature). The smallest characteristic times may be comparable with the characteristic times of other physical processes. Therefore, collisions could play a significant role in the dissipation of strong gradients in the VDF, thus contributing to the plasma heating.

In this paper we focus on the importance of retaining nonlinearities in the collisional operators. In particular, by means of numerical simulations of a homogeneous force-free plasma, we describe the collisional relaxation towards equilibrium of an initial VDF which exhibits strong non-Maxwellian signatures. Collisions among particles of the same species are here modeled through the fully nonlinear Landau operator and a linearized Landau operator. A detailed comparison of the effects of the two operators indicates that retaining nonlinearities in the collisional integral is crucial to give the proper importance to collisional effects. Indeed, both operators are able to highlight the presence of several characteristic times associated with the dissipation of fine velocity structures. However, the magnitude of these times is different if nonlinearities are neglected: in the linearized operator case, the characteristic times are systematically larger than in the case of the fully nonlinear operator. This indicates that, when nonlinearities are not taken into account in the mathematical form of the collisional operator, fine velocity structures are dissipated much more slowly. The results described here support the idea that, to properly quantify the enhancement of collisional effects and, hence, to correctly compare collisional times with other dynamical times, it is important to adopt nonlinear collisional operators.
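For reference, the Spitzer-Harm frequency quoted above is straightforward to evaluate numerically. The sketch below uses cgs units and assumed, merely illustrative proton parameters at 1 AU (n = 5 cm^-3, T = 10^5 K) together with an assumed Coulomb logarithm lnΛ = 20:

```python
import numpy as np

# cgs constants: elementary charge (esu), proton mass (g), Boltzmann (erg/K)
e, m_p, k_B = 4.803e-10, 1.673e-24, 1.381e-16
n, T, lnL = 5.0, 1.0e5, 20.0          # assumed 1 AU proton values

nu_SH = 8.0 * 0.714 * np.pi * n * e**4 * lnL / (np.sqrt(m_p) * (3.0 * k_B * T)**1.5)
print(f"nu_SH ~ {nu_SH:.2e} s^-1 -> tau_SH ~ {1.0 / nu_SH / 86400.0:.0f} days")
```

For these values the collisional time is of the order of tens of days, indeed much larger than the typical dynamical times of solar wind turbulence.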
We would remark that, since the Landau operator is demanding from a computational perspective, self-consistent high-resolution simulations cannot currently be afforded, and we are forced to restrict ourselves to the case of a force-free homogeneous plasma, where both force and advection terms are neglected. This approximation represents a caveat of the work presented here, and future studies will be devoted to generalizing the results shown here to the self-consistent case.

The paper is organized as follows: in Sec. <ref> the solar wind heating problem is revisited in order to address and motivate our work. Then, in Sec. <ref> we give a brief description of the numerical codes and the adopted methods of analysis. Numerical results of our simulations are also reported and discussed in detail. Finally, in Sec. <ref> we conclude and summarize.

§ SOLAR WIND HEATING: A HUGE PROBLEM

As introduced above, the solar wind is a weakly collisional, strongly turbulent medium <cit.>. Several observations indicate that the solar wind is incessantly heated during its travel through the heliosphere: the temperature decay with radial distance is indeed much slower than the decay expected within adiabatic models of the wind expansion <cit.>. Therefore, some local heating mechanisms must play a significant role in supplying the energy needed to heat the plasma. Numerous scenarios have been proposed to understand the plasma heating, and a long-standing debate about which processes are preferred is still waiting for a clear and definitive answer [see <cit.> and references therein]. Among these processes, it is widely known that turbulence efficiently contributes to the local heating of the solar wind <cit.>, since it can transfer a significant amount of energy towards smaller scales, where dissipative mechanisms are at work. In fact, in a turbulent flow much more energy is transferred towards smaller scales than in a laminar flow: the ratio of the energy transfer flux due to turbulence at a given scale to the heat production due to dissipation at the same scale is proportional to the Reynolds number Re, thus indicating that the energy transfer towards smaller scales gets more efficient as the flow becomes more turbulent.

In the simple neutral fluid scenario, the cascade is arrested once the dissipative scale is reached <cit.>. On the other hand, the cascade evolves in a more complex way in a plasma: the presence of other processes (for example dispersion and kinetic effects) strongly modifies the cascade before the dissipative scale is reached. A relatively wide agreement has been achieved about the importance of turbulence for transferring energy towards smaller scales. Instead, many scenarios have been proposed to explain the transition from the inertial range towards the kinetic scales and the nature of the dissipative processes. These scenarios are often based on the "collisionless" assumption, which is justified by the fact that the Spitzer-Harm collisional time <cit.> is much larger than the other dynamical times. We would remark that two important caveats should be considered.

First, any mechanism which does not consider collisions is not able to describe the last part of the heating process, namely the heat production due to the irreversible dissipation of phase space structures and the approach towards thermal equilibrium.
For example, several mechanisms (e.g. nonlinear waves) can indeed increase the particle temperature, evaluated as the second-order moment of the particle distribution function, by producing non-Maxwellian features such as beams of trapped particles. However, this temperature growth due to beam production does not represent a temperature growth in the thermodynamic sense, because the presence of the beam makes the system out of equilibrium. The particle beam can instead be interpreted as a form of free energy stored in the VDF. This energy is not, in general, converted into heat by means of irreversible processes, but can also be transformed into other forms of ordered energy (e.g. through micro-instabilities) <cit.>. Collisions are the only mechanism able to degrade this information into heat by approaching thermal equilibrium, thus producing heating in the general thermodynamic, irreversible sense.

Second, the evaluation of the Spitzer-Harm collisional time strictly assumes that the VDF shape is close to the equilibrium Maxwellian. This assumption may not hold in the solar wind <cit.>, where the VDF shape is strongly perturbed by kinetic turbulence. In this direction, by focusing on the collisional relaxation in a homogeneous force-free plasma where collisions are modeled with the fully nonlinear Landau operator <cit.>, we recently showed that fine velocity structures are dissipated much faster than global non-thermal features such as temperature anisotropy (Paper I). The entropy production due to the relaxation of the VDF towards equilibrium occurs on several characteristic times. These characteristic times are associated with the dissipation of particular velocity space structures and can be much smaller than the Spitzer-Harm time <cit.>, indicating that collisions could effectively compete with other processes (e.g. micro-instabilities). In this perspective, high-resolution measurements of the particle VDF in the solar wind are crucial for a proper description of the heating problem <cit.>.

In principle, the combination of the turbulent nature of the solar wind with its weak collisionality may constitute a new scenario for describing solar wind heating. In fact, turbulence is able to transfer energy towards smaller scales. Then, when kinetic scales are reached, since the plasma is weakly collisional, the VDF becomes strongly distorted and exhibits non-Maxwellian features, such as beams, anisotropies and ring-like structures <cit.>. The presence of strong gradients in velocity space tends to naturally enhance the effect of collisions, which may ultimately become efficient at dissipating these structures and producing heat.

Based on these considerations, numerous studies have been recently conducted in order to take into account collisional effects in a weakly collisional plasma such as the solar wind <cit.>, where collisions are usually introduced by means of a collisional operator on the right-hand side of the Vlasov equation. The choice of the proper collisional operator remains an open problem. Several derivations from first principles (e.g. from the Liouville equation) indicate that the most general collisional operators for plasmas are the Balescu-Lenard operator <cit.> or the Landau operator <cit.>. Both operators are nonlinear "Fokker-Planck"-like operators which involve velocity space derivatives and three-dimensional integrals.
The Landau operator introduces an upper cut-off of the integrals at the Debye length to avoid the divergence at large impact parameters, while the Balescu-Lenard operator solves this divergence in a more consistent way through the dispersion equation. Therefore, the Balescu-Lenard operator is more general than the Landau operator from this point of view. However, both operators are derived by assuming that the plasma is not extremely far from thermal equilibrium. Hence, both operators could fail to describe inter-particle collisions in a strongly turbulent system. The numerical evaluation is also much more difficult for the Balescu-Lenard operator than for the Landau operator, because it involves the evaluation of the dispersion function. Finally, we would also point out that, as far as we know, an explicit derivation of the Boltzmann operator for plasmas starting from the Liouville equation does not exist <cit.>. Although the adoption of the Boltzmann operator for describing collisional effects in plasmas is questionable from a theoretical perspective, it still represents a valid option, since Boltzmann and Fokker-Planck-like operators such as the Landau one are intrinsically similar <cit.>.

The computational cost of evaluating both the Landau and Balescu-Lenard operators numerically is huge: for N gridpoints along each direction of the 3D-3V numerical phase space (3D in physical space and 3D in velocity space), the computation of the Landau operator would require about N^9 operations at each time step. In fact, for each point of the six-dimensional grid, a three-dimensional integral must be computed. To avoid this numerical complexity, several simplified operators have been proposed. We may distinguish these simpler operators into two classes. The first type of operators, such as the Bhatnagar-Gross-Krook <cit.> and the Dougherty operators <cit.>, models collisions in the realistic three-dimensional velocity space by adopting a simpler structure of the operator. The second class of collisional operators works instead in a reduced, one-dimensional velocity space, assuming that the dynamics mainly occurs in one direction. Although this approach is quite "unphysical" (collisions naturally act in three dimensions), these operators can satisfactorily model collisions in laboratory plasma devices, such as Penning-Malmberg traps, where the plasma is confined in a long and thin column and the dynamics occurs mainly along a single direction <cit.>.

§ NUMERICAL APPROACH AND SIMULATION RESULTS

As described above, to highlight the importance of the nonlinearities present in the collisional operator, here we compare the effects of the fully nonlinear Landau operator with a model linearized Landau operator, obtained by simplifying the structure of the Landau operator coefficients. We restrict ourselves to the case of a force-free homogeneous plasma and we only model collisions between particles of the same species. Our interest is in fact to understand how collisional effects change when the mathematical kernel of the collisional operator is modified. Based on these assumptions, we numerically integrate the following dimensionless collisional evolution equations for the particle distribution function f(𝐯,t):

∂f(𝐯,t)/∂t = π (3/2)^3/2 ∂/∂v_i ∫ d^3v' U_ij(𝐮) [ f(𝐯',t) ∂f(𝐯,t)/∂v_j - f(𝐯,t) ∂f(𝐯',t)/∂v'_j ],

∂f(𝐯,t)/∂t = π (3/2)^3/2 ∂/∂v_i ∫ d^3v' U_ij(𝐮) [ f_0(𝐯') ∂f(𝐯,t)/∂v_j - f(𝐯,t) ∂f_0(𝐯')/∂v'_j ],

with f normalized such that ∫ d^3v f(𝐯) = n = 1, and

U_ij(𝐮) = (δ_ij u^2 - u_i u_j)/u^3,

where 𝐮 = 𝐯 - 𝐯', u = |𝐮|, and the Einstein notation is adopted.
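To make the kernel entering the equations above concrete, here is a minimal sketch (our own illustrative code) of the tensor U_ij; note that it only illustrates the geometry of the kernel, not the full three-dimensional collision integral, whose cost scales as discussed above:

```python
import numpy as np

def landau_kernel(v, vp):
    """Landau tensor U_ij(u) = (delta_ij u^2 - u_i u_j) / u^3,
    with u = v - v' the relative velocity of the colliding pair."""
    u = np.asarray(v, float) - np.asarray(vp, float)
    umag = np.linalg.norm(u)
    return (np.eye(3) * umag**2 - np.outer(u, u)) / umag**3

# U annihilates the relative velocity (U @ u = 0), the property that
# enforces momentum and energy conservation in the Landau operator.
U = landau_kernel([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(U @ np.array([1.0, -1.0, 0.0]))   # ~[0, 0, 0]
```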
In Eqs. (<ref>–<ref>), and from now on, time is scaled to the inverse Spitzer-Harm frequency ν_SH^-1 <cit.> and velocity to the particle thermal speed v_th. Details about the numerical solution of Eqs. (<ref>–<ref>) can be found in Refs. <cit.>. In Eq. (<ref>), f_0(𝐯) is the three-dimensional Maxwellian distribution function associated with the initial condition of our simulations f(𝐯,t=0), built in such a way that the density, bulk velocity and temperature of the two distributions f(𝐯,t=0) and f_0(𝐯) are equal. The two equations clearly differ because Eq. (<ref>) is a linearized model of Eq. (<ref>). The operator described in Eq. (<ref>) has in fact been obtained by linearizing the coefficients of the Landau operator. Although this linear operator does not represent the exact linearization of the Landau operator, the procedure adopted here for linearizing the operator (i.e. simplifying only the Fokker-Planck coefficients) is commonly adopted. In the following, we will note that the simulations performed with the linearized operator thermalize to the same final VDF and produce the same total entropy growth as the nonlinear simulations. This suggests that the term not included in the collisional operator (whose Fokker-Planck coefficients depend on (f-f_0)(𝐯')) is not extremely relevant for the global thermalization of the system. This approximation corresponds to retaining the gradients related to the out-of-equilibrium structures while neglecting their contribution to the integral in the 𝐯' space. In other words, here we treat gradients locally but neglect their contribution to the global Fokker-Planck coefficients.

When the simulations are completed, we perform the following multi-exponential fit <cit.> of the entropy growth ΔS to point out the presence of several characteristic times:

ΔS(t) = ∑_i=1^K ΔS_i (1 - e^-t/τ_i),

τ_i being the i-th characteristic time, ΔS_i the growth of entropy related to the characteristic time τ_i, while K is evaluated through a recursive procedure (a minimal numerical sketch of this decomposition is given below, after the first case study). This procedure has already been adopted in Paper I to highlight the importance of fine velocity structures in the entropy growth. In the following subsections, we report and describe the results of the simulations performed with two different initial distribution functions, already adopted in Paper I. The first initial condition concerns the presence of non-Maxwellian signatures, due to a strongly nonlinear wave, an Electron Acoustic Wave (EAW) <cit.>, in the core of the distribution function. The EAWs excited here are quite different from another type of electron acoustic fluctuations, which occur in a plasma composed of two electron components at different temperatures <cit.> and can also be observed in the Earth's magnetosphere <cit.>. The EAWs excited here are undamped waves whose phase speed is close to the thermal speed. It has been shown that, in the usual theory of the equilibrium Maxwellian plasma, these waves are strongly damped, while they can survive if the distribution function is locally modified (and exhibits a flat region) around the wave phase velocity. To generate the nonlinearity in the distribution function and let these waves survive, external drivers are usually adopted to force the plasma. EAWs are also characterized by the presence of phase space Bernstein-Greene-Kruskal (BGK) structures <cit.> in the core of the electron distribution function, associated with trapped particle populations.
The second initial distribution is instead a typical VDF recovered in hybrid Vlasov-Maxwell simulations of solar wind decaying turbulence <cit.>. The two simulations from which we selected our initial VDFs are quite different. Indeed, in the first case, the out-of-equilibrium structures present in the initial VDF are due to wave-particle interaction with the EAW, which is an almost monochromatic (few excited wavenumbers), electrostatic wave. On the other hand, in the second case, the initial distribution function has been strongly distorted by the presence of an electromagnetic, turbulent cascade.

§.§ First case study: wave-particle interactions and collisions

The first initial condition adopted here, a three-dimensional VDF that evolves according to Eqs. (<ref>–<ref>) in the three-dimensional velocity space, has been designed as follows. We separately performed a 1D-1V Vlasov-Poisson simulation of an electrostatic plasma composed of kinetic electrons and motionless protons, whose resolution in the z-v_z phase space domain is N_z=256, N_v_z=1601. In order to excite a large-amplitude EAW, we forced the system with an external sinusoidal electric field, which has been adiabatically turned on and off to properly trigger the wave. Fig. <ref>(a) reports the power spectral density of the electric energy E_E(k_z) as a function of the wavenumber k_z, evaluated at the final time instant of the Vlasov-Poisson simulation (when the EAW is fully developed). Few wavenumbers are significantly excited and the EAW is almost monochromatic. The features of the electric fluctuations spectrum are reflected in the shape of the distribution function, which is locally distorted around the phase speed and presents a clear BGK hole, as reported in Fig. <ref>(b). Since the gridsize in velocity space is quite small in the current simulation, relatively small velocity scales are dynamically generated during the simulation by wave-particle interaction.

Then, we selected the spatial point z_0 in the numerical domain [red vertical line in Fig. <ref>(b)], where this BGK-like phase space structure displays its maximum velocity width, and we considered the velocity profile f̂_e(v_z) = f_e(z_0,v_z), whose shape as a function of v_z is reported in Fig. <ref>(c). f̂_e is highly distorted due to nonlinear wave-particle interactions and exhibits sharp velocity gradients (bumps, holes and spikes around the resonant speed). Finally, by evaluating the density n_e, the bulk speed V_e and the temperature T_e of f̂_e, we built up the three-dimensional VDF f(v_x,v_y,v_z) = f_M(v_x) f_M(v_y) f̂_e(v_z), which represents our initial condition, f_M being the one-dimensional Maxwellian associated with f̂_e (see the sketch below). We remark that this VDF does not exhibit any temperature anisotropy, but it still exhibits strong non-Maxwellian deformations along v_z, due to the presence of trapped particles, which make the system far from thermal equilibrium. The three-dimensional velocity domain is discretized by N_v_x=N_v_y=51 and N_v_z=1601 gridpoints in the region v_i=[-v_max, v_max], with v_max=6 v_th and i=x,y,z, while the boundary conditions assume that the distribution function is set equal to zero for |v_j|>v_max.

Since no temperature anisotropies are present, the evolution of the total temperature and of the temperatures along each direction is trivial: the total temperature is preserved. On the other hand, the evolution of the entropy variation ΔS = S(t) - S(0) (S = -∫ f ln f d^3v) gives information about the approach towards equilibrium.
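A minimal sketch of how such a separable initial condition can be assembled on the stated grid (in thermal-speed units); the distorted profile f̂_e is replaced here by a placeholder Maxwellian, since reproducing the actual Vlasov-Poisson output is beyond this illustration:

```python
import numpy as np

vmax, Nxy, Nz = 6.0, 51, 1601                  # grid of the first case study
vx = np.linspace(-vmax, vmax, Nxy)
vy = np.linspace(-vmax, vmax, Nxy)
vz = np.linspace(-vmax, vmax, Nz)

f_M = lambda v: np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)
f_hat = f_M(vz)                                # placeholder for the distorted profile
f0 = f_M(vx)[:, None, None] * f_M(vy)[None, :, None] * f_hat[None, None, :]
print(f0.shape)                                # (51, 51, 1601)
```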
The time history of ΔS obtained with the nonlinear Landau operator (black) and with the linearized Landau operator (red) is shown in Fig. <ref>. Since the initial condition and the equilibrium Maxwellian reached under the effect of collisions are the same for both operators, the total entropy growth ΔS is the same in the two cases. In other words, the free energy contained in the out-of-equilibrium structures of the initial VDF produces the same entropy growth in absolute terms, but the growth occurs on different time scales in the two cases. Indeed, in the nonlinear operator case the entropy grows much faster (1-2 ν_SH^-1) than in the linearized operator case (4-5 ν_SH^-1).

To quantify the presence of several characteristic times, we perform the multi-exponential fit of Eq. (<ref>) on the entropy growth curves reported in Fig. <ref>. The analysis of the growth recovered with the fully nonlinear Landau operator indicates that three different characteristic times are present in the entropy growth:

* τ^nl_1 = 3.5 · 10^-3 ν_SH^-1 → ΔS^nl_1/ΔS_tot = 13%
* τ^nl_2 = 1.3 · 10^-1 ν_SH^-1 → ΔS^nl_2/ΔS_tot = 42%
* τ^nl_3 = 4.9 · 10^-1 ν_SH^-1 → ΔS^nl_3/ΔS_tot = 40%

As discussed in Paper I, the presence of several characteristic times is associated with the dissipation of different velocity space structures. Fig. <ref> reports f(v_x=v_y=0,v_z) as a function of v_z at the time instants t=T_nl,1=τ^nl_1 (a), t=T_nl,2=τ^nl_1+τ^nl_2 (b), t=T_nl,3=τ^nl_1+τ^nl_2+τ^nl_3 (c) and t=t_fin (d). These time instants are displayed in Fig. <ref> with blue diamonds. After the time t=T_nl,1=τ^nl_1 (a), the steep spikes visible in Fig. <ref>(b) are almost completely smoothed out; then, at time t=T_nl,2=τ^nl_1+τ^nl_2 (b), the remaining plateau region is significantly rounded off, only a gentle shoulder being left; finally, after a time t=T_nl,3=τ^nl_1+τ^nl_2+τ^nl_3 (c), the collisional relaxation to equilibrium is completed for the most part. A small percentage, ≃5% of the total entropy growth, is finally recovered at larger times and corresponds to the final approach to the equilibrium Maxwellian (d).

By performing the same analysis for the linearized Landau operator case, three characteristic times are also recovered:

* τ^lin_1 = 1.1 · 10^-2 ν_SH^-1 → ΔS^lin_1/ΔS_tot = 11%
* τ^lin_2 = 2.7 · 10^-1 ν_SH^-1 → ΔS^lin_2/ΔS_tot = 23%
* τ^lin_3 = 1.5 ν_SH^-1 → ΔS^lin_3/ΔS_tot = 63%

These characteristic times are systematically larger than the times recovered in the nonlinear operator case. The shape of the distribution function after each characteristic time (not shown here) is quite similar to the shape recovered in the fully nonlinear operator evolution. The process of dissipation of fine velocity structures is, hence, qualitatively similar whether one adopts the nonlinear or the linearized operator. However, significant quantitative differences occur: similar profiles in velocity space are reached at very different times, the characteristic times recovered in the linearized case being significantly larger (by a factor of about 4-5) than the times recovered in the nonlinear operator case.

Therefore, from a qualitative point of view, both operators are able to recover the fact that fine velocity space structures are dissipated faster as their scale gets finer (i.e. as the velocity space gradients become stronger). However, fine velocity structures are dissipated more slowly when the collisional operator is linearized. Moreover, it is also worth mentioning that the amount of entropy growth associated with each characteristic time slightly changes when nonlinearities are ignored. For example, in the case of the fully nonlinear Landau operator, about 55% of the total entropy growth is produced when the initial spikes and the successive flat plateau are dissipated. On the other hand, in the linearized operator case, only about 30% of the total entropy growth is associated with these processes.

A minimal numerical sketch of the multi-exponential decomposition of Eq. (<ref>), here applied to synthetic data built with the nonlinear-case parameters listed above (the noise level and the initial guesses are arbitrary, illustrative choices):

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_s(t, *p):
    """Model: Delta S(t) = sum_i dS_i (1 - exp(-t / tau_i)); p packs
    (dS_1, tau_1, dS_2, tau_2, ...) for K amplitude/time pairs."""
    pairs = np.reshape(p, (-1, 2))
    return sum(dS * (1.0 - np.exp(-t / tau)) for dS, tau in pairs)

t = np.logspace(-4, np.log10(5.0), 400)            # time in units of 1/nu_SH
truth = (0.13, 3.5e-3, 0.42, 0.13, 0.40, 0.49)     # (dS_i, tau_i), nonlinear case
s = delta_s(t, *truth) + 1e-3 * np.random.randn(t.size)

guess = (0.1, 1e-3, 0.1, 1e-1, 0.1, 1.0)
popt, _ = curve_fit(delta_s, t, s, p0=guess, maxfev=20000)
print(np.reshape(popt, (-1, 2)))                   # recovered (dS_i, tau_i)
```

§.§ Second case study: kinetic turbulence and collisions

To support the scenario described in the previous section, here we focus on a second initial condition. This initial VDF has been selected from a 2D-3V hybrid Vlasov-Maxwell numerical simulation of decaying turbulence in solar wind-like conditions <cit.>. The hybrid Vlasov-Maxwell simulation, whose resolution is N_x=N_y=512 and N_v_x=N_v_y=N_v_z=51, is initialized with an out-of-plane background magnetic field. Then, magnetic and bulk speed perturbations at large, MHD scales are introduced. As a result of nonlinear couplings among the fluctuations, the energy cascades towards smaller, kinetic scales. Hence, the particle VDF strongly departs from thermal equilibrium due to the presence of kinetic turbulence and exhibits a potato-like shape similar to solar wind in-situ observations <cit.>. The omni-directional power spectral densities of the magnetic (black) and electric energy, evaluated at the time instant where the turbulent activity is maximum, are reported in Fig. <ref>(a): clearly, a broadband spectrum is recovered. The iso-contour of the initial VDF, selected where non-Maxwellian effects are strongest <cit.>, is shown in Fig. <ref>(b). The VDF exhibits a hole-like structure in the upper part of the box and a thin ring-like structure in the bottom part of the box, while the VDF is clearly elongated in the v_z direction. Compared to the first case study, the current distribution function reflects the presence of a spectrum of excited wavenumbers, and it contains several kinds of distortions, not only concentrated around the resonant speed as in the previous case.

In Paper I we showed that, once velocity space gradients are artificially smoothed out through a fitting procedure, the presence of several characteristic times associated with the dissipation of fine velocity structures is definitively lost. Here, we instead compare the evolution towards equilibrium of this initial condition under the effect of the fully nonlinear Landau operator [Eq. (<ref>)] and the linearized Landau operator [Eq. (<ref>)]. The velocity domain is discretized with N_v_x=N_v_y=N_v_z=51 points. Note that, compared to the first case study, the resolution is smaller here and cannot be increased due to the computational cost of the hybrid Vlasov-Maxwell code. Therefore, the quite small velocity scales recovered in the first case study (spikes around the resonant speed, etc.) are not present in this case.

Figure <ref> reports the entropy growth obtained with the fully nonlinear Landau operator (black) and with its linearized version (red). The entropy growth is, also here, slower in the linearized operator case than in the fully nonlinear operator case. To quantify the different evolution observed in Fig. <ref>, we perform the multi-exponential fit <cit.> described in Eq. (<ref>).
The analysis performed in the case of the fully nonlinear operator indicates that the entropy grows with two characteristic times:

* τ^nl_1 = 0.20 ν_SH^-1 → ΔS^nl_1/ΔS_tot = 26%
* τ^nl_2 = 0.82 ν_SH^-1 → ΔS^nl_2/ΔS_tot = 74%

As in the case described in the previous section, each characteristic time is associated with the dissipation of a different out-of-equilibrium feature. Figure <ref> reports the iso-surface of the particle VDF at the time t=T_nl,1=τ^nl_1 (left) and at the time t=T_nl,2=τ^nl_1+τ^nl_2 (right). At t=T_nl,1, the initial hole-like structure and the slight ring-like signature have been significantly smoothed out. Then, at t=T_nl,2, the approach towards equilibrium is almost complete, the VDF shape being almost Maxwellian. Only a slight temperature anisotropy, which is finally thermalized in the late stage of the simulation, is still recovered. The approach towards equilibrium confirms that small-scale gradients are dissipated quite fast, while the final approach towards equilibrium, which also concerns the thermalization of the temperature anisotropy, occurs on larger characteristic times.

In the linearized operator case, two characteristic times are also recovered:

* τ^lin_1 = 0.54 ν_SH^-1 → ΔS^lin_1/ΔS_tot = 16%
* τ^lin_2 = 2.60 ν_SH^-1 → ΔS^lin_2/ΔS_tot = 84%

As described for the first case study, these characteristic times are systematically larger (by a factor of about three) than the times recovered in the fully nonlinear operator case. The amount of entropy growth associated with each characteristic time is also different: a smaller amount of entropy growth is associated with the fastest characteristic time when nonlinearities are neglected. The results described here confirm the insights of the previous section. The evolution obtained with the two operators is qualitatively similar: in both cases, several characteristic times are recovered in the entropy growth, and these characteristic times are associated with the dissipation of different velocity space structures. However, the observed evolutions are different from a quantitative point of view: the recovered characteristic times are much different in the two cases, being significantly larger in the case of the linearized operator.

As described in Paper I, much smaller characteristic times are in general recovered in the first case study than in the second one, probably because the numerical resolution in the second case study is about 30 times lower than in the first case study, and the sharp velocity gradients present in the first case study [Fig. <ref>(c)] are not accessible in the second case study [Fig. <ref>(b)]. The presence of finer velocity structures in the first case compared to the second case introduces smaller characteristic times.

§ CONCLUSION

To summarize, here we discussed in detail the importance of considering collisions in the description of weakly collisional plasmas. Collisions are enhanced by the presence of fine velocity space structures, such as the ones naturally generated by kinetic turbulence in the solar wind; therefore, they could play a role in the conversion of the VDF free energy into heat by means of irreversible processes.

In particular, we focused on the importance of retaining nonlinearities in the collisional operator by performing a comparative analysis of the collisional relaxation of an out-of-equilibrium initial VDF.
Collisions have been modeled by means of two collisional operators: the fully nonlinear Landau operator and a linearized Landau operator. Due to the demanding computational cost of the collisional integral, we restricted ourselves to the collisional relaxation in a force-free homogeneous plasma. Our results must clearly be extended to the more general, self-consistent case; however, performing a high-resolution collisional simulation cannot currently be afforded.

The case studies analyzed here indicate that both nonlinear and linearized collisional operators are able to detect the presence of several time scales associated with the collisional dissipation of small velocity scales in the particle VDF. A possible explanation of this behavior is that the linearized operator also involves gradients in its structure, while it does not describe the "second-order" gradients related to the Fokker-Planck coefficients of the operator; therefore it is still able to recover the presence of several characteristic times. The general message given in Paper I, namely that the presence of sharp velocity space gradients speeds up the entropy growth of the system, is confirmed also in the case of the linearized operator: indeed, the fastest recovered characteristic times are significantly smaller than the common Spitzer-Harm collisional time <cit.>.

However, we would point out that the importance of the fine velocity structures is weakened if nonlinearities are ignored in the collisional operator. In the case of a linearized collisional operator, slower characteristic times are systematically recovered with respect to the nonlinear operator case. This indicates that, when one neglects the nonlinearities of the collisional integral, fine velocity structures are dissipated more slowly. Therefore, to properly address the role of collisions and to attribute to them the correct relevance with respect to other physical processes <cit.>, nonlinearities should be explicitly considered.

§ ACKNOWLEDGEMENTS

Dr. O. Pezzi would like to sincerely thank Prof. P. Veltri, Dr. F. Valentini and Dr. D. Perrone for the fruitful discussions which significantly contributed to the construction of this work. Dr. O. Pezzi would also like to thank the anonymous Referees for their suggestions, which improved the quality of this work. The numerical simulations discussed here have been run on the Fermi parallel machine at Cineca (Italy), within the Iscra-C project IsC26-COLTURBO, and on the Newton parallel machine at the University of Calabria (Rende, Italy). This work has been supported by the Agenzia Spaziale Italiana under the Contract No. ASI-INAF 2015-039-R.O "Missione M4 di ESA: Partecipazione Italiana alla fase di assessment della missione THOR".

[Akhiezer (1986)]akhiezer86 Akhiezer, A.I., Akhiezer, I.A., Polovin, R.V., Sitenko, A.G. & Stepanov, K.N. 1986 Plasma Electrodynamics (Vol. 1) (Pergamon Press, Oxford, 1975).[Alexandrova (2008)]AlexandrovaEA08 Alexandrova, O., Carbone, V., Veltri, P. & Sorriso-Valvo, L. 2008 Small-scale energy cascade of the solar wind turbulence. The Astrophysical Journal 674, 1153.[Anderegg (2009a)]anderegg09a Anderegg, F., Driscoll, C.F., Dubin, D.H.E. & O'Neil, T.M. 2009 Measurement of correlation-enhanced collision rates. Phys. Rev. Lett. 102(9), 095001.[Anderegg (2009b)]anderegg09b Anderegg, F., Driscoll, C.F., Dubin, D.H.E., O'Neil, T.M. & Valentini, F. 2009 Electron acoustic waves in pure ion plasmas. Phys. Plasmas 16(5), 055705.[Anderson & O'Neil (2007a)]anderson07a Anderson, M. W. & O'Neil, T. M.
2007 (a) Eigenfunctions and eigenvalues of the Dougherty collision operator. Phys.Plasmas 14, 052103:1–4.[Anderson & O'Neil (2007b)]anderson07b Anderson, M. W. & O'Neil, T. M. 2007 (b) Collisional damping of plasma waves on a pure electron plasma column. Phys.Plasmas 14, 112110:1–8.[Balescu(1960)]balescu60 Balescu, R. 1960 Irreversible processes in ionized gases. Phys. Fluids 3.1, 52.[Banón Navarro (2016)]banonnavarro16 Banón Navarro, A., Teaca, B., Told, D., Groselj, D., Crandall, P., & Jenko, F. 2016 Structure of Plasma Heatingin Gyrokinetic Alfvénic Turbulence. Phys. Rev. Lett. 117, 245101 (2016).[Belmont (2008)]belmont08Belmont, G., Mottez, F., Chust, T. & Hess, S. 2008 Existence of non-Landau solutions for Langmuir waves. Phys.Plasmas 15(5), 052310. [Bernstein (1957)]bgk57 Bernstein, I.B., Greene, J.M. & Kruskal, M.D. 1957 Exact nonlinear plasma oscillations. Phys. Rev. 108(3), 546.[Bhatnagar (1954)]bgk54 Bhatnagar, P. L., Gross, E. P. & Krook, M. 1954 A model for collision processes in gases. I. Small amplitude processes incharged and neutral one-component systems. Phys. Rev. 94, 511–525.[Bobylev & Potapenko(2013)]bobylev13 Bobylev, A.V. & Potapenko, I.F. 2013 Monte Carlo methods and their analysis for Coulomb collisions in multicomponentplasmas. J. Comp. Phys. 246, 123–144.[Bruno & Carbone(2013)]bruno13Bruno, R. & Carbone, V. 2013 The solar wind as a turbulence laboratory. Living Reviews in Solar Physics 10,1–208.[Camporeale & Burgess(2011)]camporeale11Camporeale, E. & Burgess, D. 2011 The dissipation of solar wind turbulent fluctuations at electron scales. TheAstrophysical Journal 730, 114.[Chandrasekhar(1956)]chandrasekhar56Chandrasekhar, S. 1956 On force-free magnetic fields. Proceedings of the National Academy of Sciences 42, 273. [Chapman & Ferraro(1930)]chapman30Chapman, S., & Ferraro, V.C.A. 1930 A new theory of magnetic storms. Nature 126.3169, 129-130.[Chapman & Ferraro(1931)]chapman31Chapman, S. & Ferraro, V.C.A. 1931 A new theory of magnetic storms. Terrestrial Magnetism and AtmosphericElectricity 36.2, 77-97.[Chust (2009)]chust09T. Chust, G. Belmont, F. Mottez and S. Hess 2009 Landau and non-Landau linear damping: Physics of the dissipation. Phys. Plasmas 16(9), 092104. [Cranmer (2009)]cranmer09 Cranmer, S.R., Matthaeus, W.H., Breech, B.A. & Kasper, J.C. 2009 Empirical Constraints on proton and electronheating in the fast solar wind. The Astrophysical Journal 702, 1604.[Curtis (1970)]curtis70 Curtis, L.J., Berry, H.G. & Bromander, J. 1970 Analysis of multi-exponential decay curves. Physica Scripta 2(4-5), 216.[Daughton (2009)]daughton09 Daughton, W., Roytershteyn, V., Albright, B. J., Karimabadi, H., Yin, L., & Bowers, K. J. 2009 Transition fromcollisional to kinetic regimes in large-scale reconnection layers. Physical review letters 103(6), 065004.[Dobrowolny (1980a)]dobrowolny80a Dobrowolny, M., Mangeney, A. & Veltri, P. 1980 Fully developed anisotropic hydromagnetic turbulence in interplanetaryspace. Phys. Rev. Lett. 45, 144.[Dobrowolny (1980b)]dobrowolny80bDobrowolny, M., Mangeney, A. & Veltri, P. 1980 Properties of magnetohydrodynamic turbulence in the solar wind. Solarand Interplanetary Dynamics (Springer, Netherlands).[Dougherty(1964)]dougherty64 Dougherty, J. K. 1964 Model Fokker-Planck Equation for a Plasma and Its Solution. Phys. Fluids 7, 113–133.[Dougherty & Watson (1967)]dougherty67 Dougherty, J. K. & Watson, S. R. 1967 Model Fokker-Planck Equations: Part 2. The equation for a multicomponent plasma. J. Plasma. Phys. 
1, 317–326.[Elsässer(1950)]elsasser50Elsässer, W.M. 1950 The hydromagnetic equations. Phys. Rev. 79, 183.[Escande (2015)]escande15Escande, D.F., Elskens,Y. & Doveil, F. 2015 Uniform derivation of Coulomb collisional transport thanks to Debyeshielding. J. Plasma Phys 81(1), 305810101.[Filbet & Pareschi (2002)]filbet02Filbet, F. & Pareschi, L. 2002 A numerical method for the accurate solution of the Fokker–Planck–Landau equation in the nonhomogeneous case. J. Comput. Phys. 179, 1–26.[Franci (2015)]franci15 Franci, L., Verdini, A., Matteini, L., Landi, S. & Hellinger, P. 2015 Solar wind turbulence from MHD to sub-ion scales:high resolution hybrid simulations. The Astrophysical Journal Letters 804, L39.[Frisch(1995)]frisch95 Frisch, U.P. 1995 Turbulence: the legacy of A.N. Kolmogorov, (Cambridge Univ. Press, 1995).[Gary(1993)]gary93 Gary, S.P. 1993 Theory of space plasma microinstabilities, (New York, Cambridge Univ. Press, 1993).[Gary (2010)]GaryEA10Gary, S.P., Saito, S. & Narita, Y. 2010 Whistler turbulence wavevector anisotropies: Particle-in-cell simulations. The Astrophysical Journal 716, 1332.[Goldstein (1996)]goldstein96Goldstein, B.E., Neugebauer, M., Phillips, J.L., Bame, S., Gosling, J.T., McComas, D., Wang, Y.M., Sheeley, N.R. &Suess, S.T. 1996 Ulysses plasma parameters: latitudinal, radial and temporal variations. Astron. Astrophys. 316,296.[Greco (2012)]greco12Greco, A., Valentini, F., Servidio, S. & Matthaeus, W.H. 2012 Inhomogeneous kinetic effects related to intermittentmagnetic discontinuities. Physics Review E 86, 066405.[He (2015)]he15 He, J., Tu, C., Marsch, E., Chen, C.H.K., Wang, L., Pei, Z., Zhang, L., Salem C.S. & Bale, S.D. 2015 Proton heating insolar wind compressible turbulence with collisions between counter-propagating waves. The Astrophysical Journal Letters83, L30.[Hernandez & Marsch(1985)]hernandez85Hernandez, R. & Marsch, E. 1985 Collisional time scales for temperature and velocity exchange between driftingMaxwellians. J. Geop. Research 90(A11), 11062–11066.[Hirvijoki (2016)]hirvijoki16 Hirvijoki, E., Lingam, M., Pfefferlé, D, Comisso, L., Candy, J. & Bhattacharjee, A. 2016 Fluid moments of the nonlinear Landaucollision operator Physics of Plasmas 23, 080701.[Holloway & Dorning(1991)]holloway91Holloway, J.P. & Dorning, J.J. 1991 Undamped plasma waves. Phys. Rev. A 44(6), 3856.[Howes & Nielson(2013)]howes13 Howes, G.G. & Nielson, K.D. 2013 Alfvén wave collisions, the fundamental building block of plasma turbulence. I.Asymptotic solution. Phys. Plasmas 20, 072302.[Iroshnikov(1964)]iroshnikov64Iroshnikov, R.S. 1964 Turbulence of a conducting fluid in a strong magnetic field. Soviet Astron. 7, 566.[Johnston (2009)]johnston09Johnston, T.W., Tyshetskiy, Y., Ghizzo, A. & Bertrand, P. 2009 Persistent subplasma-frequency kinetic electrostaticelectron nonlinear waves. Phys. Plasmas 16(4), 042105.[Kabantsev (2006)]kabantsev06Kabantsev, A.A., Valentini, F. & Driscoll, C.F. 2006 Experimental Investigation of Electron-Acoustic Waves in ElectronPlasmas. Non-Neutral Plasma Physics VI 862, 13–18.[Kasper (2008)]kasper08 Kasper, J.C., Lazarus, A.J., & Gary, S.P. 2008 Wind/SWE observations of firehose constraint on solar wind protontemperature anisotropy. Geophys. Res. Lett. 29, 20.[Kraichnan(1965)]kraichnan65Kraichnan, R.H. 1965 Inertial‐range spectrum of hydromagnetic turbulence. Phys. Fluids 8, 1385–1387.[Landau(1936)]landau36Landau, L. D. 1936 The Transport Equation in the Case of the Coulomb Interaction. In Collected papers of L. D. Landau,pp. 163–170. 
Pergamon Press.[Lenard(1960)]lenard60Lenard A. 1960 On Bogoliubov's kinetic equation for a spatially homogeneous plasma. Annals Physics 10.3, 390.[Lesur (2014)]lesur14Lesur, M., Diamond, P.H. & Kosuga, Y. 2014 Nonlinear current-driven ion-acoustic instability driven by phase-spacestructure. Phys. Plasmas 21, 112307.[Livi & Marsch (1986)]livi86Livi, S. & Marsch, E. 1986 Comparison of the Bhatnagar-Gross-Krook approximation with the exact Coulomb collision operator.Phys. Rev. A 34, 533–540.[Lu (2005)]lu05 Lu, Q.M., Wang, D.Y. & Wang, S. 2005 Generation mechanism of electrostatic solitary structures in the Earth’s auroral region. J. Geophys. Res. 10, A03223.[Marino (2008)]marino08 Marino, R., Sorriso-Valvo, L., Carbone, V., Noullez, A., Bruno, R. & Bavassano, B. 2008 Heating the Solar Wind by aMagnetohydrodynamic Turbulent Energy Cascade. The Astrophysical Journal Letters 667, L71–L74. [Marsch (1982)]marsch82 Marsch, E., Muhlhauser, K.H., Schwenn, R., Rosenbauer, H., Pilipp, W. & Neubauer, F.M. 1982 Solar wind protons:Three-dimensional velocity distributions and derived plasma parameters measured between 0.3 and 1 AU. J. Geop.Research 87, 52. [Marsch(2006)]marsch06 Marsch, E. 2006 Kinetic physics of the solar corona and solar wind. Living Rev. Sol. Phys. 3, 1–100. [Maruca (2011)]maruca11Maruca, B.A., Kasper, J.C. & Bale, S.D. 2011 What are the relative roles of heating and cooling in generatingsolar wind temperature anisotropies? Phys. Rev. Lett. 107, 201101.[Maruca (2013)]maruca13Maruca, B.A., Bale, S.D., Sorriso-Valvo, L., Kasper, J.C. & Stevens, M.L. 2013 Collisional thermalization of hydrogen andhelium in solar-wind plasma. Phys. Rev. Lett. 111(24), 241101.[Matthaeus (1999)]MatthaeusEA99Matthaeus, W.H., Zank, G.P., Smith, C.W., & Oughton, S. 1999 Turbulence, Spatial Transport, and Heating of the SolarWind. Phys. Rev. Lett. 82, 3444.[Matthaeus (2005)]matthaeus05 Matthaeus, W.H., Dasso, S., Weygand, J.M., Milano, L.J., Smith, C.W. & Kivelson, M.G. 2005 Spatial Correlation of Solar-WindTurbulence from Two-Point Measurements. Phys. Rev. Lett. 95(23), 231101.[Matthaeus (2014)]matthaeus14Matthaeus, W.H., Oughton, S., Osman, K.T., Servidio, S., Wan, M., Gary, S.P., Shay, M.A., Valentini, F., Roytershteyn, V. &Karimabadi, H. 2014 Nonlinear and linear timescales near kinetic scales in solar wind turbulence. The Astrophys. J. 790, 155.[Matthaeus (2015)]matthaeus15Matthaeus, W.H., Wan, M., Servidio, S., Greco, A., Osman, K.T., Oughton, S. & Dmitruk, P. 2015 Intermittency, nonlinear dynamics anddissipation in the solar wind and astrophysical plasmas Phil. Trans. Royal Society of London A: Mathematical, Physical and EngineeringSciences 373, 20140154.[Moffatt(1978)]moffatt78 Moffatt, H.K. 1978 Field Generation in Electrically Conducting Fluids (Cambridge University Press)[Ng & Bhattacharjee(1996)]NgBhattacharjeeNg, C.S., & Bhattacharjee, A. 1996 Interaction of shear-Alfvén wave packets: implication for weak magnetohydrodynamicturbulence in astrophysical plasmas. The Astrophysical Journal 465, 845.[Parashar (2009)]ParasharPP09Parashar, T.N., Shay, M.A., Cassak, P.A. & Matthaeus, W.H. 2009 Kinetic dissipation and anisotropic heating in a turbulentcollisionless plasma. Physics of Plasmas 16, 032310.[Parker(1979)]parker79Parker, E.N. 1979 Cosmical magnetic fields: Their origin and their activity (Oxford University Press)[Perrone (2012)]perrone12Perrone, D., Valentini, F., Servidio, S., Dalena, S. & Veltri, P. 2012 Vlasov simulations of multi-ion plasmaturbulence in the solar wind. 
The Astrophysical Journal 762, 99.[Pezzi (2013)]pezzi13Pezzi, O., Valentini, F., Perrone, D. & Veltri, P. 2013 Eulerian simulations of collisional effects on electrostatic plasmawaves. Phys. Plasmas 20, 092111:1–11. [Pezzi (2014a)]pezzi14aPezzi, O., Valentini, F., Perrone, D. & Veltri, P. 2014 (a) Erratum: “Eulerian simulations of collisional effects onelectrostatic plasma waves” [Phys. Plasmas 20, 092111 (2013)]. Phys. Plasmas 21, 019901.[Pezzi (2015a)]pezzi15aPezzi, O., Valentini, F. & Veltri, P. 2015(a) Collisional Relaxation: Landau versus Dougherty operator. J. PlasmaPhys. 81, 305810107.[Pezzi (2015b)]pezzi15bPezzi, O., Valentini, F. & Veltri, P. 2015(b) Nonlinear regime of electrostatic waves propagation in presence of electron-electron collisions. Phys. Plasmas 22, 042112.[Pezzi (2016a)]pezzi16aPezzi, O., Valentini, F. & Veltri, P. 2016(a) Collisional Relaxation of Fine Velocity Structures in Plasmas. Phys.Rev. Lett. 116, 145001.[Pezzi (2016b)]pezzi16bPezzi, O., Parashar, T.N., Servidio, S., Valentini, F., Vásconez, C.L., Yang, Y., Malara, F., Matthaeus, W.H. & Veltri,P. 2016(b) Revisiting a classic: the Parker-Moffatt problem The Astrophysical Journal in press.[Sahraoui (2007)]SahraouiEA07Sahraoui, F., Galtier, S., & Belmont, G. 2007 On waves in incompressible Hall magnetohydrodynamics. J. Plasma Phys.73, 723.[Sahraoui (2009)]sahraoui09Sahraoui, F., Goldstein, M.E., Robert, P. & Khotyaintsev, Y.V. 2009 Evidence of a Cascade and Dissipation of Solar-WindTurbulence at the Electron Gyroscale. Phys. Rev. Lett.102, 231102.[Servidio (2012)]servidio12Servidio, S., Valentini, F., Califano, F. & Veltri, P. 2012 Local kinetic effects in two-dimensional plasma turbulence.Phys. Rev. Lett. 108, 045001.[Servidio (2015)]servidio15Servidio, S., Valentini, F., Perrone, D., Greco, A., Califano, F., Matthaeus W.H., & Veltri, P. 2015 A kinetic model ofplasma turbulence. Journal of Plasma Physics 81, 328510107.[Sorriso-Valvo (2007)]sorriso07 Sorriso-Valvo, L., Marino, R., Carbone, V., Noullez, A., Lepreti, F., Veltri, P., Bruno, R., Bavassano, B. & Pietropaolo,E. 2007 Observation of Inertial Energy Cascade in Interplanetary Space Plasma. Phys. Rev. Lett. 99, 115001.[Spitzer(1956)]spitzer56Spitzer Jr, L. 1956 Physics of Fully Ionized Gases. Interscience Publishers.[Tigik (2016)]tigik16 Tigik, S.F., Ziebell, L.F., Yoon, P.H. & Kontar, E.P. 2016 Two-dimensional time evolution of beam-plasma instability in thepresence of binary collisions. Astronomy & Astrophysics 586, A19.[Tokar & Gary(1984)]tokar84 Tokar, R.L. & Gary, S.P. 1984 Electrostatic hiss and the beam driven electron acoustic instability in the dayside polar cusp. Geophys. Res. Lett. 11, 1180-1183.[Vaivads (2016)]vaivads16Vaivads, A., Retinó, A., Soucek, J., Khotyaintsev, Yu.V., Valentini, F., Escoubet, C.P.2016 TurbulenceHeating ObserveR - satellite mission proposal. J. Plasma Physics 82, 905820501.[Valentini (2006)]valentini06Valentini, F., O’Neil, T.M. & Dubin, D.H.E. 2006 Excitation of nonlinear electron acoustic waves. Phys. Plasmas 13(5), 052303.[Valentini (2007)]valentini07Valentini, F., Travnicek, P., Califano, F., Hellinger, P. & Mangeney, A. 2007 A hybrid-Vlasov model based on the currentadvance method for the simulation of collisionless magnetized plasma. Journal of Computational Physics 225, 753.[Valentini (2011)]valentini11Valentini, F., Califano, F., Perrone, D., Pegoraro, F. & Veltri, P. 2011 New ion-wave path in the energy cascade. Phys. Rev. Lett. 
106, 165002.[Valentini (2012)]valentini12Valentini, F., Perrone, D., Califano, F., Pegoraro, F., Veltri, P., Morrison, P.J. & O'Neil, T.M.. 2012 Undampedelectrostatic plasma waves. Phys. Plasmas 20, 034701.[Valentini (2014)]valentini14Valentini, F., Servidio, S., Perrone, D., Califano, F., Matthaeus, W.H. & Veltri, P.. 2014 Hybrid Vlasov-Maxwellsimulations of two-dimensional turbulence in plasmas. Phys. Plasmas 21, 082307.[Valentini (2016)]valentini16 Valentini, F., Perrone, D., Stabile, S., Pezzi, O., Servidio, S., De Marco, R., ... & Consolini, G. 2016 Differentialkinetic dynamics and heating of ions in the turbulent solar wind. New Journal of Physics 18(12), 125001.[Verdini (2009)]verdini09 Verdini, A., Velli, M., & Buchlin, E. 2009 Turbulence in the sub-Alfvénic solar wind driven by reflection oflow-frequency Alfvén waves. The Astrophys. Journal Letters 700, L39.[Villani(2002)]villani02 Villani, C. 2002 Handbook of Mathematical Fluid Dynamics. A review of mathematical topics in collisionalkinetic theory 1, 71–305.[Watanabe & Taniuti(1977)]watanabe77 Watanabe, K. & Taniuti, T. 1977 Electron-acoustic mode in a plasma of two-temperature electrons. Journal of the Physical Societyof Japan 43, 1819.
{ "authors": [ "O. Pezzi" ], "categories": [ "physics.plasm-ph", "astro-ph.SR", "physics.space-ph" ], "primary_category": "physics.plasm-ph", "published": "20170426090747", "title": "Solar wind collisional heating" }
The wide-area XMM-XXL X-ray survey is used to explore the fraction of obscured AGN at high accretion luminosities, L_X(2-10 keV) ≳ 10^44 erg s^-1, and out to redshift z≈1.5. The sample covers an area of about 14 deg^2 and provides constraints on the space density of powerful AGN over a wide range of neutral hydrogen column densities extending beyond the Compton-thick limit, N_H ≈ 10^24 cm^-2. The fraction of obscured Compton-thin (N_H=10^22-10^24 cm^-2) AGN is estimated to be ≈0.35 for luminosities L_X(2-10 keV) > 10^44 erg s^-1, independent of redshift. For less luminous sources the fraction of obscured Compton-thin AGN increases from 0.45±0.10 at z=0.25 to 0.75±0.05 at z=1.25. Studies that select AGN in the infrared via template fits to the observed Spectral Energy Distribution of extragalactic sources estimate space densities at high accretion luminosities consistent with the XMM-XXL constraints. There is no evidence for a large population of AGN (e.g. heavily obscured) identified in the infrared and missed at X-ray wavelengths. We further explore the mid-infrared colours of XMM-XXL AGN as a function of accretion luminosity, column density and redshift. The fraction of XMM-XXL sources that lie within the mid-infrared colour wedges defined in the literature to select AGN is primarily a function of redshift. This fraction increases from about 20-30% at z=0.25 to about 50-70% at z=1.5.

galaxies: active, galaxies: Seyfert, quasars: general, X-rays: diffuse background

§ INTRODUCTION

It is now well established that the dominant fraction of the growth of supermassive black holes at the centres of galaxies is taking place behind dense dust and gas obscuring clouds. The black hole mass density estimated by integrating the luminosity function of unobscured UV/optically-selected QSOs, for example, falls short of the local black hole mass function <cit.>. Under the assumption that the dominant mode of black hole growth is accretion, this implies large numbers of obscured Active Galactic Nuclei (AGN) that are missing from UV/optical surveys. This is also supported by studies of the composition of the diffuse X-ray background radiation (XRB), which represents the integrated emission of AGN over cosmic time. A large fraction of obscured AGN, including some deeply buried ones, are needed to synthesise the observed XRB spectrum in the energy interval ≈2-100 keV <cit.>. Observational constraints on the distribution of the obscuration level of AGN are therefore essential to understand in detail the accretion history of the Universe and also to compile unbiased AGN samples to explore the relation between black hole growth and galaxy evolution <cit.>.

X-ray spectroscopy provides a direct measure of the line-of-sight gas obscuration of AGN, parametrised by the equivalent column density of neutral hydrogen, N_H. As a result, high-energy observations have been used extensively to estimate the fraction of obscured AGN as a function of redshift and accretion luminosity. Follow-up observations of local AGN samples selected in the mid-IR <cit.>, hard X-rays <cit.> or via nuclear optical emission lines <cit.> measure obscured (N_H>10^22 cm^-2) AGN fractions of ≈60%.
These studies also find a population of deeply buried sources with hydrogen column densities above the Thomson scattering limit (N_H ≳ 10^24 cm^-2; Compton-thick), which represent ≈30-50% of the overall obscured population. Outside the local Universe XMM-Newton, Chandra <cit.> and more recently NuSTAR <cit.> surveys provide a rich dataset for studies of the obscuration distribution of AGN. Nuclear obscuration has been shown to be less common among powerful AGN <cit.>, possibly indicating the impact of black-hole related outflows, which sweep away gas and dust clouds in luminous sources <cit.>. At low accretion luminosities there is also evidence for a decrease in the incidence of obscuration among AGN <cit.>, a trend which may relate to the efficiency of cloud formation when the radiative output of accreting supermassive black holes drops below a certain limit. The obscured AGN fraction integrated over accretion luminosity may also increase with redshift, at least out to z≈1.5-2 <cit.>, a trend which may be linked to the overall increase of the gas and dust content of galaxies at earlier epochs <cit.>. The number of Compton-thick sources among AGN is still debated, with different studies finding discrepant results in terms of both space density and cosmological evolution <cit.>.

Current results on the obscuration distribution of AGN outside the local Universe are largely based on relatively small-area and deep X-ray surveys. Because of the form of the X-ray luminosity function, these datasets are dominated by low and moderate luminosity sources. Constraints at high accretion luminosities, close to and above the knee of the X-ray luminosity function <cit.>, are therefore affected by Poisson noise. There are, for example, only 14 AGN with L_X(2-10 keV) > 10^44 erg s^-1 and z < 1 in the deep Chandra survey source catalogues compiled by <cit.>. Because bright sources are rare, wide-area surveys are required for statistically large samples. Such luminous sources are important as they dominate the accretion history of the Universe and may also represent an interesting phase of black hole growth, when AGN feedback processes are sufficiently violent to have an impact on the evolution of their host galaxies <cit.>.

Despite the large number of deep and relatively small-area (≲2 deg^2) X-ray surveys carried out by Chandra or XMM-Newton <cit.>, there are currently only a few wide-area X-ray samples. These include the Chandra survey in the Boötes field <cit.>, the XMM-XXL <cit.> and the X-ray Survey in SDSS Stripe 82 <cit.>. In this paper we present results on the obscuration distribution of luminous [L_X(2-10 keV) ≳ 10^44 erg s^-1] AGN to z≈1.5 using one of the largest contiguous X-ray surveys to date, the XMM-XXL <cit.>. This dataset, like any X-ray selected sample, includes sources with a small number of photons.
This poses a challenge to studies that attempt to infer AGN parameters, such as the level of line-of-sight obscuration or the accretion luminosity, by modeling their X-ray spectra. In our work we address this issue by making realistic assumptions on the adopted X-ray spectral models and using robust statistical methods for unbiased estimates of X-ray spectral parameters and their associated uncertainties. We show that the XMM-XXL, despite its relatively shallow depth (typically 10 ks per XMM pointing), provides new measurements of the AGN space density at luminosities L_X(2-10 keV) ≳ 10^44 erg s^-1 over a wide range of line-of-sight hydrogen column densities extending above the Compton-thick limit. Surveys like the XMM-XXL, in terms of depth and area, provide an excellent complement to deep X-ray samples for characterising the statistical properties of AGN over wide luminosity and obscuration baselines. In the calculations that follow we adopt cosmological parameters H_0 = 70 km s^-1 Mpc^-1, Ω_M = 0.3, Ω_Λ = 0.7.

§ DATA ANALYSIS AND PRODUCTS

§.§ X-ray source catalogue

The X-ray source catalogue used in this paper is extracted from the equatorial field (α ≈ 2^h 16^m, δ ≈ -4° 52′) of the XMM-XXL survey <cit.> following the X-ray data reduction and analysis steps described in <cit.>. In the rest of this paper we refer to the equatorial subregion of the XMM-XXL presented by <cit.> as XMM-XXL-N. The most important details of the X-ray data analysis methodology are outlined below.

For the reduction of the XMM observations of the equatorial XXL field the XMM Science Analysis System (SAS) version 12 is used. Event files for the EPIC <cit.> PN and MOS detectors are produced using the epchain and emchain tasks of SAS, respectively. High particle background intervals associated with solar flares are filtered out to produce clean event files as well as X-ray images and exposure maps in five energy bands (0.5-8, full; 0.5-2, soft; 2-8, hard; 5-8, very hard; 7.5-12 keV, ultra-hard). X-ray sources are independently detected in each of the spectral bands above by applying a Poisson false-detection probability threshold of P < 4×10^-6. The XMM astrometric frame is refined using the eposcorr task of SAS and adopting as external astrometric reference the SDSS <cit.> DR8 catalogue <cit.>. The resulting positional accuracy of X-ray sources is about 1.5″ (1σ rms). The final X-ray source catalogue consists of 8,445 unique sources detected in at least one of the above spectral bands. In this paper we focus on the hard-band (2-8 keV) selected sample, which has a total of 2768 unique sources.

The sensitivity of the survey is estimated using methods described in <cit.> and <cit.>. Figure <ref> plots the XMM-XXL-N area curve as a function of EPIC PN count rate. This curve measures the total X-ray survey area that is sensitive to sources of a given count rate in the 2-8 keV (hard) band.

§.§ Optical and near-infrared photometry

<cit.> present optical identifications of the X-ray sources in the equatorial field of the XMM-XXL-N survey with the SDSS-DR8 photometric catalogue <cit.>. In this paper we cross-correlate the X-ray source positions with the deeper optical photometric catalogue (ugriz bands; AB limiting magnitude r ≈ 24.9 mag) of the Canada-France-Hawaii Telescope Lensing Survey <cit.>. The association of X-ray sources with counterparts was based on the likelihood ratio method <cit.>. Potential counterparts are searched for within a radius of 4.5″ from the X-ray centroid, i.e. out to a distance three times the 1σ X-ray positional rms uncertainty.
For each possible identification we measure the likelihood ratio, LR, between the probability that the source, at the given distance from the X-ray position and with the given optical magnitude, is the true counterpart and the probability that the source is a spurious alignment,

LR = q(m) f(r) / n(m),

where q(m) represents the magnitude distribution of the true optical counterparts at a given waveband and f(r) is the probability of finding the true counterpart at distance r from the X-ray centroid, given the typical positional uncertainties of the X-ray and optical catalogues. n(m) is the background density of galaxies with magnitude m at a given waveband. For each X-ray/optical source pair the LR is estimated separately in the ugriz optical bands of the CFHTLenS. The pair is then assigned the maximum LR of the 5 wavebands. We approximate f(r) by the normal distribution with standard deviation 1.5″. In this calculation we assume that the positional uncertainty of the optical catalogue is much smaller than that of the X-ray sources. The probability q(m) is estimated by subtracting from the magnitude distribution of all possible counterparts of all X-ray sources within a search radius of 4.5″ the expected magnitude distribution of background/foreground galaxies and stars. For the latter, we randomly place 4.5″-radius apertures within the survey area to construct the expected magnitude distribution of optical sources in random sight-lines.

We assess how secure the optical counterpart of an X-ray source is via the reliability parameter defined by <cit.>,

Rel = LR / [∑_j LR_j + (1-Q)],

where the summation is over all the potential counterparts of the X-ray source within the search radius. Q is the fraction of X-ray sources with identifications to the limiting magnitude of a given CFHTLenS waveband. We define as sample completeness the fraction of X-ray sources with counterparts above a given likelihood threshold, LR > LR_th. Sample reliability is then the mean Rel of all the counterparts with LR > LR_th. We choose the value LR_th that maximises the sum of the sample completeness and sample reliability. We adopt LR_th = 0.4 and find a total of 4673 CFHTLenS counterparts out of 5914 XMM-XXL-N X-ray sources that lie within the CFHTLenS footprint (≈80% identification rate), i.e. accounting for the geometry of the optical survey and masked regions associated with bright stars. In the case of the hard-band (2-8 keV) selected sample, which is the focus of this paper, there are 1778 CFHTLenS counterparts out of 2018 XMM-XXL-N X-ray sources that lie within the CFHTLenS footprint (≈88% identification rate). The expected spurious identification rate is about 5%. In the analysis that follows we use the 2018 XMM-XXL-N X-ray sources that overlap with the CFHTLenS footprint.
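The likelihood-ratio and reliability bookkeeping described above can be summarised in a minimal sketch, assuming q(m) and n(m) interpolators and per-source candidate lists have already been constructed (all function and variable names here are illustrative, not from the original pipeline):

```python
import numpy as np

SIGMA = 1.5  # 1-sigma X-ray positional uncertainty in arcsec

def f_r(r):
    """Probability density of the true counterpart at offset r (arcsec),
    modelled as a circular 2D Gaussian with standard deviation SIGMA."""
    return np.exp(-r**2 / (2.0 * SIGMA**2)) / (2.0 * np.pi * SIGMA**2)

def likelihood_ratio(r, mag, q_of_m, n_of_m):
    """LR = q(m) f(r) / n(m) for one candidate counterpart; q_of_m and
    n_of_m are callables for the true-counterpart magnitude distribution
    and the background source density."""
    return q_of_m(mag) * f_r(r) / n_of_m(mag)

def reliability(lr_values, Q):
    """Rel_i = LR_i / (sum_j LR_j + (1 - Q)) for the candidates of a
    single X-ray source; Q is the overall identification fraction."""
    lr_values = np.asarray(lr_values, dtype=float)
    return lr_values / (lr_values.sum() + (1.0 - Q))

def choose_threshold(best_lr, best_rel, thresholds):
    """Pick the LR_th maximising completeness + mean reliability;
    best_lr/best_rel hold the highest-LR counterpart of every source
    (thresholds are assumed to leave a non-empty selection)."""
    score = lambda t: (best_lr > t).mean() + best_rel[best_lr > t].mean()
    return max(thresholds, key=score)
```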
For the estimation of photometric redshifts (see below) we complement the CFHTLenS optical data with far-UV, near-infrared, and mid-infrared photometry available in the XMM-XXL-N field. The positions of the CFHTLenS counterparts are cross-matched within 3.5″ to GALEX (Galaxy Evolution Explorer) sources using the GALEX All-Sky Imaging Survey Data Release 5 <cit.>. The probability of a spurious GALEX source within the search radius is about 1.5%. Near-infrared photometry is from public extragalactic surveys that have been carried out by the Visible and Infrared Survey Telescope for Astronomy <cit.>, the European Southern Observatory's (ESO) 4-m class telescope dedicated to imaging the southern sky. The VISTA Deep Extragalactic Observations survey <cit.> provides photometry in the ZYJHKs bands to the depth Ks(AB) ≈ 24 mag over a total area of 12 deg^2 split in three distinct fields. The VISTA Hemisphere Survey (VHS; PI McMahon, Cambridge, UK) will image the entire southern sky in at least the J and Ks filters to a limiting magnitude of Ks(AB) ≈ 20 mag, with certain regions of the survey, like the XMM-XXL-N, also observed in the H-band. The data we used are from the 4th Data Release (DR4) of the VIDEO survey and the 3rd Data Release (DR3) of the VHS produced by the Vista Science Archive (VSA), which maintains data products generated by the VISTA InfraRed CAMera (VIRCAM). The VISTA Data Flow System pipeline processing and science archive are described in <cit.>, <cit.> and <cit.>. The VIDEO-DR4 covers about 3.6 deg^2 within the XMM-XXL-N survey field, while the VHS-DR3 covers the entire field. The search radius for cross-matching CFHTLenS positions with VIDEO-DR4 and VHS-DR3 counterparts is fixed to 1.5″. For 99% of the associations the radial offsets between the optical and near-infrared positions are less than 0.5″. At the depth of the VIDEO-DR4 survey (Ks ≈ 24 mag) the probability of a spurious near-infrared galaxy within 0.5″ is 2%.

The AllWISE data are used to compile mid-infrared photometry in the WISE W1, W2, W3, and W4 bands with central wavelengths of 3.4, 4.6, 12, and 22 μm, respectively. The X-ray/AllWISE identification follows the likelihood ratio method as described in <cit.>. Finally, we also cross-match the positions of the optical counterparts of X-ray sources to the SWIRE (Spitzer Wide-area InfraRed Extragalactic) DR2 source catalogue <cit.> using a search radius of 1.5″. The Spitzer/SWIRE survey covers about 9 deg^2 in the XMM-XXL-N field. The Spitzer photometry is not used in the determination of photometric redshifts but only to explore the positions of X-ray sources on the Spitzer mid-infrared colour-colour diagrams proposed in the literature to select AGN (see Section <ref>). A summary of the multiwavelength properties of the XMM-XXL-N 2-8 keV selected sample is presented in Table <ref>.

§.§ Optical spectroscopy

There are a number of spectroscopic surveys in the XMM-XXL-N survey area targeting various classes of extragalactic populations, both galaxies and AGN. A dedicated follow-up spectroscopic programme of X-ray sources in the XMM-XXL-N equatorial field was carried out by the Sloan telescope as part of the SDSS-III ancillary programme <cit.>. A detailed description of these data, including visual inspection and redshift quality assessment, is presented in <cit.>. The XXL equatorial field also lies within the footprint of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013) programme, which provides redshifts for UV/optically-selected broad-line QSOs and luminous red galaxies. <cit.> presents redshifts for X-ray sources in the original XMM-LSS survey field <cit.>, which is part of the larger XMM-XXL survey. The VIMOS Public Extragalactic Redshift Survey <cit.> targets galaxies to a magnitude limit of i(AB) = 22.5 mag, which are pre-selected on the basis of their optical colours to maximise the number of spectroscopic identifications in the redshift interval z = 0.5-1.2. The first public data release of the VIPERS includes a total of 30,523 sources in the equatorial XMM-XXL-N field and covers about 5 deg^2 of the X-ray survey area. The VIMOS VLT Deep Survey <cit.> provides spectroscopy for about 10,000 optically-selected sources to i(AB) = 24 mag over an area of about 0.5 deg^2 within the XXL equatorial field. The ESO large programme 180.A-0776 (P.I. O. Almaini) followed up with spectroscopy about 3,000 galaxies to K(AB) = 23 mag selected in the UKIDSS Ultradeep Survey Field (UDS).
A description of these observations is presented in <cit.> and <cit.>.

The spectroscopic catalogues are matched to the CFHTLenS optical positions within a radius of 1 arcsec. We select redshifts with quality flags in the published catalogues from which they were retrieved that indicate a probability better than ≈90% of being correct. There are 2311 redshift measurements out of 4673 X-ray sources with CFHTLenS identifications. The majority (2012) are from the SDSS spectroscopic programmes. There are a further 129 redshifts from <cit.>, 91 from the UDS follow-up spectroscopy, 62 from VIPERS and 17 from VVDS. The number of spectroscopic identifications of the XMM-XXL-N 2-8 keV selected sample is listed in Table <ref> and their breakdown into optically resolved and unresolved sources is shown in Table <ref>.

§.§ Photometric redshifts

For X-ray sources without optical spectroscopy we estimate photometric redshifts using the LePhare code <cit.>. We apply different cuts to the X-ray selected sample to optimise the template fits to the observed Spectral Energy Distributions (SEDs) of certain groups of sources, and improve the overall quality of the photometric redshift estimates. We present results separately for X-ray sources that overlap with the VIDEO survey area and therefore benefit from the deep near-infrared photometry of that programme. Different template libraries are applied to X-ray sources associated with optically resolved (extended optical-light profile) and unresolved (point-like) counterparts. This is motivated by the work of <cit.>, who found that the morphological information of the optical counterparts of X-ray sources can be used as a prior when computing photometric redshifts. X-ray AGN associated with optically extended sources are typically at low and moderate redshifts and have SEDs with a large contribution from the host galaxy. Point-like sources have SEDs dominated by emission from the central engine and are likely to lie, on average, at higher redshifts. In addition to the different template library for each morphological class, we follow <cit.> and impose a B-band absolute magnitude prior -30 < M_B < -20 mag when estimating the photometric redshifts of the optically unresolved sample. X-ray sources with stellarity parameter class_star > 0.8 in the publicly available CFHTLenS catalogues are considered optically unresolved, and are likely to be dominated by emission from the AGN in the optical bands. It is also found that using different templates for samples split by X-ray flux improves the photometric redshift results. Previous studies have shown that the template libraries used to determine photometric redshifts for X-ray selected AGN in bright surveys <cit.> cannot be applied to sources detected at fainter flux limits in deeper samples, such as the 4 Ms Chandra Deep Field South <cit.>.

The model template libraries that are applied independently to different sub-samples of X-ray sources are (i) the galaxy SEDs of <cit.>, (ii) the hybrid QSO/galaxy templates presented by <cit.> and (iii) the templates developed by Tasnim Ananna et al. (in prep.) for the determination of photometric redshifts for X-ray sources in the Stripe 82X survey field <cit.>. Extinction is added to the templates as a free parameter using the <cit.> and the <cit.> attenuation laws. The photometry used for X-ray sources in different wavebands corresponds to the “total” source flux estimates in the corresponding catalogue.
For the VIDEO and the VHS surveys in particular we use either Kron or Petrosian-type magnitudes (appropriate for resolved sources), or fluxes integrated within fixed-size apertures (corrected for losses because of the size of the Point Spread Function), which are listed in the VSA databases. Before estimating photometric redshifts we explore potential systematic offsets among photometric bands, which may arise because of e.g. variations in seeing conditions as a function of observing time and wavelength, or differences in the definition of `total' magnitude in different catalogues. Such zero-point offsets among wavebands are estimated by LePhare using a total of 2463 spectroscopically confirmed galaxies from the VIPERS-DR1 <cit.>. The redshift is fixed and the best-fit template for each source is found. Photometric offsets are then estimated for each waveband to minimize the differences between the model templates and the observed magnitudes. These offsets are then applied to the observed photometry prior to the determination of photometric redshifts. Spectroscopically confirmed X-ray selected AGN are not the optimal sample for this calculation because of potential variability issues affecting different photometric bands observed at separate epochs.

In the case of optically-unresolved X-ray sources (class_star > 0.8) we use the hybrid QSO/galaxy template library of <cit.> for X-ray fluxes f_X > 10^-13 erg s^-1 cm^-2, and the templates of Tasnim Ananna et al. (in prep.) for fainter X-ray sources. The quality[The quality of photometric redshifts is quantified by the rms scatter, σ_NMAD, of the quantity |z_spec - z_phot|/(1 + z_spec) and the outlier fraction, η, defined as the fraction of sources with |z_spec - z_phot|/(1 + z_spec) > 0.15.] of the photometric redshift measurements is rather poor, with σ_NMAD = 0.07 and outlier fraction η = 27%. These numbers do not improve for sources with near-infrared information from the VIDEO or VHS surveys. <cit.> showed that for broad-line X-ray selected QSOs with blue featureless continua, narrow-band photometry as well as some handle on the source variability are essential to reduce η below the 10% level. Without this additional information, and for the spectral bands used in this work, <cit.> estimate an expected catastrophic redshift failure rate of η ≈ 20-40%, similar to what is found here. Another potential issue relates to the separation between optically extended and point-like sources using morphological information from seeing-limited ground-based images. <cit.>, for example, underlined the importance of high spatial resolution imaging from e.g. the Hubble Space Telescope to identify optically unresolved X-ray sources and apply the correct set of templates to the different morphological classes. The poor quality of the photometric redshift estimates for optically unresolved X-ray sources is mirrored by the corresponding photometric redshift Probability Distribution Functions (PDZs), which are typically broad. It is found, for example, that for the majority (89%) of the optically unresolved X-ray AGN with available spectroscopy, the true (i.e. spectroscopic) redshift lies within the 5th and 95th percentiles of the estimated PDZ distribution. We wish to propagate the uncertainties in the photometric redshift estimates of point-like X-ray AGN in the analysis. There are at least two ways to achieve that. The first is to use directly the PDZs of individual sources. The second exploits the fact that most of the optically unresolved XMM-XXL-N X-ray sources have spectroscopic redshift measurements (e.g.
580 out of 911 in the hard-band selected sub-sample; see Table <ref>), and therefore the redshift distribution of the population is well constrained (see Fig. <ref>). We may therefore assume that optically unresolved X-ray sources without spectroscopic redshifts follow the same redshift distribution, dN(z)/dz, as the spectroscopically confirmed part of the population in the XMM-XXL-N field. This latter approach is adopted in the analysis that follows. We note however, that our results on the AGN space density (Section <ref>) do not change if instead we adopt the PDZs of individual sources. This is because the broad PDZs produced by LePhare are representative of the overall level of uncertainty in the determination of photometric redshifts for X-ray sources with point-like optical-light profiles. Figure <ref> plots the spectroscopic redshift histogram of X-ray AGN that are unresolved in the optical bands. This distribution is fit with the function dN(z)/dz ∝ exp[-(z - z̅)^2 / 2σ_z^2], where z̅ and σ_z are free parameters (red curve in Figure <ref>). Spectroscopically unidentified and optically unresolved X-ray sources are assigned a redshift PDZ that follows the relation above. We caution that the optically unresolved X-ray sources without spectroscopic redshifts extend to fainter optical magnitudes (median r ≈ 22 mag) compared to the spectroscopically confirmed part of the population (median r ≈ 20.5 mag). Nevertheless, we find that the redshift distribution of the optically unresolved X-ray sources with secure redshifts is not a strong function of magnitude. The median and 68th percentiles of the distribution increase from z = 1.4^+0.8_-0.6 for sources with r < 21.5 mag to z = 1.7^+0.6_-0.7 for r > 22 mag. These numbers are broadly confirmed using the recent COSMOS-Legacy X-ray source catalogue <cit.>. We select a total of about 100 X-ray sources from that sample with relatively bright X-ray fluxes f_X(2-10 keV) > 10^-14 erg s^-1 cm^-2, similar to the XMM-XXL-N limit, optically faint counterparts r > 22.0 mag, and unresolved optical light profiles. The redshift (photometric or spectroscopic) distribution of this subset has a median and 68th percentiles z = 1.5^+0.4_-0.3.

X-ray sources with extended optical profiles (CFHTLenS parameter class_star < 0.8) are likely to have broad-band optical/near-infrared emission dominated by stellar light with little contamination from the central engine. For these sources normal galaxy templates may provide an adequate representation of the broad-band SEDs. Experimentation showed that the best photometric redshift results for this class of sources are obtained by adopting the <cit.> normal galaxy template library for sources with X-ray fluxes f_X < 10^-14 erg s^-1 cm^-2, and the hybrid QSO/galaxy templates developed by Tasnim Ananna et al. (in prep.) for brighter sources. For optically extended X-ray sources with additional near-infrared photometry from the VIDEO survey we estimate photometric redshift quality measures σ_NMAD = 0.06 and η = 4.5%. Figure <ref> explores further the quality of the photometric redshifts for X-ray AGN by plotting the quantity (z_phot - z_spec)/(1 + z_spec) as a function of spectroscopic redshift and r-band optical magnitude. The catastrophic redshift failure rate does not appear to change with optical magnitude, at least to the limit r ≈ 23 mag, where a sufficient number of spectroscopically confirmed sources is available. The fraction of catastrophic redshift failures also appears to be stable with redshift, at least out to z ≈ 1.
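The photometric-redshift quality statistics quoted throughout this section can be computed with a few lines of code; the following is a minimal sketch, assuming the usual normalised-median-absolute-deviation (1.48 × MAD) convention for σ_NMAD (variable names are illustrative):

```python
import numpy as np

def photz_quality(z_spec, z_phot):
    """Return (sigma_NMAD, eta) for paired spectroscopic and photometric
    redshift arrays; eta is the fraction of catastrophic outliers with
    |z_spec - z_phot| / (1 + z_spec) > 0.15."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_phot = np.asarray(z_phot, dtype=float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    # Normalised median absolute deviation of the scaled residuals.
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
    eta = np.mean(np.abs(dz) > 0.15)
    return sigma_nmad, eta
```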
In the analysis that follows, optically extended X-ray sources with VIDEO near-infrared photometry are assigned the PDZs determined directly by the LePhare code. The availability of deep infrared photometry in addition to optical data is key for good quality photometric redshift measurements in the case of optically extended X-ray sources. For the subset of this population that lies outside the VIDEO near-infrared survey area we estimate photometric redshift quality measures σ_NMAD = 0.06 and η = 16%. The VHS survey near-infrared photometry, which is available over the entire XMM-XXL-N field, is shallower (Ks ≈ 20 mag) than the VIDEO survey (Ks ≈ 24 mag), and does not substantially improve the photometric redshifts of optically-extended X-ray sources. Also, the relatively high catastrophic redshift fraction is not represented in the broadness of the PDZs estimated by LePhare. Only about half of the optically extended X-ray sources outside the VIDEO area with available optical spectroscopy have PDZs with 5th and 95th percentiles that bracket the true spectroscopic redshift. Therefore for this class of sources we choose a different approach for propagating the photometric redshift uncertainties in the analysis that follows. Figure <ref> shows the distribution of the quantity |z_phot - z_spec|/(1 + z_spec) for the sample of spectroscopically confirmed and optically extended XMM-XXL-N X-ray sources outside the VIDEO area, where the parameters z_phot, z_spec in the relation above are the photometric and the spectroscopic redshift measurements, respectively. The distribution of this quantity can be represented by a Lorentzian that has wings which are sufficiently broad and consistent with the estimated catastrophic redshift fraction, η = 16%. In the analysis that follows, uncertainties and potential systematics affecting photometric redshift measurements for optically extended X-ray selected AGN without VIDEO photometry are compensated for, at least to first approximation, by using PDZs derived from the Lorentzian plotted in Figure <ref>.

In Figure <ref> we explore further the quality of the photometric redshift estimates for sources with optically extended light profiles and without VIDEO photometry. There is no strong trend of increasing η at faint optical magnitudes, although the number of spectroscopically confirmed X-ray AGN beyond r ≈ 22.5 mag drops substantially and therefore small number statistics affect any conclusions. The fraction of catastrophic redshift failures in Figure <ref> appears to increase at redshifts z ≳ 1. We note however the small number of spectroscopic redshift measurements of X-ray sources with extended optical light profiles beyond z = 1, which has an impact on any conclusions.

In addition to the optically identified X-ray sources in the sample there are also 240 2-8 keV detections in the XMM-XXL-N without secure counterparts in the CFHTLenS (12% of the sample; see Table <ref>). Figure <ref> presents the X-ray flux distribution of these sources. The majority have f_X(2-10 keV) > 10^-14 erg s^-1 cm^-2. We explore the expected redshift distribution of this population using the 2 deg^2-wide COSMOS-Legacy survey <cit.>. We select COSMOS-Legacy survey X-ray sources with f_X(2-10 keV) > 10^-14 erg s^-1 cm^-2 and faint optical counterparts, r > 24.5 mag, i.e. close to the limiting magnitude of the CFHTLenS survey data used in this paper. We use spectroscopic or photometric redshifts for these sources listed in <cit.>. The resulting redshift distribution is shown in Figure <ref>. Most sources are in the interval z = 1-3.
We choose to assign XMM-XXL-N sources without optical counterparts a weak prior that is flat in the redshift interval z = 1-6. The breakdown of redshift measurements, photometric or spectroscopic, for the 2-8 keV selected XMM-XXL-N sample is presented in Table <ref>.

§.§ X-ray spectral analysis

The X-ray spectra of individual sources are extracted following methods described in <cit.>. Events with patterns up to 4 and 12 for the PN and MOS detectors, respectively, are included. The relevant ARF and RMF calibration files are generated using the SAS tasks arfgen and rmfgen. Because of the similar responses of the MOS1 and MOS2 CCDs, the spectra from those two detectors are coadded into a single spectrum. The Bayesian X-ray Analysis package <cit.> is used to fit the X-ray spectra of individual sources. The adopted model includes (i) components that account for photoelectric absorption and Compton scattering from obscuring material along the line-of-sight that is distributed with a toroidal geometry close to the central supermassive black hole, (ii) an independent soft component, which is often observed in the spectra of obscured AGN <cit.> and could be attributed to Thomson scattering of the intrinsic X-ray emission off ionised material within the torus opening angle <cit.>, and (iii) Compton scattering (reflection) of radiation on dense material outside the line-of-sight. We adopt the torus model of <cit.> to approximate the transmitted spectrum of an AGN through obscuring material. The intrinsic spectrum is assumed to follow a power-law parametrised by the spectral index Γ. The <cit.> model assumes a sphere of constant density with two symmetric conical wedges, with vertices at the centre of the sphere, removed. The opening angle of the cones is fixed to 45° and the viewing angle of the observer is set to the maximum allowed in the model, 87°, i.e. nearly edge on. The soft component is approximated by a power-law model with the same spectral index, Γ, as the intrinsic power-law spectrum. The reflection component is modelled with the pexrav model presented by <cit.>. The spectral index of that component is the same as that of the intrinsic power-law spectrum. The adopted model (torus + zpowerlw + pexrav in xspec terminology) has five free parameters: the slope of the intrinsic power-law spectrum, Γ, the line-of-sight column density of the obscurer, N_H, and the normalisations of the intrinsic power-law, soft scattering and reflection components.

The adopted spectral model is a trade-off between a reasonable representation of the basic AGN X-ray spectral components found in local samples and a relatively small number of free parameters to be constrained from data with a typically low number of photons. <cit.> also showed that the adopted spectral components are the minimum needed to represent the X-ray properties of AGN in the Chandra Deep Field South. We impose that the normalisations of the soft power-law and reflection components cannot exceed 10% of the intrinsic power-law spectrum. The redshift of the soft power-law and reflection components is tied to that of the torus model. We use a Gaussian prior for the spectral index Γ with mean 1.95 and standard deviation 0.15 <cit.>. The hydrogen column density of the line-of-sight obscuration is assigned a flat prior in the logarithmic range log_10(N_H/cm^-2) = 20-25. The source redshift is fixed to the spectroscopic value if available. In the case of photometric redshift estimates the corresponding PDZs are used as priors. The XMM PN and MOS background spectrum is fit with the models constructed by <cit.>, which include an empirical instrumental component <cit.> and an astrophysical component <cit.>.
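The parameter priors described above map naturally onto the unit-cube transform used by nested-sampling engines such as those driving BXA. The following is a generic sketch of that mapping, not the actual BXA calls; the normalisation bounds are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def prior_transform(u):
    """Map a unit-cube point u (numpy array of length 5) to parameters.

    u[0] -> Gamma: Gaussian prior, mean 1.95, sigma 0.15
    u[1] -> log10(N_H / cm^-2): flat prior over 20-25
    u[2:5] -> log10 normalisations of the torus, soft scattered and
              reflection components (bounds below are hypothetical;
              the 10% relative caps are enforced in the model itself)
    """
    gamma = norm.ppf(u[0], loc=1.95, scale=0.15)
    log_nh = 20.0 + 5.0 * u[1]
    log_norms = -8.0 + 6.0 * u[2:5]
    return np.concatenate([[gamma, log_nh], log_norms])
```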
The energy range used in the spectral analysis is 0.5-8 keV. The output of the spectral analysis using the BXA are posterior probability distribution functions in the multidimensional space of the spectral fit free parameters. These are then converted to posterior chains in the parameter space of intrinsic AGN luminosity [log L_X(2-10 keV)], hydrogen column density (log N_H) and redshift (z). Figure <ref> plots the distribution of total (source and background) spectral counts in the 0.5-8 keV band of the XMM-XXL-N X-ray sources. The median of this distribution is 63 photons. For completeness, also shown in Figure <ref> is the source-only (i.e. after subtracting the expected background counts) spectral count histogram. We note however, that when analysing the X-ray spectra of individual sources the background is not subtracted but fitted to the data as a separate model component. The uncertainties of the derived parameters (e.g. log N_H) depend on the energy distribution of the photons (i.e. overall spectral shape) and on the accuracy of the adopted redshift measurements (i.e. spectroscopic vs photometric). Examples of confidence intervals for the log N_H and log L_X of individual sources with spectroscopic redshift measurements are presented in Figures <ref> and <ref> of Section <ref>.

Figure <ref> presents the distribution in the intrinsic X-ray luminosity (i.e. corrected for line-of-sight obscuration) vs redshift space of the XMM-XXL-N X-ray sources, in comparison with the Chandra surveys in the COSMOS <cit.>, Extended Groth Strip <cit.> and 4 Ms Chandra Deep Field South <cit.> fields. For the Chandra survey fields the data-points are based on the spectral analysis results of <cit.>. This figure demonstrates the complementarity of the XMM-XXL-N sample to Chandra survey fields.

§ X-RAY LUMINOSITY FUNCTION DETERMINATION

This section describes our methodology for determining the X-ray luminosity function of AGN. This is defined as the space density of the population at a given redshift (z), X-ray accretion luminosity in the 2-10 keV energy band [L_X(2-10 keV)], and obscuration along the line-of-sight parametrised by the column density of neutral hydrogen (N_H). The likelihood of a particular luminosity function parametrisation given a set of observations is described by the product of the Poisson probabilities of individual sources,

ℒ(d_i | θ) = e^-λ × ∏_i=1^N ∫ dlog L_X (dV/dz) dz dlog N_H × p(d_i | L_X, z, N_H) ϕ(L_X, z, N_H | θ),

where dV/dz is the comoving volume per solid angle at redshift z, d_i signifies the data available for source i and θ represents the parameters of the luminosity function model, ϕ(L_X, z, N_H | θ), that are to be determined. The multiplication is over all sources, N, in the sample and the integration is over redshift, X-ray luminosity and hydrogen column density. The quantity p(d_i | L_X, z, N_H) is the probability of a particular source having redshift z, X-ray luminosity L_X and line-of-sight hydrogen column density N_H. This captures uncertainties in the determination of all three parameters because of e.g. photometric redshift errors or uncertainties in the X-ray spectra, under the assumption that the adopted spectral model provides a reasonable representation of the basic AGN X-ray spectral properties <cit.>. In equation <ref>, λ is the expected number of detected sources in a survey for a particular set of model parameters θ,

λ = ∫ dlog L_X (dV/dz) dz dlog N_H × A(L_X, z, N_H) ϕ(L_X, z, N_H | θ),

where A(L_X, z, N_H) is the sensitivity curve, which quantifies the survey area over which a source with X-ray luminosity L_X, redshift z and column density N_H can be detected.
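A minimal numerical sketch of this likelihood is given below, anticipating the importance-sampling evaluation described in the next section: the per-source marginal integral is approximated by the average of ϕ over the posterior chain of that source. The grid, `area`, `dVdz` and `phi` callables are illustrative, not the actual implementation:

```python
import numpy as np

def log_likelihood(phi, chains, lgL, z, lgNH, area, dVdz):
    """Poisson log-likelihood of a luminosity-function model phi.

    phi(lgL, z, lgNH): vectorised space-density model.
    chains: per-source (lgL, z, lgNH) posterior-sample arrays from the
            X-ray spectral fits.
    lgL, z, lgNH: 1D integration grids; area: sensitivity A(L_X, z, N_H);
    dVdz: comoving volume per unit redshift and solid angle.
    """
    # Expected number of detections (the equation for lambda above).
    LG, Z, NH = np.meshgrid(lgL, z, lgNH, indexing="ij")
    integrand = phi(LG, Z, NH) * area(LG, Z, NH) * dVdz(Z)
    lam = np.trapz(np.trapz(np.trapz(integrand, lgNH), z), lgL)

    # Per-source marginal: the integral over p(d_i|...) reduces to the
    # average of phi over the source's posterior chain (importance
    # sampling, with volume terms absorbed in the chain construction).
    lnl = -lam
    for s_lgL, s_z, s_lgNH in chains:
        lnl += np.log(np.mean(phi(s_lgL, s_z, s_lgNH)))
    return lnl
```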
A non-parametric approach for the determination of the AGN space density is adopted. A three-dimensional grid in redshift, luminosity and column density is defined, and ϕ(L_X, z, N_H) is assumed to be constant within each grid pixel with dimensions (log L_X ± dlog L_X, z ± dz, log N_H ± dlog N_H). The value of the AGN space density in each grid pixel is determined via equation <ref>. The edges of the grid pixels in each of the three dimensions are log L_X = (41.0, 42.0, 42.5, 43.0, 43.5, 44.0, 45.0, 46.0, 47.0), z = (0.0, 0.5, 1.0, 1.5, 2.0, 6.0), log N_H = (20.0, 22.0, 23.0, 24.0, 25.0). The total number of free model parameters is 160. Importance sampling <cit.> is used to evaluate the integrals in equation <ref>. For each source we draw (log L_X, z, log N_H) points from the equal-probability chains produced by the X-ray spectral analysis step. The luminosity function is then evaluated for each sample point, (L_X, z, N_H). The integral of equation <ref> is simply the average luminosity function of the sample. The Hamiltonian Markov Chain Monte Carlo code Stan[<http://mc-stan.org/shop/>] <cit.> is used for Bayesian statistical inference.

Finally we define the quantity N_obs, the observed number of sources within each bin of the 3-dimensional grid in redshift, luminosity and column density <cit.>,

N_obs = ∑_i=1^N (1/ℵ_i) ∫ dlog L_X (dV/dz) dz dlog N_H × p(d_i | L_X, z, N_H) ϕ(L_X, z, N_H | θ),

where the integration limits correspond to the edges of each grid pixel defined above. The summation is over all sources in the sample. The normalisation factor ℵ_i is the integral of the quantity p(d_i | L_X, z, N_H) ϕ(L_X, z, N_H | θ) over the full range of luminosities, redshifts and column densities. N_obs is used for the visualisation of the results. Only bins with N_obs > 1 are shown in the relevant figures and tables. Bins with N_obs < 1 typically have large errors and therefore the observational constraints on the space density of AGN are loose.

§ RESULTS

§.§ X-ray AGN obscuration distribution

A key aim of this work is to estimate the fraction of obscured AGN at the relatively bright X-ray luminosities sampled by the wide-area and shallow XMM-XXL-N survey. This sample includes a non-negligible fraction of AGN with hydrogen column densities in excess of N_H = 10^22 cm^-2. This is shown in Figure <ref> (left panel), which plots the observed log N_H histogram of the full XMM-XXL-N hard-band selected sample. We construct this distribution using two different approaches. The first is summing up the posterior probability distribution functions of individual sources produced by the X-ray spectral analysis. This method, however, does not take into account the form of the AGN X-ray luminosity function, i.e. the fact that luminous sources are rarer than moderate luminosity ones. We therefore use equation <ref> to estimate the observed number of sources, N_obs, as a function of the hydrogen column density. This calculation weighs the N_H posterior distribution of individual sources with the value of the corresponding luminosity function. The histogram of the quantity N_obs provides a more representative view of the observed N_H distribution of the AGN population detected in the XMM-XXL-N. The fraction of observed sources with column densities N_H > 10^22 cm^-2 is about 35%.
Compton-thick AGN with N_H > 10^24 cm^-2 represent a small fraction, ≈2%, of the observed population in the XMM-XXL field. Figure <ref> also shows the log N_H distribution of the sample split into AGN with optically unresolved (middle panel) and optically-extended (right panel) counterparts. As expected, X-ray sources with unresolved (point-like) counterparts are mostly associated with low hydrogen column densities, N_H < 10^22 cm^-2. AGN identified with optically-resolved galaxies have a flatter log N_H distribution. Selected examples of individual XMM-XXL-N obscured AGN are also presented in Figure <ref>. The SDSS optical spectra in this figure show narrow optical emission lines (e.g. [OII] 3727 Å, Hβ, [OIII] 5007 Å) and/or absorption features (e.g. H+K break, Balmer lines) associated with the AGN host galaxy. The corresponding posterior probability distributions in accretion luminosity and hydrogen column density inferred from the X-ray spectral analysis are also shown in Figure <ref>. Within the XMM-XXL-N sample there are also broad optical emission-line AGN (type-1) that show evidence for X-ray obscuration, N_H ≳ 10^22 cm^-2. Such sources represent about 10% of the type-1 AGN sample in the XMM-XXL-N survey <cit.>. Examples of this class of AGN are shown in Figure <ref>. The X-ray obscuration in these systems may be associated with dust-free material close to the central black hole and within the broad-line region <cit.>.

Figure <ref> and Table <ref> present the space density of AGN as a function of 2-10 keV X-ray luminosity for the different redshift and hydrogen column density intervals defined in Section <ref>. Results are shown only up to the redshift bin z = 1.0-1.5. At higher redshifts the XMM-XXL-N space density constraints suffer large uncertainties because of small number statistics, increasing photometric redshift uncertainties and the fraction of optically unidentified sources in the sample, which are expected to be associated preferentially with moderate- and high-redshift AGN. Further observations are needed to mitigate these issues. Deep near-infrared photometry (e.g. VIDEO survey depths) over the full XMM-XXL-N field, for example, will allow both identifications and improved photometric redshift estimates for obscured and high-redshift AGN at z ≳ 1.5. Dedicated follow-up spectroscopic surveys of optically faint (e.g. r ≳ 22.5 mag) X-ray sources in the XMM-XXL-N are also needed to improve constraints on the AGN space density, particularly for obscured sources, at moderate and high redshift.

In Figure <ref> the non-parametric constraints from the XMM-XXL-N field are also compared with previous estimates of the X-ray luminosity function of AGN <cit.>. Overall there is good agreement with previous studies. For Compton-thin AGN in particular, the shallow XMM-XXL-N survey provides competitive constraints on the AGN space density to L_X(2-10 keV) ≈ 10^45 erg s^-1, i.e. close to and above the knee of the X-ray luminosity function. For Compton-thick levels of obscuration (N_H > 10^24 cm^-2) we find AGN space densities consistent with the recent work of <cit.>. This is also broadly consistent with <cit.> in the case of low-redshift Compton-thick AGN selected in the 60-month BAT <cit.> all-sky survey. Also shown are the recent constraints from <cit.> on the luminosity function of Compton-thick AGN using the 70-month Swift-BAT all-sky survey hard X-ray catalogue. The <cit.> space densities are lower than previous studies, including those from <cit.>.
This discrepancy is surprising given that the two studies find very similar observed fractions of Compton-thick sources among the total AGN sample. The source of the difference is likely related to the X-ray spectral models adopted to determine the intrinsic luminosity of heavily obscured sources and the method used to extrapolate from the observed number of Compton-thick sources to the parent population. Our constraints are also systematically higher than those of <cit.>. Current estimates of the space density of Compton-thick AGN will improve by adding high-energy observations, >10 keV (e.g. NuSTAR), to the existing X-ray spectroscopy at observed energies below about 10 keV provided by XMM or Chandra.

We also explore the fraction of obscured Compton-thin AGN as a function of luminosity and redshift. We define this fraction as the ratio of the space densities of AGN in the column-density intervals N_H = 10^22-10^24 cm^-2 and 10^20-10^24 cm^-2. The results are presented in Table <ref> and are plotted in Figure <ref>. We find that the obscured Compton-thin AGN fraction in the luminosity interval log L_X(2-10 keV)/erg s^-1 = 43.5-44 increases from 46^+11_-12% at a mean redshift z = 0.25 to 75 ± 5% at a mean redshift z = 1.25. At brighter luminosities, L_X(2-10 keV) > 10^44 erg s^-1, this fraction is found to be about 35%, independent of redshift. Figure <ref> also compares our results with recent constraints in the literature <cit.>. Within the uncertainties there is reasonable agreement with previous estimates. It is also worth emphasising the smaller uncertainties of the XMM-XXL-N data points at L_X(2-10 keV) ≳ 10^44 erg s^-1 compared to previous studies. This demonstrates the complementarity of deep Chandra and wide/shallow XMM surveys in studies of the distribution of AGN obscuration over a wide luminosity baseline.

§.§ Comparison with infrared-selected AGN

Next it is investigated if the total space density of X-ray AGN, including obscured sources, is consistent with constraints from infrared-selected samples. Like in the X-ray, AGN selected at these longer wavelengths are less prone to the effects of obscuration, albeit with non-negligible levels of contamination by non-AGN sources. For this exercise the X-ray luminosity function is integrated in the range N_H = 10^20-10^24 cm^-2 (i.e. only Compton-thin sources) to avoid current uncertainties in the determination of the space density of Compton-thick AGN. We also adopt the most recent results on the space density of infrared-selected AGN presented by <cit.>. They fit galaxy and AGN templates to the multi-waveband spectral energy distribution of Herschel infrared-selected galaxies in the GOODS-South <cit.> and the COSMOS fields to identify those with a statistically significant AGN component. These are then used to constrain the space density of AGN, including heavily obscured systems, as a function of the bolometric AGN luminosity inferred from the SED fits. We use the parametric AGN luminosity functions at different redshifts provided by <cit.> to interpolate to the mean of the three redshift intervals adopted in this paper. Bolometric luminosities are converted to X-ray luminosities using the bolometric corrections of <cit.>. These results are plotted in Figure <ref>. There is broad agreement between the X-ray luminosity function of Compton-thin X-ray selected AGN and that of infrared-selected ones, given the current level of uncertainties in the determination of AGN space densities, and systematics associated with e.g. the conversion from bolometric to X-ray luminosities.
Additionally, the level of agreement between the X-ray and infrared luminosity functions argues against a large population of heavily obscured sources that are identified in the infrared and are missing from X-ray wavelengths. The addition of Compton-thick AGN to the X-ray luminosity function constraints in Figure <ref> would further strengthen this argument. Assuming for example a flat 30% fraction of Compton-thick AGN independent of redshift and luminosity <cit.> would shift the X-ray curves and data-points in Figure <ref> upward by about 0.15 dex.

Next we explore the completeness of infrared colour-selection methods proposed in the literature to identify obscured and luminous AGN, using as starting point the XMM-XXL-N AGN sample. Figure <ref> plots the distribution of XMM-XXL-N AGN in the 2-dimensional plane that consists of WISE W_1 - W_2 infrared colour and X-ray luminosity. Different panels correspond to different redshift and hydrogen column-density intervals. The density contours in that figure are constructed using the redshift, luminosity and column density probability distribution functions derived from the X-ray spectral analysis of individual sources. A single source may therefore spread out in different luminosity and redshift bins. Iso-density curves correspond to the 68th and 95th percentiles of the distribution. We adopt the colour limit W_1 - W_2 > 0.8 defined by <cit.> for selecting AGN. Formally their selection also includes a magnitude cut in the WISE W2 filter, W2 < 15.05 mag. This is primarily driven by the depth of the WISE observations in the COSMOS field, which was used to calibrate the WISE AGN selection, and the requirement to minimise contamination. Our work uses the deeper AllWISE catalogue. Additionally, contamination is not an issue since all the X-ray sources in the sample are AGN. Figure <ref> shows that the fraction of sources that pass the cut W_1 - W_2 > 0.8 changes primarily with redshift. The impacts of obscuration and X-ray luminosity appear to be second-order effects. We caution, nevertheless, that the luminosity baseline of our sample is not large. The fraction of X-ray selected AGN above the WISE colour cut proposed by <cit.> increases from about 20-30% at z < 0.5 to >50% at z > 1, with some variation among column density bins at fixed redshift. Similar conclusions apply to the AGN mid-IR selection criteria proposed by <cit.>. Figure <ref> shows how the XMM-XXL-N X-ray selected AGN are distributed on the Spitzer mid-IR colour-colour plane relative to the selection wedge proposed by <cit.>. The fraction of XMM-XXL-N AGN within that wedge increases from about 20-30% at z ≈ 0.5 to 60-70% at z ≈ 1.5. These fractions are not particularly sensitive to intrinsic AGN X-ray luminosity cuts.

§.§ Comparison with UV-selected AGN

In this section the space density of unobscured X-ray selected AGN is compared to that of UV-selected QSOs. In the X-ray literature unobscured AGN are often defined as those with column densities N_H < 10^22 cm^-2. Recent evidence suggests that the consistency between optically-classified type-1 AGN and X-ray unobscured ones is maximised at a lower threshold, log N_H/cm^-2 = 21.5 <cit.>. This limit is adopted here to compare the luminosity functions of X-ray and UV/optically-selected AGN samples. The space density of X-ray unobscured AGN (i.e. log N_H/cm^-2 = 20-21.5) is then approximated by simply scaling the estimated space densities in the logarithmic interval log N_H/cm^-2 = 20-22 by a factor of 1.5. The underlying assumption is a flat distribution of AGN in the logarithmic column density interval log N_H/cm^-2 = 20-22.
For the luminosity function of UV/optically-selected QSOs we use the LEDE (Luminosity and Density Evolution) parametrisation of <cit.>, which is constrained by observations in the redshift interval 0.4-1.5. The conversion of QSO absolute optical magnitudes to X-ray luminosities follows the steps described in <cit.>. At lower redshift, z < 0.4, we also compare our results with the broad-line AGN luminosity function presented by <cit.>. From the latter study we use the double power-law parametrisation of the redshift z = 0 bolometric luminosity function. This is converted to X-rays using the <cit.> bolometric corrections. Following <cit.>, pure density evolution of the form (1+z)^5 is also included to determine the broad-line AGN luminosity function at redshifts z > 0. The comparison between X-ray and optical type-1 AGN luminosity functions is presented in Figure <ref>. Overall there is fair agreement between the space densities of unobscured (log N_H/cm^-2 = 20-21.5) X-ray selected AGN and UV/optically-selected QSOs. At low redshift the <cit.> luminosity function appears to exceed the X-ray space densities of unobscured AGN below about L_X(2-10 keV) ≈ 10^43 erg s^-1. A similar trend has been reported by <cit.> when comparing their luminosity function with the soft-band (0.5-2 keV) X-ray luminosity function of <cit.>. A possible explanation is that the faint-end of the <cit.> luminosity function includes a fraction of Seyfert 1.8s and 1.9s, i.e. not purely type-1 AGN. These sources correspond to a higher X-ray column density threshold, e.g. log N_H/cm^-2 ≈ 22.3 <cit.>. Differences in e.g. the adopted bolometric corrections or the definition of type-1 AGN may also contribute to the difference between the faint-end of the <cit.> and X-ray luminosity functions.

§.§ eROSITA predictions

Finally we explore expectations from the eROSITA All Sky Survey on the determination of the AGN space density as a function of redshift, accretion luminosity and line-of-sight hydrogen column density. For this exercise we assume (i) that the AGN population follows the median space density constraints of <cit.>, (ii) that the uncertainty of the luminosity function scales as δϕ(L_X, z, N_H)/ϕ(L_X, z, N_H) ∝ 1/√N, where N is the expected number of AGN in a given (L_X, z, N_H) bin (this scaling is sketched below), (iii) the 4-year depth of the eROSITA All Sky Survey in the soft band, f_X(0.5-2 keV) = 1.5 × 10^-14 erg s^-1 cm^-2 <cit.>, over a total area of 5000 deg^2, and (iv) the X-ray spectral model described in Section <ref> with Γ = 1.9 and the normalisations of the soft power-law and reflection components fixed to 10% and 1%, respectively, of the normalisation of the intrinsic power-law spectrum.
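The error-scaling assumption (ii) amounts to simple counting statistics per bin; a minimal sketch, assuming for simplicity that every AGN in the bin is detected (i.e. ignoring the flux limit, so the estimate is optimistic; all names are illustrative):

```python
import numpy as np

def frac_error(phi_med, dlogL, dz, dlogNH, dVdz_mid, area_deg2=5000.0):
    """delta_phi / phi ~ 1/sqrt(N) in one (L_X, z, N_H) bin.

    N is the expected number of AGN in the bin: the median space density
    phi_med times the bin's log-space widths and comoving volume, where
    dVdz_mid is the comoving volume per unit z and solid angle at the
    bin centre."""
    sr = (np.pi / 180.0) ** 2 * area_deg2  # survey solid angle in steradian
    n_expected = phi_med * dlogL * dlogNH * dVdz_mid * dz * sr
    return 1.0 / np.sqrt(n_expected)
```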
The resulting eROSITA predictions are plotted in Figure <ref> along with the non-parametric constraints on the AGN space density from <cit.> and the XMM-XXL-N. This figure shows that eROSITA will provide excellent statistics for AGN population studies at redshifts z ≲ 0.5 over a range of X-ray luminosities, L_X(2-10 keV) ≈ 10^43-10^45 erg s^-1, and for column densities approaching the Compton-thick limit, N_H ≈ 10^24 cm^-2. We choose not to provide eROSITA predictions for Compton-thick AGN because current measurements remain somewhat uncertain and the detectability of such heavily obscured sources by the eROSITA All Sky Survey is sensitive to the adopted AGN X-ray spectral model, and in particular the level of the soft scattering component <cit.>. At redshifts z > 0.5 the eROSITA All Sky Survey will be sensitive to moderately obscured AGN, N_H ≲ 10^23 cm^-2, at the bright-end of the luminosity function, L_X(2-10 keV) ≳ 10^44 erg s^-1. At higher levels of obscuration only the most extreme sources in terms of luminosity are expected to be detected, L_X(2-10 keV) ≳ 10^45 erg s^-1. We caution that the eROSITA predictions plotted in Figure <ref> only depend on the expected number of AGN in different (L_X, z, N_H) bins. They do not account for uncertainties in the determination of redshifts (e.g. via photometric methods), the measurement errors of the line-of-sight column density from the X-ray spectra, or uncertainties in the determination of the AGN space density at the bright end of the luminosity function from current surveys.

§ DISCUSSION

This paper presents constraints on the space density of obscured AGN at relatively high accretion luminosities, L_X(2-10 keV) ≈ 10^44 erg s^-1, using one of the largest contiguous X-ray surveys currently available, in the equatorial XMM-XXL field. We show that despite the relatively shallow X-ray depth <cit.>, this sample provides robust estimates of the space density of Compton-thin (N_H < 10^24 cm^-2) AGN at luminosities close to and above the break of the luminosity function <cit.>, where smaller-area X-ray surveys are affected by small number statistics. This point is demonstrated in Figure <ref>, from which it can be inferred that the XMM-XXL improves by a factor of four the number of AGN with L_X(2-10 keV) > 10^44 erg s^-1 and z < 1.5 compared to the combined Chandra COSMOS, AEGIS-XD and CDFS-4Ms samples presented by <cit.>. For heavily obscured, Compton-thick AGN, the XMM-XXL-N provides constraints only in the case of very luminous sources, i.e. L_X(2-10 keV) > 10^45 erg s^-1 at z > 0.5. Additional observations at rest-frame energies >10 keV are also needed to complement existing X-ray spectroscopy below about 10 keV and confirm the high levels of line-of-sight obscuration of these systems. Therefore the Compton-thick AGN space density constraints from the XMM-XXL-N sample should be viewed with caution.

We estimate a Compton-thin obscured AGN fraction of ≈0.35 for luminosities log L_X(2-10 keV)/erg s^-1 = 44-45, independent of redshift to z ≈ 1.5. At somewhat fainter luminosities [log L_X(2-10 keV)/erg s^-1 = 43.75] there is evidence that the obscured AGN fraction increases with redshift, from 0.45±0.10 at z = 0.25 to 0.75±0.05 at z = 1.25. Similar results are claimed by <cit.>, who show that the Compton-thin AGN fraction is a complex function of accretion luminosity and redshift. At high accretion luminosities [log L_X(2-10 keV)/erg s^-1 > 44] feedback processes associated with AGN winds may be responsible for clearing dust and gas clouds in the vicinity of supermassive black holes and hence lowering the obscured AGN fraction. At lower accretion luminosities [log L_X(2-10 keV)/erg s^-1 < 44] AGN feedback is likely subdominant. The increase of the obscured AGN fraction with redshift at lower luminosities may be associated with the overall increase in the dust content of galaxies with redshift, which in turn is linked to the higher specific star-formation rates of galaxies at earlier epochs <cit.>.

We also show that infrared surveys that identify AGN via template fits to the observed multi-waveband Spectral Energy Distribution <cit.> estimate AGN luminosity functions that are broadly consistent with the space densities of Compton-thin AGN determined in the XMM-XXL-N at high accretion luminosities [log L_X(2-10 keV)/erg s^-1 ≳ 43]. This comparison is limited by the current level of random uncertainties in the AGN space densities at both X-ray and infrared wavelengths and the level of contribution of Compton-thick AGN to the X-ray luminosity function.
Despite these points, the level of agreement between the X-ray and infrared AGN luminosity functions is evidence against a large population of obscured sources that is identified by infrared selection methods but is missing at X-ray wavelengths. For example, in Figure <ref> there is reasonable agreement between the space density of Compton-thick AGN candidates estimated using either X-ray or mid-infrared <cit.> selected samples at z ≈ 1. Similar conclusions are also presented by <cit.> by comparing X-ray constraints with infrared selected samples of Compton-thick AGN <cit.> to redshift z ≈ 2. Despite the current level of agreement between the space densities of X-ray and infrared selected AGN, a population of deeply buried sources that remains unidentified at both wavelengths cannot be excluded.

It is further shown that selecting AGN using mid-IR colour cuts only <cit.> leads to redshift-dependent incompleteness. Nevertheless, at z ≳ 1 these methods are efficient and relatively complete (≈50-70%) in compiling luminous AGN samples, including heavily obscured systems. <cit.> used AGN and galaxy template Spectral Energy Distributions to investigate the redshift-dependent efficiency of the AGN selection based on the WISE W1-W2 colour. They showed that suppression of the AGN mid-infrared emission relative to the host galaxy, either because of dust extinction or dilution by stellar light, is an issue for redshifts z ≲ 1.5. <cit.> showed that the exact redshift dependence of the W1-W2 colour-selection efficiency depends on the levels of extinction and dilution of the AGN radiation. For example, templates with an AGN emission fraction of 50% in the wavelength range 0.1-30 μm lie below the colour cut W1-W2 = 0.8 used to select AGN at all redshifts below z = 1.5. An AGN fraction in the wavelength range 0.1-30 μm of 75% reduces the efficiency of selecting AGN via the colour cut W1-W2 = 0.8 only in the relatively narrow redshift range z ≈ 0.5-1. The redshift-dependent incompleteness in Figure <ref> is likely related to the above effects, with lower-redshift AGN being more diluted than higher-redshift sources. Interestingly, the level of the line-of-sight obscuration measured by log N_H appears to be a second-order effect on the incompleteness fractions of Figure <ref>. This may be because at fixed redshift more obscured sources in the flux-limited XMM-XXL-N sample are also expected to have higher X-ray luminosities and therefore a larger contrast of the integrated AGN emission relative to the host galaxy.

Applying a hydrogen column density cut log N_H/cm^-2 = 21.5 to the X-ray sample, we find fair agreement with the luminosity functions of UV/optically-selected QSOs, particularly at z ≲ 0.5. This is consistent with independent claims that the column density threshold that maximises the agreement between the X-ray unobscured and the UV/optically-selected broad-line QSO classes is log N_H/cm^-2 = 21.5 <cit.>. The XMM-XXL-N sample also includes a small fraction <cit.> of AGN with broad optical emission lines and evidence for X-ray obscuration higher than N_H ≈ 10^22 cm^-2 (see Fig. <ref>). This fraction is similar to previous estimates in the literature <cit.>. These sources likely include a fraction of type-1.8 or earlier Seyferts, which, at least in the local Universe, are associated on average with hydrogen columns in excess of N_H = 10^22 cm^-2 <cit.>. The class of Broad Absorption Line (BAL) QSOs is known to be X-ray underluminous for their UV emission, likely because of X-ray obscuration along the line-of-sight but also, in a few cases, because of an intrinsically weak X-ray continuum <cit.>. The fraction of BAL features among optical/UV-selected QSOs can be as high as 26% <cit.>.
<cit.> studied the incidence of BAL troughs among X-ray selected AGN in the XMM-LSS sample and reported a fraction of 7±5%, i.e. lower than for optical/UV-selected QSOs but comparable to the fraction of XMM-XXL-N broad-line QSOs with X-ray absorption N_H ≳ 10^22 cm^-2 <cit.>. Searching for BAL features in the optical spectra of the present X-ray sample is beyond the scope of this paper. Nevertheless, it is interesting that one of the type-1 and X-ray absorbed sources in Figure <ref> (α=02:05:41.3, δ=-04:37:07.4, z=0.7220) shows evidence for an absorption trough blueward of the Mg II broad emission line.

Finally, we make predictions on the AGN space density constraints that the eROSITA All Sky Survey <cit.> can deliver. We show that this mission will provide a nearly unbiased census of AGN, including heavily obscured systems approaching column densities N_H ≈ 10^24 cm^-2, at relatively low redshift z<0.5 and for accretion luminosities L_X ≳ 10^43 erg s^-1. This sample will be a unique resource for studying AGN demographics and population properties at z<0.5, i.e. in the last 5 Gyr of cosmic time. At higher redshifts eROSITA will place unique constraints on the bright end of the luminosity function (L_X ≳ 10^44 erg s^-1) and the fraction of moderately obscured sources among such luminous AGN.

§ ACKNOWLEDGEMENTS

We thank the anonymous referee for useful comments and suggestions. This work benefited from the THALES project 383549 that is jointly funded by the European Union and the Greek Government in the framework of the programme "Education and lifelong learning". We acknowledge support from the FONDECYT Postdoctorados 3160439 (JB) and the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics MAS (JB). Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
http://arxiv.org/abs/1704.08296v1
{ "authors": [ "A. Georgakakis", "M. Salvato", "Z. Liu", "J. Buchner", "W. N. Brandt", "T. Tasnim Ananna", "A. Schulze", "Yue Shen", "S. LaMassa", "K. Nandra", "A. Merloni", "I. D. McGreer" ], "categories": [ "astro-ph.HE", "astro-ph.GA" ], "primary_category": "astro-ph.HE", "published": "20170426185539", "title": "X-ray constraints on the fraction of obscured AGN at high accretion luminosities" }
Conducting polymers have become standard engineering materials, used in many electronic devices. Despite this, there is a lack of understanding of the microscopic origin of the conducting properties, especially at realistic device field strengths. We present simulations of doped poly(p-phenylene) (PPP) using a Su-Schrieffer-Heeger (SSH) tight-binding model, with the electric field included in the Hamiltonian through a time-dependent vector potential via Peierls substitution of the phase factor. We find that polarons typically break down within less than a picosecond after the field has been switched on, already for electric fields as low as around 1.6 mV/Å. This is a field strength common in many flexible organic electronic devices. Our results challenge the relevance of the polaron as charge carrier in conducting polymers for a wide range of applications.

§ INTRODUCTION

Conducting polymers have emerged as important materials for a number of very diverse applications such as, e.g., fuel cells, field effect transistors, organic light-emitting diodes (OLED), solar cells, computer displays, thermoelectric films, and microsurgical tools <cit.>. Materials made from conducting polymers have numerous engineering-friendly properties – they are often easily synthesised, air stable, solution processable, flexible, and environmentally friendly. In several of the applications mentioned above, the charge transport properties are of central importance <cit.>. Charge transport, and charge carrier mobility in particular, is one of the fundamental physical processes that determine device performance. Considering how widespread organic conductors are today in society – many people now use touch screens every day – the understanding of these charge carriers is surprisingly schematic, and existing models in use claim different physical origins of observed features <cit.>. Improved insight into the underlying physical processes is vital for improving device performance. Due to the large electron-phonon coupling in these one-dimensional systems, charge is thought to be transported mainly in the form of polarons and (possibly) bipolarons, in which trapped charges self-localise together with an associated structural distortion. The dynamics of these charge carriers is complex and depends on, e.g., temperature, electric field, disorder, and system dimensionality <cit.>. Here, we investigate theoretically charge transport as a function of electric field strength in the conducting polymer poly(p-phenylene) (PPP), also known as poly(1,4-phenylene). PPP was discovered in 1979 <cit.> and is recognised as a useful high-performance polymer due to its thermal <cit.> and chemical <cit.> stability and its electrical and optoelectronic properties <cit.>. PPP has a very simple structure consisting of interconnected phenyl rings and therefore also serves as an archetypal example of a conducting polymer. PPP can also be viewed as an ultrathin graphene nanoribbon. From a theoretical perspective, a lot of work on fundamental aspects of charge transport in conducting polymers has been performed on polyacetylene (PA) <cit.>. The present investigations employ a generalization of the Su-Schrieffer-Heeger (SSH) Hamiltonian <cit.>, with an electric field introduced through a time-dependent vector potential <cit.>. Additional complexity and various interactions not included in this model are certainly present in real conjugated polymer systems.
Though the size and morphology of polymer films in real devices are more complex than what is presented in this work (a single chain), a bottom-up approach is of special interest for understanding the underlying physical processes, determining the relation between the motion of the excitations and the electric field strength. In devices made from conducting polymers, the strength of the applied electric field varies depending on the type of device. In thermoelectric applications, the field is of the order of μV/Å <cit.>, whereas in optoelectronic devices, field strengths of typically a few mV/Å are common <cit.>. We have used these experimental values as a guideline in the present work. Specifically, we address field strengths in the range 1 μV/Å – 5 mV/Å. Our results indicate that polarons break down already at very moderate field strengths, common in devices. Also the bipolarons show significant instability in the upper range of the addressed electric field strengths. However, we would like to emphasize that our conclusions are drawn from the linear isolated chain. A chain which is not perfectly aligned with the electric field will of course feel a different electric field, which would change the break-down value of the field. As regards the isolated chain approximation, hopping between chains seems to further lower the stability of polarons <cit.>.

§ THEORETICAL MODEL AND COMPUTATIONAL DETAILS

The system addressed throughout this work is a PPP polymer consisting of 20 six-membered rings with periodic boundary conditions. To describe the effect of an electric field on the fundamental excitations in a doped polymer, we study the time evolution of the Hamiltonian

H_SSH = H_el + H_latt,
H_el = -∑_n(t_0-α(u_n+1-u_n))(e^iγ A c_n^† c_n+1+e^-iγ A c_n+1^† c_n),
H_latt = 1/2∑_n K(u_n+1-u_n)^2+1/2∑_n M u̇_n^2,

which is a generalization of the original SSH Hamiltonian to account for the presence of an electric field <cit.>. In the above Hamiltonian, H_el is the electronic part and H_latt is the lattice part. H_el contains, in addition to hopping terms, the electron-phonon coupling terms. The parameter t_0 is the hopping integral between nearest-neighbor carbon sites, α is the electron-phonon coupling constant for the π electrons, u_n is the displacement of the n-th CH group <cit.> from its equilibrium position, c_n^† (c_n) is the creation (annihilation) operator for a π electron on the n-th site, K is the elastic constant associated with the σ bonds, and M is the mass of the CH group. The electric field is introduced in terms of a time-dependent vector potential, A, via Peierls substitution of the phase factor <cit.>. The parameter γ is defined as γ=ea/ħc, with e, a, c being the absolute value of the electronic charge, the lattice constant, and the speed of light, respectively. The electric field, then, is given by the time derivative of the vector potential, i.e. E=-(1/c)∂ A/∂ t. We use periodic boundary conditions to avoid end-point effects. This also allows us to study the dynamics of the excitations for a longer period of time, without having to use an unnecessarily large system. The periodic boundary conditions imply that the position of the polaron (bipolaron), after it forms, can be anywhere along the chain, depending on the initial conditions.
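To make the structure of H_el concrete, the following minimal Python sketch assembles the single-particle electronic Hamiltonian on a ring of N sites with the Peierls phase factor. It is our own illustration rather than the production code used for the results below, and for brevity it treats a simple ring of sites (the PA-like backbone parameter set quoted in the next paragraph); the actual PPP connectivity of phenyl rings would modify the hopping graph but not the structure of the code.

import numpy as np

T0, ALPHA = 2.5, 4.1          # t_0 (eV) and alpha (eV/Angstrom)

def electronic_hamiltonian(u, A, gamma):
    """Single-particle matrix of H_el for len(u) sites on a ring (PBC).

    u     : displacements u_n (Angstrom)
    A     : vector potential at the current time
    gamma : Peierls coupling gamma = e*a/(hbar*c)
    """
    n = len(u)
    h = np.zeros((n, n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n                          # periodic boundary conditions
        t = T0 - ALPHA * (u[j] - u[i])           # bond-modulated hopping
        h[i, j] = -t * np.exp(1j * gamma * A)
        h[j, i] = -t * np.exp(-1j * gamma * A)
    return h

# Example: a dimerized ring at zero field shows the Peierls gap
N = 40
u0 = 0.03 * (-1.0) ** np.arange(N)               # alternating displacements
eps = np.linalg.eigvalsh(electronic_hamiltonian(u0, A=0.0, gamma=0.0))
print("Peierls gap (eV):", eps[N // 2] - eps[N // 2 - 1])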
If one removes the periodic boundary conditions, the polaron (bipolaron) localised in the middle of the chain has the lowest ground state energy <cit.>. A time-dependent electric field uniform in space can be described with a vector potential that is independent of the space coordinates (a scalar potential describing the same electric field will not have this property). Therefore, introducing the electric field through a vector potential in the form of a complex phase factor to the phase integral is compatible with periodic boundary conditions. In our simulations, we have set the parameters appearing in the Hamiltonian to the following values: t_0=2.5 eV, K=21 eV Å^-2, α=4.1 eV Å^-1, a=1.22 Å and M=1349.14 eV fs^2 Å^-2. These values are the same as those commonly used for polyacetylene (PA) <cit.>. Our choice is motivated by the fact that the backbone of PPP is essentially identical to that of PA. Additionally, this set of parameters not only reproduces the electronic properties of PPP in zero field (measured energy gaps in short PPP oligomers <cit.>), but it also allows us to compare our results with those of PA. The PA structure described by the same tight-binding model has the energy levels and band gap reported in Refs. <cit.>, while using the same set of parameters for PPP, as a first order of approximation, gives a different electronic structure and band gap in our calculations. In principle, the band gap can be fine-tuned by introducing additional parameters for the electron-phonon coupling in aromatic rings <cit.>. However, such fine-tuning can be expected to have a relatively small effect on the overall charge carrier dynamics and therefore we have not included it in the present work. More elaborate models – with, e.g., the electron-electron interaction included together with the already mentioned additional parameters for the electron-phonon coupling in aromatic rings – will be the subject of future work.

The static initial-state geometry of the charged polymer is determined with the electric field set to zero, allowing polarons and bipolarons to form. By minimizing the ground state total energy of the system using the Hellmann-Feynman force theorem within the adiabatic approximation with zero electric field, one arrives at

u_n=1/2(u_n+1+u_n-1)+2α/K∑_k'(ψ_k^*(n)ψ_k(n+1)-ψ_k^*(n-1)ψ_k(n)),

where ψ_k(n) is an eigenfunction at site n. The sum is over all occupied states. The static initial-state geometry is then computed self-consistently by starting with an initial guess for the displacements, and then repeatedly applying equation (<ref>) until convergence, using the constraint that the total length of the system is constant <cit.>. In order to study the dynamics of the geometry-optimised charged excitations, we switch on the electric field at t=0 and linearly raise it to its maximum during the first 50 fs of the simulation. With this gradual onset of the electric field, we avoid disturbances due to sudden switching <cit.>. In principle, switching on the electric field before optimizing the doped system could be used to address processes in which the electrons are, for example, hopping from another chain into the system. However, there are studies for polyacetylene showing that applying a field above a certain value before optimizing the structure leads to nonspecific patterns rather than forming polarons or bipolarons <cit.>. Such transport processes are deemed out of scope in the present work.
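In bond variables y_n = u_n+1 - u_n, the self-consistency condition above reduces to y_n = -(4α/K)(B_n - <B>), with B_n the bond order of the occupied orbitals; subtracting the mean <B> enforces the fixed-length constraint ∑_n y_n = 0. A schematic of the resulting loop is sketched below (the convergence tolerance, iteration cap, and the spin-degeneracy factor of 2 are our own choices; `electronic_hamiltonian` refers to the illustrative constructor sketched above).

import numpy as np

def relax_geometry(u, n_occ=None, k_el=21.0, alpha=4.1,
                   tol=1e-9, max_iter=10000):
    """Self-consistent zero-field geometry from the u_n update rule."""
    n = len(u)
    n_occ = n_occ if n_occ is not None else n // 2   # half filling by default
    for _ in range(max_iter):
        _, psi = np.linalg.eigh(electronic_hamiltonian(u, A=0.0, gamma=0.0))
        occ = psi[:, :n_occ]
        # Bond order B_n = 2 * sum_k' Re[psi_k*(n) psi_k(n+1)] (factor 2: spin)
        bond = 2.0 * np.real(np.sum(occ.conj() * np.roll(occ, -1, axis=0), axis=1))
        y = -(4.0 * alpha / k_el) * (bond - bond.mean())  # fixed total length
        u_new = np.concatenate(([0.0], np.cumsum(y[:-1])))
        u_new -= u_new.mean()
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new            # plain iteration; add mixing if it oscillates
    return u

Starting from a slightly dimerized guess and the appropriate occupation for PPP^+ or PPP^2+, such a loop converges to the static polaron or bipolaron geometry described in the text.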
The evolution of the system is computed by solving the time-dependent Schrödinger equation

iħ∂ψ(n,t)/∂ t=(-t_0+α(u_n+1-u_n))e^iγ Aψ(n+1,t)+(-t_0+α(u_n-u_n-1))e^-iγ Aψ(n-1,t),

and the equation of motion for the lattice displacement

Mü_n(t)=F_n(t)=-K(2u_n-u_n+1-u_n-1)+α∑_k'e^iγ A(ψ_k^*(n)ψ_k(n+1)-ψ_k^*(n-1)ψ_k(n))+H.c.

simultaneously. In the equation of motion, equation (<ref>), the forces F_n(t) are derived from the total potential, i.e. the sum of the electronic potential and the lattice harmonic potential. The coupled differential equations are solved numerically using the procedure of Ono and coworkers <cit.>, briefly outlined below. The solutions of the time-dependent Schrödinger equation are

ψ_k(t)=Te^-i/ħ∫_0^tĥ(t^')dt^'ψ_k(0),

where ĥ(t) is the single-particle Hamiltonian and T the time-ordering operator. To compute this expression numerically, time needs to be discretised. We choose the time step Δ t to be 0.025 fs, which is very small on the scale of the bare phonon frequency ω_Q=√(4K/M) of the system. With this choice, the variation of the Hamiltonian within a time step can be assumed to be negligible, and the electronic wave function can be written

ψ_k(t_j+1)=e^-iĥ(t_j)Δ t/ħψ_k(t_j).

By expanding the electronic wave function in terms of the eigenfunctions (ϕ_l) and eigenvalues (ϵ_l) of the single-particle Hamiltonian ĥ(t) at each time step, the wave function becomes

ψ_k(n,t_j+1)=∑_l [∑_pϕ_l^*(p) ψ_k(p,t_j)] e^-iϵ_lΔ t/ħϕ_l(n).

This set of coupled equations can be numerically integrated using the following algorithm:

u_n(t_j+1)=u_n(t_j)+u̇_n(t_j)Δ t,
u̇_n(t_j+1)=u̇_n(t_j)+F_n(t_j)/MΔ t,

resulting in the pertinent time-dependent electronic wave functions.

To analyze the motion of the excitations, the polaron and bipolaron positions and velocities need to be computed. The position of the excitation is defined by considering the center of mass x_c of the excess charge density ρ_n, taking the periodic boundary conditions into account <cit.>. Thus, x_c=Nθ/2π if <cosθ_n> ≥ 0 and <sinθ_n> ≥ 0; x_c=N(θ+π)/2π if <cosθ_n> ≤ 0; and x_c=N(θ+2π)/2π otherwise, where

<cosθ_n>=∑_nρ_n cos(2π n/N), <sinθ_n>=∑_nρ_n sin(2π n/N), θ=arctan(<sinθ_n>/<cosθ_n>),

and the excess charge density ρ_n is given by

ρ_n(t)=∑_k'|ψ_k(n,t)|^2-1.

From the computed x_c, the velocity is calculated as an average velocity over 400 time steps according to

v(t_j)=[x_c(t_j)-x_c(t_j-400Δ t)]/(400Δ t).

As already mentioned, our model does not include the effects of electron-electron correlations. These effects can be implemented through the on-site and off-site Hubbard terms, similar to what has been done for PA <cit.>. Alternatively, the density matrix renormalization group (DMRG) could be used for addressing the interactions <cit.>.

§ RESULTS AND DISCUSSION

Our simulations reveal that the behavior of the charge carriers depends sensitively on the electric field strength. There is a weak field regime with well-localised polaronic sonic or supersonic states, and a strong field regime, where the charge and lattice degrees of freedom decouple. In Table <ref> we summarise our computed critical field strengths. Interestingly, we find that polarons break down already at around 1.6 mV/Å. This is a field strength present in many flexible electronics devices. Below, we describe the polaron and bipolaron properties as a function of field strength in more detail. We begin with the static regime, i.e. when the electric field is zero. The static regime is the starting point in all our simulations. Schematic diagrams of the pertinent energy levels in the system are shown in Fig.
<ref>a. The leftmost panel shows the neutral case (PPP^0). Here, each level is doubly occupied up to the Fermi energy. H stands for the highest occupied level, and L for the lowest unoccupied level. The middle panel shows the singly doped case (PPP^+), in which one electron has been removed. Two polaronic levels appear in the gap, of which the lowest one is singly occupied and the higher one unoccupied. Finally, the rightmost panel shows the bipolaronic case (PPP^2+). Here, two electrons have been removed. Compared to the polaronic case, the emerging levels appear deeper in the gap and they are both unoccupied. The calculated densities of states (DOS) of the PPP^0, PPP^+ and PPP^2+ polymers are shown in Fig. <ref>b-d, respectively. We see that the energy gaps and the shapes of the DOS for PPP^0 and PPP^+ are the same as in previous studies <cit.>. For the bipolaronic case, we find that the gap narrows to 2.98 eV, while it is 3.44 eV for the singly charged case. Compared to the polaronic state, the bipolaron is more localised and deeper in the gap. In Fig. <ref>e-g, the charge densities of the three cases are visualised. For PPP^0, we show the charge density of the highest occupied level, whereas for PPP^+ and PPP^2+, we show the lowest in-gap level, which represents the (bi)polaronic states. The radii of the blue spheres at each site correspond to the charge distribution of this level at that site. We see that the polaron extends over about six rings, whereas the bipolaron is about four rings wide. Our calculations thus confirm that the wave function of the bipolaron is more localised than that of the polaron, which agrees with the observed deeper gap states in the DOS (Fig. <ref>d).

It is relevant to compare our results with a recent DFT study <cit.>, where a density functional with pure Hartree-Fock exchange in the long range and pure Perdew-Burke-Ernzerhof (PBE) exchange in the short range was employed. In these calculations, the dihedral angles were also optimised, giving an interring rotation of about 39^∘ in the neutral polymer, and an oscillating interring rotation ranging from around 22^∘ to 50^∘ inside the polaron. The long-range exchange tail allows a polaron to form. At the same time, shifts of the energy levels in and around the polaron due to the excess charge are accounted for. Due to these shifts, the gap states are no longer symmetrically positioned within the gap. The band gap in the neutral polymer in the DFT-based computation <cit.> appears to be about 6 eV, which is significantly larger than the band gap in our model (3.64 eV), than what experimental data for short isolated oligomers suggest <cit.>, and than what recent GW computations find (3.95 eV) <cit.>. The discrepancy between the typical experimental value of the gap in PPP (2.8 eV) and the band gap in our model (3.64 eV) warrants a few comments. The value 2.8 eV refers to the optical gap, and is determined from ultraviolet (UV) spectral measurements on a PPP film <cit.>, i.e., for PPP molecules not in vacuum. Interestingly, GW calculations show that when a PPP chain is adsorbed on graphene at a distance of 4.0 Å, the gap is renormalized from 3.95 eV to 2.7 eV <cit.>. Thus, it appears that the environment heavily influences the value of the gap, and calculations for an isolated PPP chain cannot be compared directly with measurements on a PPP film, solution, or matrix.
The optical spectrum of a conjugated polymer typically also has a wide absorption band due to intrinsic disorder, which leads to significant uncertainty in the determination of the optical band gap. Furthermore, the electronic gap, which is the gap our model refers to, is very difficult to measure directly and is larger than the optical gap due to excitonic interaction. Finally, we also mention that in our model, the interring dihedral angles are assumed to be zero in all cases, which may somewhat affect the band gaps. The visible changes in the bond lengths after doping the system in the mentioned DFT study <cit.> cover approximately six rings, which compares well with our results for the polaron. In the DFT study, the bipolaron could not be stabilised, due to Coulomb repulsion. In general, including Coulomb repulsion in the SSH model is expected to weaken the bipolaronic state or cause it to become unstable, depending on the Coulomb interaction parameters.

We now turn to the real time dynamics of polarons as a function of electric field strength. In Fig. <ref> we show the time evolution of the lattice distortions, excess charge and polaronic-state wave function of p-doped PPP after applying 0.25, 2, and 5 mV/Å electric fields. The weak field is chosen such that the charge and lattice distortions are coupled throughout the simulation time (up to 2 ps). The intermediate field is above the threshold for which the charge and the lattice distortions decouple. Finally, the strong field demonstrates how the polaron breaks down already at a very early stage in the simulation, just t=49 fs after initialisation. Figs. <ref>a and <ref>b show the time evolution of the displacements, y_n=u_n+1-u_n, and the excess charge ρ_n(t), defined in equation (<ref>), for a polaron in a 0.25 mV/Å electric field. The blue circles (red filled diamonds) show a snapshot of the calculated quantity at t=0 (t=130 fs). After 130 fs, the positively charged polaron has moved to the end of the chain in the direction of the applied field. Clearly, the displacement and the excess charge are coupled to one another and move as one entity. This remains the case for the entire simulation time, up to 2 ps. Figs. <ref>e and <ref>f are similar to Figs. <ref>a and <ref>b but for an electric field of 5 mV/Å. In this rather strong electric field the polaron charge cloud decouples from the lattice deformations. As one can see from panels <ref>e and <ref>f, after only 49 fs the excess charge density is spread through the polymer and is thus completely detached from the lattice deformations. For electric field strengths just above the stability threshold, here exemplified with the field strength 2 mV/Å, the polaron breaks down in a similar way, but it takes a longer time. This is illustrated in Figs. <ref>c-d. For long simulation times, we observe that polarons may occasionally reform and break down again. Figs. <ref>g-j illustrate the corresponding eigenstates, showing the distribution of the charges on the atomic sites and their movements after applying the electric field. In our nearest-neighbor tight-binding model, the electric field forces the charge to hop between the nearest sites along the direction of the field. The size of the sphere around each site is proportional to the wave function amplitude of that site at a given time. The excitation in panel <ref>h (weak field) continues to move as an entity along the chain even for very long simulation times. This picture does not hold for 2 and 5 mV/Å fields, where the polaron dissociates.
This is clear from panel <ref>i, where the polaron is delocalized after about 300 fs under a 2 mV/Å electric field. The localized charge spreads quickly (within only 49 fs) over a large part of the chain in a 5 mV/Å electric field. Similar to the charge dynamics in polyacetylene (PA) <cit.>, we observe charge-induced lattice deformation at later times after the initial polaron breakdown. Fig. <ref> exemplifies the polaron velocity at three different field strengths: 1 μV/Å, 0.1 mV/Å, and 0.5 mV/Å. For all these field strengths the charge and the lattice are not decoupled and therefore the definition of the velocity (equation (<ref>)) holds. As one can deduce from Fig. <ref>, the polaron velocity depends on the strength of the electric field in a highly nonlinear way. For the very weak field 1 μV/Å, which is of the same order as the electric field in a thermoelectric device, the energy pumped in via the electric field does not move the polaron along the polymer chain throughout our simulation (up to 2 ps). When the field strength is increased but kept in the sonic regime, we observe significant oscillations in the velocity of the polaron, i.e. the polaron reaches a maximum speed and slows down – in fact almost stops – repeatedly. This regime is the result of coupling between the charge and the acoustic phonons. For other fields in this regime, the velocities oscillate approximately around the same saturation velocity (not shown). Finally, in the supersonic regime, where the charge instead couples to the optical phonons <cit.>, the polaron velocity reaches a higher saturation value compared to the sonic regime. In our simulations, the saturation velocity is attained after about 130 fs. After this short period of time, the energy pumped into the system by the electric field dissipates to the lattice vibrations at the same rate, so that the acceleration of the polaron becomes zero. The dissipation is continuous due to the classical description of the lattice in the SSH model. The saturation velocity in the supersonic regime is about three times larger (around 0.7 Å/fs) than the maximum velocity in the sonic regime (around 0.25 Å/fs). For other fields in the supersonic regime, the saturation velocity is similar to the one shown. The corresponding velocities for PA <cit.> are about 0.45 Å/fs in the supersonic regime and 0.13 Å/fs in the sonic regime. Since the sound velocity, defined as v_s=(a/2)√(4K/M) within this model, is similar in PA and PPP (due to the use of the same set of parameters), we believe the difference originates from the different geometries of these two polymers. According to our simulations, the threshold electric field between the sonic and supersonic regimes lies between 0.15 mV/Å and 0.155 mV/Å. This is slightly larger than what has been theoretically predicted for PA (0.135–0.14 mV/Å) <cit.>. In addition, we have also calculated the potential, electronic, kinetic and total energy differences, i.e. the energy at time t subtracted by its counterpart at t=0, of the system. Our calculations show that the total energy of the system is increasing with time. This is expected, since the system is not connected to any external heat bath. In the weak field regime, the potential energy (1/2)∑_nK(u_n+1-u_n)^2 is in phase but with opposite sign to the electronic energy ∑_l'∑_k'C_k,lϵ_l. Therefore, the energy pumped into the system via the electric field goes through the electrons to the lattice.
In other words, in the weak field regime the potential energy increases and the electronic energy decreases, whereas in the strong field the electronic energy increases. Finally, we also briefly discuss our results for bipolarons. Studies for PA suggest that there exists a range of on-site Hubbard parameters <cit.> for which the bipolaronic state is more stable than two separate polarons. Therefore the bipolaronic state warrants consideration. In the present simulations, we neglect electron-electron correlations, as mentioned in the method section. Therefore, our bipolaron results may be considered as the limit of maximum coupling. Fig. <ref> shows the bipolaron dynamics for selected field strengths. In panels a and b, the lattice displacements and excess charge for PPP^2+ in a weak field are shown. Compared to the polaron case, the displacements around the charge are localised on fewer sites and are greater in magnitude. Figs. <ref>c-d illustrate bipolaron breakdown in a strong electric field (10 mV/Å). Here, the bipolaron dissociates after about 190 fs. Compared to the polaron case, a significantly stronger electric field is needed to decouple the charge from the lattice distortions. However, if we were to include electron-electron correlations in the model, the bipolaronic state would become less stable, and the break-down field would in any case be lower than in the present calculation. The bipolaron velocity for three representative field strengths is shown in Fig. <ref>. Just as for the polaron, the bipolaron velocity oscillates in the sonic regime (blue dots). Some initial oscillations are also discernible in the supersonic regime (green dots). The bipolaron velocity in the supersonic regime is about three times higher than for polarons.

§ CONCLUSIONS

In conclusion, we have demonstrated that polarons in fact appear to be relatively unstable in an electric field. For normal device field strengths, we find that they quickly dissolve and delocalize over the polymer. For electric field strengths just above the break-down field 1.6 mV/Å, the polarons occasionally localize and then dissolve again. The results are obtained by simulation of the SSH Hamiltonian for PPP, which is an archetypal conducting polymer. Also the bipolarons in our model break down at relatively moderate electric field strengths. The reliability of our model is ensured by comparing the PPP electronic structure for both the neutral and charged cases with available data. Our results challenge the common view that polarons are central for the charge transport in many types of devices based on conducting polymers. Polaronic states can be detected using a variety of techniques, e.g., time- and wavelength-resolved pump-probe measurements, Raman spectroelectrochemistry, single-molecule fluorescence spectroscopy (SMS), photoinduced absorption (PIA) spectroscopy, and electron spin resonance (ESR) spectroscopy. We hope that the results presented here will inspire additional such experiments, explicitly addressing how the signal is affected by an external electric field.
§ ACKNOWLEDGEMENT

We acknowledge financial support from Vetenskapsrådet (VR), The Royal Swedish Academy of Sciences (KVA), the Knut and Alice Wallenberg Foundation (KAW), Carl Tryggers Stiftelse (CTS), the Swedish Energy Agency (STEM), and the Swedish Foundation for Strategic Research (SSF). The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Center (NSC), Linköping University, the PDC Centre for High Performance Computing (PDC-HPC), KTH, and the High Performance Computing Center North (HPC2N), Umeå University.
http://arxiv.org/abs/1704.08519v1
{ "authors": [ "M. R. Mahani", "A. Mirsakiyeva", "Anna Delin" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170427115437", "title": "Breakdown of Polarons in Conducting Polymers at Device Field Strengths" }
[pages=1-last]main.pdf
http://arxiv.org/abs/1704.08273v1
{ "authors": [ "Ivy Bo Peng", "Roberto Gioiosa", "Gokcen Kestor", "Erwin Laure", "Stefano Markidis" ], "categories": [ "cs.DC" ], "primary_category": "cs.DC", "published": "20170426181057", "title": "Exploring the Performance Benefit of Hybrid Memory System on HPC Environments" }
Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, Shenyang, ChinaSchool of Materials Science and Engineering, University of Science and Technology of China, Hefei, China Department of Physics, Zhejiang Normal University, Jinhua, 321004, ChinaState Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, Taiyuan, Shanxi 030006, China Collaborative Innovation Center of Extreme Optics, Shanxi University,Taiyuan, Shanxi 030006, ChinaShenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, Shenyang, ChinaSchool of Materials Science and Engineering, University of Science and Technology of China, Hefei, ChinaThe corresponding author: [email protected] Department of Physics, Zhejiang Normal University, Jinhua, 321004, China 05.30.Jp, 71.36.+c, 03.75.Kk, 71.35.LkWe theoretically investigate a spinor polariton condensate under nonresonant pumping, based on driven-dissipative Gross-Pitaevskii equations coupled to the rate equation of a spin-unpolarized reservoir. We find the homogeneous polariton condensate can transit from the spin-unpolarized phase, where it is linearly polarized, to the spin-polarized phase, where it is elliptically polarized, depending on the cross-spin versus same-spin interactions and the linear polarization splitting. In both phases, we study elementary excitations using Bogoliubov approach, in a regime where the decay rate of total exciton density in reservoir crosses over from the slow to the fast limit. Depending on reservoir parameters, the global-phase mode can be either diffusive or gapped. By contrast, the relative-phase mode always possesses a gapped energy, undamped in the spin-unpolarized phase but weakly damped in the spin-polarized phase. In the spin-unpolarized phase, both modes are linearly polarized despite pumping and decay. However, in the spin-polarized phase, the mode polarization can be significantly affected by the reservoir and depends strongly on the circular polarization degree of the condensate. Interestingly, we demonstrate that the `ghost' branch of the Bogoliubov spectrum of the relative-phase mode can be visualized in the photoluminescence emission, distinguishable from that of the global-phase mode and thus allowing for experimental observation, when the spinor polariton condensate is elliptically polarized. Spinor polariton condensates under nonresonant pumping: Steady states and elementary excitations Zhaoxin Liang December 30, 2023 ================================================================================================§ INTRODUCTIONAt present, there are significant research interests in spinor exciton-polariton condensates in semiconductor microcavities <cit.>. Formed from strong couplings between excitons and photons, polaritons possess peculiar spin properties <cit.>: the J_z=± 1 (spin-up or spin-down) spin projections of the total angular momentum of excitons along the growth axis of the structure directly correspond to the right- and left-circularly polarized photons absorbed or emitted by the cavity, respectively. The motivation behind the interests in spinor polariton condensates is two-fold. First,a spinor polariton condensate is intrinsically nonequilibrium, with coherent and dissipative dynamics occurring on an equal footing <cit.>. This has resulted in numerous intriguing phenomena even in one-component polariton condensates <cit.>. 
Further accounting for the polariton spin degree of freedom and its dynamics has revealed exceptionally rich physics in polariton systems <cit.>, such as the stimulated spin dynamics of polaritons <cit.>, the spin Meissner effect <cit.>, the optical spin Hall effect <cit.>, spontaneous spin bifurcation <cit.>, and ferromagnetic-antiferromagnetic phase transitions <cit.> in spinor polariton Bose-Einstein condensates (BECs). Second, owing to the inherent spin multistabilities <cit.> and fast spin dynamics, semiconductor microcavities bring prospects of implementing novel solid-state optoelectronic spin-logic architectures <cit.>. A first demonstration of polariton condensates as optical switches in a state-of-the-art microcavity structure has been reported in recent experiments <cit.>. Building on this theoretical and experimental progress, polariton-based systems <cit.> may promise a novel platform for the realization and investigation of many-body systems.

Due to the spin structure of polaritons, the polariton-polariton interaction is strongly spin anisotropic <cit.>. In particular, the strength of the interaction constant g_12 for polaritons with opposite spins is typically much smaller than the interaction constant g for same-spin polaritons <cit.>. As a result, the polariton condensate is generally expected to be linearly polarized <cit.> in the absence of mechanisms that explicitly break the symmetry between the spin-up and -down components, such as a magnetic field or a spin-polarized pump. Still, a spontaneous magnetization of a spinor polariton condensate has been observed, which is induced by different loss rates of the linear polarizations <cit.>. Recently, several experiments have demonstrated the possibility to tune the interaction constants using a biexcitonic Feshbach resonance <cit.> in resonantly created polariton condensates and single quantum wells. Theoretically, this inspires an interesting question as regards the behavior of a spinor polariton BEC formed under non-resonant pumping when the relative strength between cross-spin versus same-spin interactions can be varied in a wide range, although tuning interactions in this case remains experimentally challenging.

In this work, we theoretically investigate a spinor polariton BEC under nonresonant excitations in the presence of a linear polarization energy splitting denoted by Ω, assuming tunability of the spin-anisotropic interactions. First, we study properties of a homogeneous polariton condensate having a uniform density n_0. We find that there exist two steady-state phases, a spin-unpolarized phase and a spin-polarized phase, depending on the parameter η=g_12/[g+2Ω/n_0]: for η<1, the polariton condensate is in the spin-unpolarized phase, exhibiting a pinned linear polarization; for η>1, the condensate transits into the spin-polarized phase, where it becomes elliptically polarized. For the latter, whether the circular polarization is left or right handed is spontaneously chosen by the system. We note that our model assumes a spin-independent reservoir resulting from rapid spin relaxation, and hence excludes the polarization transfer from a spin-polarized pump to the condensate as discussed in Refs. <cit.> using spin-polarized reservoir models. Moreover, the spontaneous creation of an elliptically polarized condensate in this work is induced by an interplay between interaction effects and the linear polarization energy splitting, and occurs for η>1, which is beyond the typical experimental regime at present. This is different from Ref.
<cit.> in the regime |g_12|≪ g, where the elliptical polarization is induced by different energy and dissipation rates of the linear polarizations. Second, we study elementary excitations in both phases of the spinor polariton condensate with the Bogoliubov-de Gennes (BdG) approach. Different from Refs. <cit.> in the context of the equilibrium case and Ref. <cit.> for resonantly created condensates, we study elementary excitations in a nonequilibrium polariton BEC, where the reservoir effect can significantly modify the energy and polarization of the collectively excited modes. Different from previous work (see, e.g., Ref. <cit.>), which assumes fast relaxation <cit.> of the incoherent reservoir density, here we consider the effect of the reservoir in the entire regime where the decay rate of the reservoir density crosses over from the slow to the fast limit compared to the system dynamics.

We present detailed results on the energy spectrum and polarization of the global-phase mode and the relative-phase mode, corresponding to the global- and relative-phase excitations of the spinor components of the condensate, respectively. For the energy spectra of excitations, we find that the global-phase mode is significantly affected by the reservoir. Depending on the reservoir parameters, the real part of the global-phase mode can be diffusive, gapped, or even gapless. By contrast, the relative-phase mode always has a gapped real part of the energy spectrum, being undamped in the spin-unpolarized phase but weakly damped in the spin-polarized phase. For the polarization of the modes, we find that in the spin-unpolarized phase, both the global- and relative-phase modes are linearly polarized at all momenta, one copolarizing and the other cross polarizing with the linearly polarized condensate. This is similar to the equilibrium case <cit.>, and is a consequence of symmetry properties of the BdG equations in the spin-unpolarized phase, regardless of the effects of dissipation. However, different from the equilibrium case, the mode polarization in the spin-polarized phase can be significantly affected by the reservoir, particularly at low momenta, and how it varies with momenta also depends strongly on the circular polarization degree of the condensate.

Finally, we discuss how to probe the presented Bogoliubov dispersions of the spinor polariton BEC. Exploiting the photoluminescence (PL) emission, we show that both the negative-energy ghost dispersion branches of the spin mode and the density mode can be directly observed in the PL spectrum, well distinguished from each other, when the polariton condensate is in the polarized phase. The rest of the paper is structured as follows. In Sec. <ref>, we present our theoretical model, based on which we solve for the stationary homogeneous state in Sec. <ref>. There, we identify two phases of the spinor polariton BEC and discuss polarization properties of the condensates. In Sec. <ref>, we present a comprehensive study on elementary excitations in both phases using the BdG theory, providing analytical and numerical results for the excitation spectrum and polarization properties of the excited modes. Then, in Sec. <ref> we discuss how to experimentally probe the presented excitation spectrum exploiting PL emission. We conclude with a summary in Sec. <ref>.

§ MODEL

We consider a spinor exciton-polariton BEC created under non-resonant pumping in the presence of a linear polarization energy splitting.
The order parameter for the condensate is described by a two-component time-dependent wavefunction Ψ=[ψ_1( r,t),ψ_2( r,t)]^T, written in the basis of left- and right-circularly polarized states <cit.>. For the excitonic reservoir, we assume that the spin relaxation of the reservoir is sufficiently fast so that the reservoir on the relevant time scales can be modeled by a scalar density denoted by n_R(t). This way, we consider a situation where the reservoir does not explicitly affect the condensate polarization. Note that in realistic situations, the spin relaxation time of the reservoir is typically finite (see, e.g., Ref. <cit.>), which can be accounted for with a reservoir model in terms of the occupations of the left- and right-circularly polarized components <cit.> rather than the total density.

The dynamics of the polariton condensate can be described by the driven-dissipative two-component Gross-Pitaevskii equations <cit.>, i.e.,

iħ∂ψ_1/∂ t = [-ħ^2∇^2/2m+g|ψ_1|^2+g_12|ψ_2|^2+g_Rn_R]ψ_1-Ωψ_2+iħ/2(R n_R-γ_C)ψ_1,
iħ∂ψ_2/∂ t = [-ħ^2∇^2/2m+g|ψ_2|^2+g_12|ψ_1|^2+g_Rn_R]ψ_2-Ωψ_1+iħ/2(R n_R-γ_C)ψ_2.

Here, m is the mass of the polariton, g (g_12) is the interaction constant between polaritons with same (opposite) spins, γ_C is the decay rate of condensate polaritons, and g_R characterizes the (spin-independent) interaction between the condensate and the reservoir. The coherent spin-flipping term Ω usually arises from the anisotropy-induced splitting of linear polarizations in the microcavity, as has been experimentally evidenced <cit.>. For concreteness, we assume Ω>0. We note that going beyond the Gross-Pitaevskii equations (<ref>) and (<ref>) to fully include the quantum and thermal fluctuations of the quantum field (e.g., via the Keldysh path-integral method <cit.>) is beyond the scope of this work. Furthermore, in writing the above equations, we have assumed the situation where the transverse-electric and transverse-magnetic splitting <cit.> vanishes.

We consider Eqs. (<ref>) and (<ref>) to be coupled to a (scalar) incoherent reservoir as mentioned earlier, described by a rate equation <cit.>, i.e.,

∂ n_R/∂ t=P-γ_R n_R-R(|ψ_1|^2+|ψ_2|^2)n_R.

Here, P is the rate of an off-resonant continuous-wave (cw) pump, γ_R^-1 describes the lifetime of reservoir polaritons, and R is the stimulated scattering rate of reservoir polaritons into the spinor condensate.

The properties of the spinor polariton BEC as stationary solutions to Eqs. (<ref>)-(<ref>) are determined by the rich interplay among the effects of pumping and decay, the linear polarization energy splitting, and the spin-dependent interaction. In typical polariton systems, one has g>0, g_12<0, and g≫ |g_12| <cit.>. In this case, polaritons driven by an off-resonant unpolarized pump tend to condense into a linearly polarized condensate <cit.>. Its polarization direction is random for Ω=0, as has been experimentally observed <cit.>, whereas a nonvanishing Ω will result in a pinning of the polarization <cit.>. Inspired by recent experimental progress in realizing tunable cross-spin interaction properties <cit.>, below we are theoretically interested in the polariton BEC governed by Eqs. (<ref>)-(<ref>) assuming g_12/g can be flexibly controlled in a wide range, i.e., extending also into regimes that remain experimentally challenging to access.

§ HOMOGENEOUS STEADY STATES

Our goal is to seek the spatially homogeneous stationary solutions to Eqs. (<ref>)-(<ref>).
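Numerically, such steady states can be found by simply propagating Eqs. (<ref>)-(<ref>) in time until the transients die out; for homogeneous states the kinetic term drops and the equations become ordinary differential equations. The sketch below is a toy integration with arbitrarily chosen parameter values (not those used for the figures); it uses a frozen-coefficient exponential step for the spinor to avoid spurious amplitude drift from the fast phase rotation. Because the relative-phase mode of the spin-unpolarized phase turns out to be undamped (Sec. <ref>), a spin-imbalanced seed would leave a persistent spin oscillation on top of the steady state, so a spin-balanced seed is used here.

import numpy as np
from scipy.linalg import expm

g, g12, gR = 1.0, 0.5, 2.0                 # interactions (arbitrary units)
R, gC, gRes, Om = 0.1, 0.5, 1.5, 0.05      # R, gamma_C, gamma_R, Omega
P = 3.0 * gRes * gC / R                    # pump at 3x threshold P_th
hbar, dt = 1.0, 1e-2

psi = np.array([0.1 + 0j, 0.1 + 0j])       # spin-balanced seed
nR = 0.0
for _ in range(200_000):
    n1, n2 = abs(psi[0])**2, abs(psi[1])**2
    # frozen-coefficient generator of Eqs. (1)-(2) at k = 0
    M = (-1j / hbar) * np.array([[g*n1 + g12*n2 + gR*nR, -Om],
                                 [-Om, g*n2 + g12*n1 + gR*nR]]) \
        + 0.5 * (R*nR - gC) * np.eye(2)
    psi = expm(M * dt) @ psi
    nR += dt * (P - gRes*nR - R*(n1 + n2)*nR)   # explicit Euler for Eq. (3)

n0 = (abs(psi)**2).sum()
sz = (abs(psi[0])**2 - abs(psi[1])**2) / n0
print(f"n0 = {n0:.3f}, s_z = {sz:+.3f}, eta = {g12 / (g + 2*Om/n0):.3f}")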
We will use the pseudospin representation of the condensate because the condensate pseudospin vector S⃗=1/2(Ψ^†·σ·Ψ), with σ_x,y,z the Pauli matrices, provides an experimentally measurable quantity <cit.>. Substituting an ansatz of the form Ψ_i( r,t)=e^-iμ_T t(ψ_1^0,ψ_2^0)^T=e^-iμ_T t(√(S+S_z),√(S-S_z))^T and n_R( r,t)=n^0_R into Eqs. (<ref>)-(<ref>), we obtain

Ṡ_x =-(γ_C-Rn_R)S_x+2δ g S_zS_y,
Ṡ_y =-(γ_C-Rn_R)S_y-2Ω S_z+2δ g S_zS_x,
Ṡ_z = -(γ_C-Rn_R)S_z-Ω S_y,
Ṡ = -(γ_C-Rn_R)S,
ṅ_R = P-{γ_R+2RS} n_R.

Here, we denote δ g=g-g_12. Setting the left side of Eqs. (<ref>)-(<ref>) to zero for stationary solutions, it follows from Eqs. (<ref>) and (<ref>) that a condensate can spontaneously form, i.e., S≠ 0, if P>P_th, with P_th=γ_Rγ_C/R and n^0_R=γ_C/R. For the condensate polarization, we see S_y=0 from Eq. (<ref>), but there exist two sets of stationary solutions for S_x and S_z according to Eq. (<ref>), i.e., S_z(Ω-δ g S_x)=0. Depending on whether S_z is zero or not in the stationary state, we identify two steady-state phases of the condensate, which we shall hereafter refer to as the spin-unpolarized phase and the spin-polarized phase, respectively:

*Spin-unpolarized phase: S_z=0, S_y=0, and S_x=S=n_0/2, corresponding to an X-linearly polarized condensate <cit.>, which exists under the condition g_12<g+2Ω/n_0, with n_0=(P-P_th)/(2γ_C). Correspondingly, we find μ_T=(g+g_12)n_0/2+g_Rn^0_R-Ω.

*Spin-polarized phase: S_z≠ 0, S_y=0, and S_x=Ω/(g-g_12), corresponding to an elliptically polarized condensate, which exists under the condition g_12>g+2Ω/n_0. Moreover, we obtain S_z=± (n_0/2)√(1-[2Ω/(g-g_12)n_0]^2), the sign being chosen randomly upon Bose condensation. Clearly, the circular polarization degree s_z=|S_z/S| is given by √(1-[2Ω/(g-g_12)n_0]^2). In addition, we find μ_T=gn_0+g_Rn^0_R.

We emphasize that, in our model, the form of Eqs. (<ref>)-(<ref>) maintains the symmetry between the spin-up and -down polaritons, contrasting with Refs. <cit.>, where such symmetry is explicitly broken in some fashion. The transition from the spin-unpolarized to the spin-polarized phase occurs at the critical interaction g_12=g+2Ω/n_0. There, if a perturbation δ S_z is applied to the system Hamiltonian in the form of λσ_z e^i(kx-ω t)+η t, with η→ 0^+, the linear response of the system, i.e., the spin density response function, can be obtained as χ_s ∝ 1/(g-g_12+2Ω/n_0), which diverges at the phase transition. We emphasize that while the spinor polariton condensate possesses a critical condition formally resembling the equilibrium counterpart <cit.>, there is a fundamental difference due to the open-dissipative nature of our system, where the condensate density n_0 is determined by the balance of pumping and decay.

§ ELEMENTARY EXCITATIONS

The goal of this section is to investigate elementary excitations in the above two phases using the Bogoliubov approach <cit.>. We start from the standard decomposition of the wave function (ψ_1,ψ_2,n_R)^T into the steady-state solution (ψ_1^0,ψ_2^0,n_R^0)^T and a small fluctuating term <cit.>, i.e.,

[ ψ_1( r, t); ψ_2( r, t) ] = e^-iμ_T t[ ψ_1^0; ψ_2^0 ][ 1 +∑_ k{[ u_1 k; u_2 k ] e^i( k r-ω t) +[ v^*_1 k; v^*_2 k ] e^-i( k r-ω^* t)}],

and

n_R( r, t)=n_R^0[1+∑_k{w_ ke^i( k r-ω t)+w^*_ ke^-i( k r-ω^* t)}].

Substituting Eqs. (<ref>) and (<ref>) into Eqs. (<ref>)-(<ref>) and retaining only first-order terms of the fluctuations, we obtain at each momentum k the Bogoliubov-de Gennes (BdG) equation ℒ_ k𝒰_ k=ħω_ k𝒰_ k.
Here, 𝒰_ k=(u_1 k,v_1 k,u_2 k,v_2 k,w_ k)^T and the operator ℒ_ k in matrix form reads

ℒ_ k= ([ h_1, gn_1^0, g_12n_2^0-Ω√(n^0_2/n^0_1), g_12n_2^0, g_Rn_R^0+i/2Rn_R^0; -gn_1^0, -h_1, -g_12n_2^0, -g_12n_2^0+Ω√(n^0_2/n^0_1), -g_Rn_R^0+i/2Rn_R^0; g_12n_1^0-Ω√(n^0_1/n^0_2), g_12n_1^0, h_2, gn_2^0, g_Rn_R^0+i/2Rn_R^0; -g_12n_1^0, -g_12n_1^0+Ω√(n^0_1/n^0_2), -gn_2^0, -h_2, -g_Rn_R^0+i/2Rn_R^0; -iRn_1^0, -iRn_1^0, -iRn_2^0, -iRn_2^0, -i[Rn_0+γ_R] ]),

with h_1(2)=ε_k^0+g_Rn_R^0+2gn_1(2)^0+g_12n_2(1)^0-μ_T, where ε_k^0=ħ^2k^2/2m. Solutions to Eq. (<ref>) provide full specifications of the elementary excitations in the spinor polariton BEC.

As a consequence of dissipation, the Liouvillian matrix is non-Hermitian, and Eq. (<ref>) yields five complex dispersion branches: ω_j=Re(ω_j)+iIm(ω_j) (j=1,2,3,4,5), where the imaginary part represents the damping spectrum. Below, we present a detailed analysis of the energy of the Bogoliubov excitation modes, and of their polarizations, which may be accessed experimentally <cit.>, for the spin-unpolarized phase (see Sec. <ref>) and the spin-polarized phase (see Sec. <ref>), respectively. We will focus, in particular, on the different behavior of the modes in the two phases, and on the effects of the reservoir in the crossover regime from γ_R≪γ_C to γ_R≫γ_C.

§.§ Elementary excitations from the linearly polarized condensates

For the linearly polarized condensate formed in the regime g_12<g+2Ω/n_0, we substitute the corresponding stationary values (see Sec. <ref>) into the BdG Eq. (<ref>), giving

ℒ_ k= ([ h_1, gn_0/2, g_12n_0/2-Ω, g_12n_0/2, g_Rn_R^0+i/2Rn_R^0; -gn_0/2, -h_1, -g_12n_0/2, -g_12n_0/2+Ω, -g_Rn_R^0+i/2Rn_R^0; g_12n_0/2-Ω, g_12n_0/2, h_2, gn_0/2, g_Rn_R^0+i/2Rn_R^0; -g_12n_0/2, -g_12n_0/2+Ω, -gn_0/2, -h_2, -g_Rn_R^0+i/2Rn_R^0; -iRn_0/2, -iRn_0/2, -iRn_0/2, -iRn_0/2, -i[Rn_0+γ_R] ]),

with h_1=h_2=ε_k^0+gn_0/2+Ω.

Our main findings for elementary excitations in this case are as follows: (i) The global-phase mode is strongly affected by the reservoir, such that its dispersion can be diffusive, gapped, or gapless, depending on γ_C/γ_R. By contrast, the relative-phase mode is always undamped with a gapped real energy; (ii) The global-phase mode copolarizes with the condensate while the relative-phase mode cross-polarizes with it, similar to the equilibrium condensate <cit.>, despite the effects of pumping and decay. In the following, we will first present our results on the excitation spectrum for various reservoir parameters, before discussing the polarization of the collective modes.

§.§.§ Excitation spectrum

Two limiting cases.– In the limit of vanishing reservoir, when n^0_R≈ 0 and R≈ 0, by disregarding the reservoir effect in Eq. (<ref>) at the leading order we find

ℒ_ k= ([ h_1, gn_0/2, g_12n_0/2-Ω, g_12n_0/2; -gn_0/2, -h_1, -g_12n_0/2, -g_12n_0/2+Ω; g_12n_0/2-Ω, g_12n_0/2, h_2, gn_0/2; -g_12n_0/2, -g_12n_0/2+Ω, -gn_0/2, -h_2 ]).

The Bogoliubov excitation spectra are found as

ħω_D = √(ε_k^0[ε_k^0+(g+g_12)n_0]),
ħω_S = √(ε_k^0(ε_k^0+δ g n_0+4Ω)+2Ω(δ g n_0+2Ω)),

where δ g=g-g_12, and ω_D and ω_S correspond to the energies of the global-phase and relative-phase excitations, respectively. The global-phase mode is gapless at k=0, while the relative-phase mode exhibits an energy gap √(2Ω[δ g n_0+2Ω]) due to the linear polarization splitting (Ω≠ 0), which closes at the critical interaction strength g_12=g+2Ω/n_0. For small momenta, the global-phase mode exhibits a linear dispersion while the relative-phase mode has an effective mass and a quadratic dispersion ∼k^2. We note that Eqs.
(<ref>) and (<ref>) are formally similar to the excitation spectra of an atomic coupled spinor BEC in equilibrium (see, e.g., Refs. <cit.>), except that n_0 here is determined by the open-dissipative nature of the polariton fluid.

In the opposite limit of a fast reservoir, when 1/γ_R is the shortest time scale, an adiabatic elimination of the fast dynamics of the reservoir at the lowest order of perturbation theory reduces Eq. (<ref>) to the following matrix form:

ℒ_ k= ([ h_1-gΓ/2R-(i/4Γ+ω), gn_0/2-gΓ/2R-i/4Γ, g_12n_0/2-gΓ/2R-Ω-i/4Γ, g_12n_0/2-gΓ/2R-i/4Γ; -gn_0/2+gΓ/2R-i/4Γ, -h_1+gΓ/2R-(i/4Γ+ω), -g_12n_0/2+gΓ/2R-i/4Γ, -g_12n_0/2+gΓ/2R+Ω-i/4Γ; g_12n_0/2-gΓ/2R-Ω-i/4Γ, g_12n_0/2-gΓ/2R-i/4Γ, h_2-gΓ/2R-(i/4Γ+ω), gn_0/2-gΓ/2R-i/4Γ; -g_12n_0/2+gΓ/2R-i/4Γ, -g_12n_0/2+gΓ/2R+Ω-i/4Γ, -gn_0/2+gΓ/2R-i/4Γ, -h_2+gΓ/2R-(i/4Γ+ω) ]),

with Γ=Rn_0γ_C/(γ_R+Rn_0). The resulting excitation spectra are found as

ħω_D = -iΓ/2+√(ε_k^0[ε_k^0+(g+g_12)n_0]-Γ^2/4),
ħω_S = √(ε_k^0(ε_k^0+δ g n_0+4Ω)+2Ω[δ g n_0+2Ω]).

Comparison of Eqs. (<ref>) and (<ref>) shows that ω_S(k) of the relative-phase mode stays the same in the two limits. However, ω_D(k) is strongly modified by the reservoir [see Eqs. (<ref>) and (<ref>)], with the low-lying global-phase mode transforming from a sound mode to a diffusive mode <cit.>, whose energy is purely imaginary with an imaginary part behaving as ∼k^2 [see Eq. (<ref>)].

Generic case.– For an arbitrary parameter γ_R/γ_C, the solutions to Eq. (<ref>) can be exactly cast into the following form:

[(ħω)^2-(ħω_S)^2]×[(ħω)^3+i(Rn_0+γ_R)(ħω)^2-[Rn_0γ_C+(ħω_D)^2]ħω+ic(k)]=0.

Here, ω_D and ω_S are given by Eqs. (<ref>) and (<ref>), and

c(k)=-(Rn_0+γ_R)(ħω_D)^2+gn_0γ_Cε_k^0,

which tends to zero for k→0. We immediately see from Eq. (<ref>) that:

(1) There always exist two real eigen-energy solutions ω=±ω_S for the relative-phase modes, regardless of the value of γ_R/γ_C, i.e., the relative-phase modes are not damped, due to a decoupling from both the global-phase mode and the reservoir modes. Indeed, as confirmed by the numerical results in Figs. <ref>(a) and <ref>(b), the relative-phase modes (black curves) always exhibit a gapped real energy spectrum and display qualitatively similar features despite variations of γ_R/γ_C. The size of the gap can, however, be tuned via variations of g_12/g [cf. Figs. <ref>(a1) and <ref>(a2)], as expected from Eq. (<ref>).

(2) By contrast, the global-phase modes and the reservoir mode according to Eq. (<ref>) display different behavior depending on γ_R/γ_C. In fact, being decoupled from the relative-phase mode, the global-phase and reservoir modes are expected to exhibit similar properties as their counterparts in the one-component polariton condensate <cit.>, except for a modification by the interspecies coupling g_12 in ω_D.

At k=0, Eq. (<ref>) becomes (ħω)^3+i(Rn_0+γ_R)(ħω)^2-Rn_0γ_Cħω=0, yielding three solutions: ω^(0)_k=0=0 and

ω^±_k=0=-i(Rn_0+γ_R)/2±√(Rn_0γ_C-1/4(Rn_0+γ_R)^2).

Obviously, when γ_C>(Rn_0+γ_R)^2/(4Rn_0), ω^±_k=0 have finite real components representing an energy gap, i.e., E_Δ=(1/2)√(4Rn_0γ_C-(Rn_0+γ_R)^2). In this case, ω^±_k=0 correspond to gapped global-phase modes decaying at a rate [Rn_0+γ_R]/2. That the global-phase mode becomes massive has also been discussed in Ref. <cit.> for the one-component polariton BEC, taking into account the effect of the bottleneck polaritons. However, when γ_C≤ (Rn_0+γ_R)^2/(4Rn_0), in particular when γ_R≫γ_C∼ n_0R, ω^±_k=0 become purely imaginary, with ω^+_k=0≈ -i3γ^2_C/(2γ_R) and ω^-_k=0≈ -iγ_R.
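In practice, the five branches plotted in Fig. <ref> follow from a direct numerical diagonalization of the 5×5 matrix ℒ_k at each momentum. A compact sketch of such a calculation (with toy parameter values in units where ħ = m = 1, not the parameters used for the figures; γ_C enters implicitly through the steady-state relation Rn_R^0 = γ_C) could read as follows.

import numpy as np

def L_k(k, n0, nR0, g, g12, gR, R, gamma_R, Omega):
    """BdG matrix of the X-linearly polarized steady state (hbar = m = 1)."""
    ek = 0.5 * k**2
    h = ek + 0.5*g*n0 + Omega
    a, b = 0.5*g*n0, 0.5*g12*n0
    cR = gR*nR0 + 0.5j*R*nR0            # condensate-reservoir coupling
    r = -0.5j * R * n0
    return np.array([
        [ h,          a,          b - Omega,  b,          cR],
        [-a,         -h,         -b,         -b + Omega, -np.conj(cR)],
        [ b - Omega,  b,          h,          a,          cR],
        [-b,         -b + Omega, -a,         -h,         -np.conj(cR)],
        [ r,          r,          r,          r,         -1j*(R*n0 + gamma_R)],
    ])

pars = dict(n0=1.0, nR0=0.5, g=1.0, g12=0.3, gR=2.0,
            R=0.4, gamma_R=0.2, Omega=0.05)
for k in (0.0, 0.5, 1.0, 2.0):
    w = np.linalg.eigvals(L_k(k, **pars))
    print(f"k = {k:3.1f}:", np.round(w[np.argsort(-w.real)], 3))

As a consistency check, sending R and n_R^0 to zero collapses the real parts of the eigenvalues onto ±ω_D and ±ω_S of Eqs. (<ref>) and (<ref>).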
While ω^-_k=0 corresponds to the fast-decaying reservoir mode, both ω^+_k=0 and ω^(0)_k=0 are associated with the diffusive Goldstone modes.

At large momenta k≫ a^-1 (a=ħ/√(mgn_0) is the usual coherence length of the BEC, and we introduce E_0=ħ^2/ma^2), Eq. (<ref>) can be approximately solved as ω^(0)_ka≫ 1∼ -i(Rn_0+γ_R) and ω^±_ka≫ 1∼±ω_D-ig_Rn_0γ_Cħ^2ϵ_k^0/ω_D^2, the former corresponding to the reservoir mode while the latter correspond to the global-phase modes. Since the imaginary part of ω^±_k→∞ scales as ∼1/k^2 and thus vanishes at k→∞, we see that the global-phase modes with large momentum behave universally as free particles without being damped, independent of γ_R and γ_C. Moreover, comparing the damping rate of the reservoir modes at k=0 and k→∞, we see that the reservoir modes exhibit a similar damping rate when γ_R≫γ_C∼ Rn_0, as opposed to the case γ_R≪γ_C∼ Rn_0, where the damping rate of the reservoir mode becomes clearly k dependent, increasing from 0 (at k=0) to a value ∼Rn_0 (at k→∞). Thus, whereas the excitation modes almost decouple from each other in the fast-reservoir limit, the density excitation significantly mixes with the reservoir if γ_C>γ_R instead.

The above analysis is corroborated by the numerical results of the excitation spectra for various parameters, as summarized in Fig. <ref>. For the global-phase modes (see red curves), we observe the characteristic Goldstone branch when γ_R≫γ_C [see Figs. <ref>(a1) and <ref>(a4)], which disappears for γ_R≪γ_C, when an energy gap opens instead [see Figs. <ref>(a3) and <ref>(a5)]. Physically, the existence of the Goldstone mode in the presence of a fast-decaying reservoir can also be understood from the following perspective: when γ_R≫γ_C, the (fast) reservoir is able to adiabatically follow the slow rotation of the condensate phase across the sample, i.e., adiabatically follow the Goldstone mode. At large momenta, the global-phase modes are seen to exhibit Re(ω_k)∼ k^2 and a suppressed damping for all parameters, consistent with the earlier discussion. Interestingly, in the crossover regime γ_R∼γ_C, where the reservoir effect strongly influences the global-phase excitation, we observe the emergence of a dispersion (real part) that possesses a maxon-roton-like character [see Fig. <ref>(a2)], i.e., a softening of an excitation mode occurs at intermediate momentum.

Stability analysis.– For γ_R≪γ_C∼ Rn_0 and g_12<0, a spatially homogeneous spinor polariton condensate can become dynamically unstable, due to an exponential growth of reservoir modulations with time: the reservoir mode shows Im[ω_k]>0 at small momenta [see black curves in Fig. <ref>(b5)]. Such a dynamical instability disappears if g_12>0 is taken instead [see Fig. <ref>(b3)]. To understand this, we seek the condition for a homogeneous spinor condensate to be stable in the considered regime by solving Eq. (<ref>) at small momenta. Recall that in this case the reservoir mode has ω=0 at k=0; therefore, we expect ω(k) to be small at k→ 0. Retaining only the term linear in ω in Eq. (<ref>), we find ħω_k→ 0≈ ic(k)/(Rn_0γ_C+ω_D^2), with c(k)/E^2_0≈ -2(ka)^2[(1+g_12/g)(Rn_0+γ_R)-γ_C]. Thus, stability of the low-lying reservoir mode requires c(k)≤ 0, giving the stability condition

[(g+g_12)/g][(Rn_0+γ_R)/γ_C]>1.

Obviously, for the parameter regime Rn_0∼γ_C≫γ_R, the above criterion is sustained for g_12>0 but is violated when g_12<0, explaining what we see in Fig. <ref>(b5).
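The criterion above is cheap to evaluate; a few-line check (toy numbers again) makes the role of the sign of g_12 explicit.

def stable(g, g12, R, n0, gamma_R, gamma_C):
    # Stability of the low-lying reservoir mode, i.e. c(k -> 0) <= 0
    return ((g + g12) / g) * (R*n0 + gamma_R) / gamma_C > 1.0

common = dict(g=1.0, R=0.5, n0=1.0, gamma_R=0.05, gamma_C=0.5)
print(stable(g12=+0.3, **common))   # True:  stable for g12 > 0
print(stable(g12=-0.3, **common))   # False: unstable for g12 < 0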
For g_12<0, c(k) changes its sign from positive to negative when the momentum increases beyond k_R≠0, where k_R is determined by c(k_R)=0, giving k_Ra≈√(2[γ_C/(Rn_0+γ_R)-(g+g_12)/g]); i.e., the stability condition is violated at momenta 0<k<k_R [see Fig. <ref>(b5)], leading to growing perturbations.

§.§.§ Polarization of quasiparticles

Previous studies of equilibrium linearly polarized condensates (see, e.g., <cit.>) have shown that both the global- and relative-phase modes are linearly polarized, one copolarized and the other cross polarized with the condensate. In the presence of pumping and decay, as mentioned earlier, the operator ℒ_k governing the BdG equation for a non-equilibrium condensate becomes non-Hermitian and involves coupling to the reservoir excitations [see Eq. (<ref>)]. However, the symmetry properties of the ℒ_k matrix in the spin-unpolarized phase (see details in Appendix <ref>) dictate the following exact relations: for the global-phase mode, we have

u_1k=u_2k,  v_1k=v_2k,

and for the relative-phase mode, one has

u_1k=-u_2k,  v_1k=-v_2k.

Thus, we conclude that for an X-linearly polarized open-dissipative condensate, the global-phase mode remains copolarized with the condensate while the relative-phase mode is cross polarized with it. For later reference, here we also present the analytical expressions for the Bogoliubov coefficients obtained by solving Eq. (<ref>). For the global-phase mode, we obtain

u_1k = {ħω_D+[ϵ_k^0+(g+g_12)n_0]}/√(8ħω_D[ϵ_k^0+(g+g_12)n_0]),
v_1k = {ħω_D-[ϵ_k^0+(g+g_12)n_0]}/√(8ħω_D[ϵ_k^0+(g+g_12)n_0]),

and for the relative-phase mode, we find

u_1k = [(g-g_12)n_0/2]/√((g-g_12)^2n_0^2/2-2[ϵ_k^0+n_0(g-g_12)/2+2Ω-ħω_S]^2),
v_1k = {ħω_S-[ϵ_k^0+n_0(g-g_12)/2+2Ω]}/√((g-g_12)^2n_0^2/2-2[ϵ_k^0+n_0(g-g_12)/2+2Ω-ħω_S]^2).

As |u_1k|=|u_2k| and |v_1k|=|v_2k| hold for both modes, we plot |u_1k|^2/|v_1k|^2 in Figs. <ref>(c1)-<ref>(c5) for various reservoir parameters. For γ_R≫γ_C, we see that the eigenvectors of the global-phase mode show the usual infrared divergence v_1k→ k^-1/2 and u_1k→ -v_1k at k→0, giving rise to |u_1k/v_1k|=1 at k=0. By contrast, for γ_R/γ_C≪1, when the global-phase modes become gapped, we see that |u_1k/v_1k|>1 for momenta k→0 [see red curves in Figs. <ref>(c3) and <ref>(c5)]. Similar behavior is also observed in the plot of |u_2k/v_2k| for the gapped relative-phase mode at small momenta [see black curves in Fig. <ref>(c)].

§.§ Elementary excitations of the elliptically polarized condensate

We now turn to the spin-polarized phase, where the condensate is elliptically polarized in the regime g_12>g+2Ω/n_0. The BdG Eq. (<ref>) now takes the form (we choose S_z>0 in our calculation)

ℒ_k = ([ h_1,  gn_0(1+Δ)/2,  g_12n_0(1-Δ)/2-Ω√((1-Δ)/(1+Δ)),  g_12n_0(1-Δ)/2,  g_Rn_R^0+(i/2)Rn_R^0;
 -gn_0(1+Δ)/2,  -h_1,  -g_12n_0(1-Δ)/2,  -g_12n_0(1-Δ)/2+Ω√((1-Δ)/(1+Δ)),  -g_Rn_R^0+(i/2)Rn_R^0;
 g_12n_0(1-Δ)/2-Ω√((1+Δ)/(1-Δ)),  g_12n_0(1-Δ)/2,  h_2,  gn_0(1-Δ)/2,  g_Rn_R^0+(i/2)Rn_R^0;
 -g_12n_0(1-Δ)/2,  -g_12n_0(1-Δ)/2+Ω√((1+Δ)/(1-Δ)),  -gn_0(1-Δ)/2,  -h_2,  -g_Rn_R^0+(i/2)Rn_R^0;
 -i(n_0/2)(1+Δ)R,  -i(n_0/2)(1+Δ)R,  -i(n_0/2)(1-Δ)R,  -i(n_0/2)(1-Δ)R,  -i(Rn_0+γ_R) ]),

with Δ=√(1-[2Ω/((g-g_12)n_0)]^2), h_1=ε_k^0+gn_0Δ+g_12n_0(1-Δ)/2, and h_2=ε_k^0-gn_0Δ+g_12n_0(1+Δ)/2.

Compared to the spin-unpolarized phase, our main findings on the elementary excitations of an elliptically polarized condensate are as follows: (i) The relative-phase mode becomes weakly damped.
(ii) The polarization properties of the modes are determined by the interplay among the reservoir parameter γ_R/γ_C, the momentum k of the mode, and the circular polarization degree s_z=2S_z/n_0 of the condensate. In the following, we present a detailed analysis of the energy spectrum and of the polarization of the modes in the regimes of fast and slow reservoirs, as well as for elliptically polarized condensates with different circular polarization degrees s_z.

§.§.§ Excitation spectrum

Figures <ref>(a) and <ref>(b) illustrate the excitation spectra for various parameters. Different from the spin-unpolarized phase, the relative-phase modes in the spin-polarized phase show a very weak damping [see black curves in Fig. <ref>(b) and insets], along with a gapped (real-part) energy spectrum. In addition, the global-phase modes and the reservoir modes exhibit features similar to those of the spin-unpolarized phase.

To gain more insight into the excitation spectra illustrated in Figs. <ref>(a) and <ref>(b), consider first the limit of a vanishing reservoir, where Eq. (<ref>) reduces to

ℒ_k = ([ h_1,  gn_0(1+Δ)/2,  g_12n_0(1-Δ)/2-Ω√((1-Δ)/(1+Δ)),  g_12n_0(1-Δ)/2;
 -gn_0(1+Δ)/2,  -h_1,  -g_12n_0(1-Δ)/2,  -g_12n_0(1-Δ)/2+Ω√((1-Δ)/(1+Δ));
 g_12n_0(1-Δ)/2-Ω√((1+Δ)/(1-Δ)),  g_12n_0(1-Δ)/2,  h_2,  gn_0(1-Δ)/2;
 -g_12n_0(1-Δ)/2,  -g_12n_0(1-Δ)/2+Ω√((1+Δ)/(1-Δ)),  -gn_0(1-Δ)/2,  -h_2 ]).

The eigen-energies are found as

(ħω_S/D)^2 = ε_k^0(ε_k^0+g_12n_0)+β/2-2Ω^2 ± √([ε_k^0(g_12-2g)n_0+β/2]^2+Ω^2[ħ^4k^4(3g-g_12)/(m^2(g_12-g))+2ħ^2k^2n_0(2g-g_12)/m-2β]+4Ω^4),

where β=(g-g_12)^2n_0^2 and, as before, ħω_S and ħω_D represent the energies of the relative-phase mode and of the global-phase mode, respectively. As in the spin-unpolarized phase, the global-phase mode is gapless, while the spin mode has an energy gap ħω_S(k→0)=√((g-g_12)^2n_0^2-4Ω^2).

In the opposite limit γ_R/γ_C≫1 [see Figs. <ref>(a1) and <ref>(a2), <ref>(b1) and <ref>(b2)], Eq. (<ref>) at the lowest order in 1/γ_R reads

ℒ_k = ([ h_1,  g̃n_1^0,  g̃_12n_2^0-Ω√(n_2^0/n_1^0),  g̃_12n_2^0;
 -g̃^*n_1^0,  -h_1^*,  -g̃_12^*n_2^0,  -g̃_12^*n_2^0+Ω√(n_2^0/n_1^0);
 g̃_12n_1^0-Ω√(n_1^0/n_2^0),  g̃_12n_1^0,  h_2,  g̃n_2^0;
 -g̃_12^*n_1^0,  -g̃_12^*n_1^0+Ω√(n_1^0/n_2^0),  -g̃^*n_2^0,  -h_2^* ]),

with g̃=g-γ(g+iR/2), g̃_12=g_12-γ(g+iR/2), γ=γ_C/(Rn_0+γ_R), h_1=ε_k^0+g_12n_2^0+[g-γ(g+iR/2)]n_1^0-gn_2^0, and h_2=ε_k^0+g_12n_1^0+[g-γ(g+iR/2)]n_2^0-gn_1^0. The eigen-energy is a solution of the equation

[(ħω)^2-a^2]×[(ħω-ib)^2-c^2]+d=0,

with

b = -n_0Rγ/2,
a^2 = [ε_k^0+(g_12-g)n_0][ε_k^0+(g_12-g)(n_1^0-n_2^0)],
c^2 = -gn_0ε_k^0γ+(ε_k^0)^2-n_0^2R^2γ^2/4+ε_k^0(g_12-g)(n_1^0-n_2^0)+ε_k^0n_0(g+g_12),
d = -4ε_k^0Ω^2[n_0(g_12-g)+ε_k^0](n_1^0-n_2^0)/n_0.

At k=0, Eq. (<ref>) admits simple solutions: (i) the solutions ω_S=±√((g_12-g)^2n_0^2-4Ω^2) correspond to the energy gap of the relative-phase modes [see black curves in Figs. <ref>(a1) and <ref>(a2), <ref>(b1) and <ref>(b2)]; (ii) the solutions ω_D^(1)=0 and ω_D^(2)=-in_0Rγ are associated with the global-phase excitation at k=0, near which the imaginary part scales as ∼k^2 [see red curves in Figs. <ref>(b1) and <ref>(b2)], indicating the existence of a diffusive Goldstone mode [see also red curves in Figs. <ref>(a1) and <ref>(a2)]. Noticing that the energy gap of the relative-phase mode is formally the same as its equilibrium counterpart [see Eq. (<ref>) at k=0], we anticipate that the real part of the energy of the relative-phase mode is qualitatively similar for all reservoir parameters, as in the linearly polarized condensate, consistent with what we see in Fig. <ref>(a).

For an arbitrary value of γ_R/γ_C, one can show that Eq.
(<ref>) yields the following equation:

[(ħω)^2-(ħω_D)^2][(ħω)^2-(ħω_S)^2][i(Rn_0+γ_R)+ħω]+in_0γ_C[għ^2k^2/m+iRħω][(ħω)^2+F(k)]=0.

Here, F(k)=4Ω^2-[n_0(g_12-g)+ϵ_k^0]^2+2Ω^2k^2/(g_12-g), and ω_S/D are defined in Eq. (<ref>). Importantly, Eq. (<ref>) shows that the relative-phase mode solutions are no longer decoupled from the other modes, and, therefore, a damping of the relative-phase mode is generically expected [see black curves in Fig. <ref>(b) and insets]. In addition, at k=0, Eq. (<ref>) admits the solutions ω=0, ω=-i(Rn_0+γ_R)/2±√(Rn_0γ_C-(Rn_0+γ_R)^2/4), and ω=±ω_S (evaluated at k=0). Indeed, the energy gap of the relative-phase modes stays the same, without modification from the reservoir, as confirmed by our numerical results in Fig. <ref>(a). Moreover, a diffusive mode exists as long as γ_C≤(Rn_0+γ_R)^2/(4Rn_0) holds, whereas an energy gap opens in the spectrum of the density excitation for γ_C>(Rn_0+γ_R)^2/(4Rn_0), in agreement with the results in Fig. <ref>(a) (see red curves).

§.§.§ Polarization of quasiparticles

We have shown in Sec. <ref> that, in the spin-unpolarized phase, the global- and relative-phase modes are copolarized and cross polarized with the condensate, respectively, just as in the equilibrium case. As we will show, the situation is significantly different in the spin-polarized phase. In analyzing the mode polarization, we are interested in the quantities |u_1k|^2/|u_2k|^2 and |v_1k|^2/|v_2k|^2, as motivated by Ref. <cit.>. There, it is proposed that the degree of polarization of the collective modes can be partially probed through the measurement of the polar polarization angle of the eigenvector, defined by cos(2θ)=(|u_1k|^2-|u_2k|^2)/(|u_1k|^2+|u_2k|^2), or, equivalently, |u_1k|^2/|u_2k|^2=[1+cos(2θ)]/[1-cos(2θ)]. In Figs. <ref>(c) and <ref>(d), we plot |u_1k|^2/|u_2k|^2 and |v_1k|^2/|v_2k|^2 as functions of the momentum for both the global- and relative-phase modes (only those with positive real energies are shown). We note, however, that a detailed discussion of the measurement of the polarization of the quasiparticles of the spinor polariton condensate is beyond the scope of this paper.

To see how the reservoir affects the polarization of the modes in general, we first consider a strongly circularly polarized condensate (s_z≈0.9) and compare the polarization of [u_1k, u_2k]^T of both modes in the regime of a fast reservoir [see Fig. <ref>(c1)] and of a slow reservoir [see Fig. <ref>(c3)]. In both regimes, the relative-phase mode (black curves) is seen to be circularly polarized at all momenta, with a circular polarization opposite to that of the condensate. By contrast, [u_1k, u_2k]^T of the global-phase mode (red curves) is elliptically polarized, and its polarization direction, in particular at small momenta, is strongly influenced by the reservoir: it exhibits a dramatic, even irregular, variation near k=0 in the fast-reservoir regime [see Fig. <ref>(c1)], as opposed to a more regular and smooth behavior in the slow-reservoir case [see Fig. <ref>(c3)]. This can be understood by noticing the different energy spectra in the two reservoir regimes: the global-phase mode exhibits a diffusive dispersion in the limit of a fast reservoir, whereas it is gapped for a slow reservoir.
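In practice, the polarization angle introduced above can be extracted by diagonalizing the 5×5 BdG matrix of Eq. (<ref>) numerically. A minimal sketch follows; the parameter values are our own illustrative assumptions, and the steady-state relations that would fix n_0 and n_R^0 are ignored:

import numpy as np

# Illustrative parameters (assumed; hbar = m = 1); spin-polarized regime g12 > g + 2*Om/n0
g, g12, n0, Om = 1.0, 1.5, 1.0, 0.1
g_R, nR0, R, gamma_R = 0.1, 0.5, 1.0, 0.1
D = np.sqrt(1.0 - (2.0*Om/((g - g12)*n0))**2)    # Delta

def L(k):
    e = 0.5*k**2
    h1 = e + g*n0*D + g12*n0*(1 - D)/2
    h2 = e - g*n0*D + g12*n0*(1 + D)/2
    A, B, C = g*n0*(1 + D)/2, g*n0*(1 - D)/2, g12*n0*(1 - D)/2
    O1, O2 = Om*np.sqrt((1 - D)/(1 + D)), Om*np.sqrt((1 + D)/(1 - D))
    rp, rm = g_R*nR0 + 0.5j*R*nR0, -g_R*nR0 + 0.5j*R*nR0
    r1, r2 = -0.5j*n0*(1 + D)*R, -0.5j*n0*(1 - D)*R
    return np.array([[ h1,       A,       C - O1,  C,       rp],
                     [-A,       -h1,     -C,      -C + O1,  rm],
                     [ C - O2,   C,       h2,      B,       rp],
                     [-C,       -C + O2, -B,      -h2,      rm],
                     [ r1,       r1,      r2,      r2,     -1j*(R*n0 + gamma_R)]])

w, V = np.linalg.eig(L(0.5))
u1, v1, u2, v2 = V[0], V[1], V[2], V[3]   # component rows across the five eigenvectors
cos2theta = (abs(u1)**2 - abs(u2)**2)/(abs(u1)**2 + abs(u2)**2)
print(np.round(w, 3)); print(np.round(cos2theta, 3))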
At large momenta, where the reservoir effect becomes unimportant, the global-phase mode in both plots exhibits similar polarization properties.

Similar features also appear for the global-phase modes of the condensate with s_z≈0.1, i.e., when only a small asymmetry exists between the spin-up and spin-down components, as illustrated in Figs. <ref>(c2) and <ref>(c4). In particular, [u_1k, u_2k]^T of the global-phase mode in the fast-reservoir regime [see red curve in Fig. <ref>(c2)] exhibits rich variations near k=0, in contrast to the flatter behavior in the slow regime [see Fig. <ref>(c4)]. However, [u_1k, u_2k]^T of the relative-phase mode is significantly affected by s_z. When the condensate is highly circular (s_z∼1), |u_1k|^2/|u_2k|^2≈0 at all momenta [see black curve in Fig. <ref>(c1)], i.e., [u_1k, u_2k]^T always has a strong circular polarization, which is opposite to that of the condensate. Instead, for the s_z≪1 counterpart [see black curve in Fig. <ref>(c2)], [u_1k, u_2k]^T is strongly circular at k=0 and becomes elliptical away from it, with some strong variation near k=0. In the limit of large momenta, |u_1k|^2/|u_2k|^2 saturates to a constant smaller than 1, which corresponds to an elliptical polarization whose circular component is opposite to that of the condensate.

Compared to [u_1k, u_2k]^T in Fig. <ref>(c), we see that the circular polarization of [v_1k, v_2k]^T of the relative-phase mode is always opposite to that of [u_1k, u_2k]^T at k=0 (i.e., the same as the condensate), but is the same as that of [u_1k, u_2k]^T at large momenta, as illustrated by the black curves in Figs. <ref>(d1)-<ref>(d4). For a strongly circularly polarized condensate [see Figs. <ref>(d1) and <ref>(d3)], [v_1k, v_2k]^T is highly circularly polarized at all momenta, flipping its circular polarization rapidly from left to right as k moves away from k=0. Such a flip is steeper when the condensate has s_z≪1 [see black curves in Figs. <ref>(d2) and <ref>(d4)]. In this case, [v_1k, v_2k]^T of the relative-phase mode is elliptically polarized at large momenta. For the global-phase mode (red curves), the corresponding [v_1k, v_2k]^T is always elliptically polarized, displaying a more uniform behavior in the slow-reservoir regime [see Figs. <ref>(d3) and <ref>(d4)] and a smaller ellipticity for small s_z [see Figs. <ref>(d2) and <ref>(d4)].

§ PHOTOLUMINESCENCE OF A SPINOR POLARITON CONDENSATE

In this section, we discuss how to experimentally probe the various Bogoliubov dispersions of a spinor polariton condensate presented in this work. Recently, significant experimental progress has been achieved in measuring the elementary excitations of a one-component polariton condensate, aiming particularly at observing two features: (1) the diffusive modes, which originate from the driven-dissipative character of the system <cit.>; (2) the negative-energy ghost branch (GB) of the Bogoliubov dispersion, which arises from the hole component of the excitations and is thus the mirror image of the normal positive-energy branch (NB). In the experiments reported in Refs. <cit.>, while the NB was directly observed in the photoluminescence (PL) of a nonresonant polariton condensate <cit.>, no fingerprints of the GB were spotted. Since the GB signal may easily be masked by the strong emission, whether it is possible to detect the GB in a non-resonantly pumped polariton BEC was subject to debate. On the other hand, the GB dispersion was detected in a resonantly pumped polariton condensate using the four-wave mixing technique, as reported in Ref. <cit.>.
Remarkably, the first successful observation of the PL signal reflecting the GB in a non-resonantly pumped polariton condensate was reported in Ref. <cit.>. Motivated by these experimental advances, below we study the PL spectra of a spinor polariton BEC under non-resonant pumping, in particular the visibility of the ghost branches of the dispersions of both the global- and relative-phase modes, extending the relevant work on one-component polariton BECs <cit.>.

At the heart of the measurement of the excitation spectrum with PL is the measurement of the two-time correlation function of the spinor polariton condensate. Denoting the PL spectrum by PL(k,ω), we exploit the approach of Refs. <cit.> and proceed as follows. Defining a matrix V^-1 in the form

V^-1=([ u_11 u_12 u_13 u_14 u_15; u_21 u_22 u_23 u_24 u_25; u_31 u_32 u_33 u_34 u_35; u_41 u_42 u_43 u_44 u_45; u_51 u_52 u_53 u_54 u_55 ]),

we diagonalize the Bogoliubov matrix [see Eq. (<ref>)] as V^-1ℒ_kV=diag(E_1,-E_1^*,E_2,-E_2^*,E_3). Here, the eigenvalues E_1, E_2, and E_3 correspond to the density mode, the spin-density mode, and the reservoir mode, respectively. The PL spectrum can then be derived as

PL(k,ω) ∝ |C_k|^2 Re{ in_1k(u_11u_22+u_31u_42)/(ħω-E_1) + i(n_1k+1)(u_12u_21+u_32u_41)/(ħω+E_1^*) + in_2k(u_13u_24+u_33u_44)/(ħω-E_2) + i(n_2k+1)(u_14u_23+u_34u_43)/(ħω+E_2^*) },

where C_k is the Hopfield coefficient of the photonic component of the polaritons, and n_1k (n_2k) is the thermal population of the quasiparticles associated with the density mode (spin mode). Different from the PL spectrum of a one-component polariton condensate, Eq. (<ref>) involves contributions from both the global- and relative-phase excitations: the first (last) two terms correspond to the global-phase (relative-phase) mode, with the negative branch contained in the second (fourth) term, respectively. In Fig. <ref>, we present the PL spectrum of a spinor polariton condensate for various parameters.

Let us first compare the PL spectra in the spin-unpolarized and the spin-polarized phases, considering, for example, the parameter regime γ_R≪γ_C and a high temperature (larger than the relevant energy gap). As illustrated in Figs. <ref>(a1) and <ref>(c1), both plots show the positive and negative branches of the dispersions, due to the large populations at high temperatures. Yet, a prominent feature of Fig. <ref>(c1) for the spin-polarized phase, as compared to Fig. <ref>(a1) for the spin-unpolarized phase, is the appearance of two separated sectors of dispersions, both in the positive and in the negative branches. This second dispersion sector, with a very narrow linewidth, corresponds to the spin mode, which is weakly damped in the spin-polarized phase. Instead, the dispersion of the relative-phase mode does not appear in the spin-unpolarized phase [see Fig. <ref>(a1)], as the relative-phase mode there has only real energy and is undamped [see Fig. <ref>(b)]. Thus, for the spin-unpolarized phase in the present setup, only the global-phase excitation sector of the dispersions is revealed by the PL spectrum [see Figs. <ref>(a) and <ref>(b) for various parameters]. As illustrated in Fig. <ref>(c2), at low temperatures only the negative-energy ghost branch of the relative-phase mode remains visible, due to the strongly suppressed thermal population of the gapped positive-energy relative-phase mode. In contrast, the relative-phase excitation sector cannot be distinguished in Fig. <ref>(c3), which corresponds to the fast-reservoir regime.
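Once ℒ_k has been diagonalized, the PL formula above is straightforward to evaluate. A minimal sketch, with purely hypothetical eigenvalues and Bogoliubov weights inserted for illustration:

import numpy as np

def bose(E, T):
    # thermal population of a mode with (real part of the) energy E
    return 1.0/np.expm1(max(E.real, 1e-9)/T)

def pl(omega, E1, E2, w1p, w1m, w2p, w2m, T, C2=1.0):
    """Four contributions of the PL formula; E1, E2 are the complex eigenvalues of the
    density and spin modes, and w1p = u11*u22 + u31*u42 etc. are the Bogoliubov weights."""
    n1, n2 = bose(E1, T), bose(E2, T)
    s = (1j*n1*w1p/(omega - E1) + 1j*(n1 + 1)*w1m/(omega + np.conj(E1))
         + 1j*n2*w2p/(omega - E2) + 1j*(n2 + 1)*w2m/(omega + np.conj(E2)))
    return C2*np.real(s)

# hypothetical numbers purely for illustration:
omega = np.linspace(-3.0, 3.0, 601)
print(pl(omega, E1=1.0 - 0.2j, E2=1.5 - 0.01j,
         w1p=1.2, w1m=-0.2, w2p=1.1, w2m=-0.1, T=2.0).max())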
This indistinguishability can be attributed to the fact that, for this particular parameter choice, the real part of the spectrum of the global-phase mode is very close to that of the relative-phase mode [see Fig. <ref>(a2)]. To conclude, the negative-energy ghost branch of the relative-phase mode can be clearly distinguished from that of the global-phase mode in the PL spectra of a spinor polariton condensate in the spin-polarized phase, where the relative-phase mode is weakly damped. Its resolution is optimal when the (real-part) energy of the relative-phase mode is sufficiently separated from that of the global-phase mode, such that it is still resolvable after taking into account the finite linewidth of the spectrum of the global-phase mode.

Turning back to the spin-unpolarized phase, as discussed above, here only the dispersions of the global-phase mode can be visualized in the PL spectrum. In general, the width of the spectrum exhibits a significant broadening for γ_R≪γ_C [see, e.g., Fig. <ref>(a1)], reflecting the strong damping of the global-phase mode [see also Fig. <ref>(b3)]. Instead, a much narrower spectrum is found in the fast-reservoir limit [see Fig. <ref>(a3)], where the global-phase mode decays only weakly. Notice that in Fig. <ref>(a3), despite the high temperature, no fingerprint of the negative-energy dispersion of the global-phase mode is observed. This is due to the fact that the corresponding Bogoliubov coefficients in Eq. (<ref>) nearly vanish for this specific parameter choice. Considering experimentally relevant parameters, we plot the corresponding PL spectrum in the spin-unpolarized phase in Figs. <ref>(b). Compared to Figs. <ref>(a), we see that the sign of g_12 does not qualitatively change the features of the PL spectrum in the spin-unpolarized phase.

§ CONCLUSION

Summarizing, we have theoretically studied the steady-state phases and the elementary excitations of a spinor polariton condensate created by non-resonant excitation, assuming that g_12/g can be widely tuned and that the spin relaxation in the reservoir is fast. In the regime g_12<g+2Ω/n_0, the polariton condensate is in the spin-unpolarized phase, exhibiting a linear polarization whose direction is pinned by the Ω term. In the regime g_12>g+2Ω/n_0, the condensate is in the spin-polarized phase, exhibiting an elliptical polarization. The transition occurs at the critical interaction, where the spin-density response function diverges and the energy gap of the relative-phase mode closes. We have compared the behavior of the elementary excitations in the two phases, taking into account reservoir effects in the crossover from γ_R≪γ_C to the limit γ_R≫γ_C.

We have shown that the gapped relative-phase modes are long lived in both phases and are robust against reservoir effects. The energy spectrum of the global-phase modes, by contrast, is strongly tailored by the reservoir, being gapped in the slow-reservoir limit but diffusive in the opposite, fast-reservoir regime. In the spin-unpolarized phase, the mode polarization is always linear, with one mode copolarized with the condensate and the other cross polarized with it. However, in the spin-polarized phase, the reservoir has a particularly strong impact on the polarization of the global-phase mode at small momenta. While the polarization of the relative-phase mode is only weakly influenced by the reservoir, it is sensitive to the circular polarization degree of the condensate. In addition, we have demonstrated that the energy dispersions presented in this work can be directly observed in the PL emission.
In particular, we have shown that the negative-energy ghost branch of the dispersion of the relative-phase mode can be clearly visualized in the spin-polarized phase, exhibiting a very narrow linewidth and being distinguishable from that of the global-phase mode. That the relative-phase mode is undamped in the spin-unpolarized phase, leading to its absence in the corresponding PL spectrum, may be related to the fact that in our model the reservoir excitons are assumed to be coupled to the total density, rather than to the spin density, of the spinor polariton condensate [see Eq. (<ref>)]. Hence, an interesting subject for future investigation is the case of asymmetric couplings to the reservoir polaritons or of asymmetric decay rates of the condensate polaritons.

We thank Y. Xue, C. Gao, and B. Wu for stimulating discussions. This work is supported by the NSFC of China (Grants No. 11274315 and No. 11374125) and by the Youth Innovation Promotion Association CAS (Grant No. 2013125). Y. H. acknowledges support from the Changjiang Scholars and Innovative Research Team in University program of the Ministry of Education of China, PCSIRT (Grant No. IRT13076), and the NSFC of China (Grant No. 11434007). Z. D. Z. is supported by the NSFC of China (Grant No. 51331006).

§ SYMMETRY ANALYSIS OF BOGOLIUBOV MATRIX (<REF>)

In this appendix, we analyze the symmetry properties of the Bogoliubov matrix ℒ_k in Eq. (<ref>) in the linearly polarized case. We notice that ℒ_k is invariant under the following two transformations:

U_1 = ([ 0 0 1 0 0; 0 0 0 1 0; 1 0 0 0 0; 0 1 0 0 0; 0 0 0 0 1 ]),
U_2 = ([ 0 0 1 0 0; 0 0 0 -1 0; 1 0 0 0 0; 0 -1 0 0 0; 0 0 0 0 1 ]).

The consequence of this symmetry property is that, if V=([ u_1k v_1k u_2k v_2k w_k ])^T is an eigenvector of the Bogoliubov matrix (<ref>), the action of U_1 (U_2) on V realizes the simultaneous exchange u_1k↔ u_2k and v_1k↔ v_2k (u_1k↔ -u_2k and v_1k↔ -v_2k), such that U_1V (U_2V) is also an eigenvector of ℒ_k with the same eigenvalue.
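The stated action of U_1 and U_2 is easily verified numerically; a trivial check with a hypothetical test vector standing in for (u_1k, v_1k, u_2k, v_2k, w_k):

import numpy as np

U1 = np.array([[0,0,1,0,0],[0,0,0,1,0],[1,0,0,0,0],[0,1,0,0,0],[0,0,0,0,1]])
U2 = np.array([[0,0,1,0,0],[0,0,0,-1,0],[1,0,0,0,0],[0,-1,0,0,0],[0,0,0,0,1]])

# both transformations are involutions
assert np.array_equal(U1 @ U1, np.eye(5, dtype=int))
assert np.array_equal(U2 @ U2, np.eye(5, dtype=int))

V = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # (u1, v1, u2, v2, w)
print(U1 @ V)   # [3. 4. 1. 2. 5.]  -> u1 <-> u2, v1 <-> v2
print(U2 @ V)   # [3. -4. 1. -2. 5.] -> the same exchange with the extra sign flips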
http://arxiv.org/abs/1704.08439v2
{ "authors": [ "Xingran Xu", "Ying Hu", "Zhidong Zhang", "Zhaoxin Liang" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170427054643", "title": "Spinor polariton condensates under nonresonant pumping: Steady states and elementary excitations" }
§ INTRODUCTION

Nonlinear field models are of great interest in various areas of modern physics. Such models can have topological defects (or topological solitons) — solutions that are homotopically distinct from the vacuum <cit.>. Such defects (domain walls, strings, vortices, kinks) arise, for example, in high energy physics, cosmology, condensed matter, and so on. We also note the impressive progress in various scenarios with embedded topological defects, e.g., a Q-lump on a domain wall, or a skyrmion on a domain wall <cit.>.

Special attention is paid in this context to (1+1)-dimensional models. Many realistic models in (3+1) and (2+1) dimensions can be reduced to an effective (1+1)-dimensional dynamics. For example, in the direction orthogonal to it, a two- or three-dimensional domain wall can be viewed as a topological soliton (kink) interpolating between two different vacua of the model, which are separated by the wall in the two- or three-dimensional world. Besides that, (1+1)-dimensional models can be used as a simplified setup for studying general properties of nonlinear field models <cit.>.

Topological solitons (kinks) in (1+1)-dimensional field models have been actively studied recently <cit.>. In particular, kink-(anti)kink scattering and the interactions of kinks with impurities are of growing interest. A wide variety of phenomena emerges in these systems, e.g., escape windows and quasi-resonances in kink-(anti)kink collisions <cit.>, resonant interactions of kinks with wells, barriers, and impurities <cit.>, and non-radiative energy exchange in multi-soliton collisions <cit.>. Interestingly, the presence of a kink's internal mode does not guarantee the appearance of resonance windows, as has recently been shown for the deformed ϕ^4 model <cit.>.

The interactions of kinks (and antikinks) are studied using different methods, in particular, quasi-exact methods such as the numerical solution of the equations of motion, which are partial differential equations, and approximate methods such as the collective coordinate approximation <cit.> and Manton's method <cit.>. The simple collective coordinate approximation describes the dynamics of the kink-(anti)kink system in terms of the time dependence of the distance between the kinks, while Manton's method allows one to estimate the interaction between a kink and an (anti)kink at large distances by using the kinks' asymptotics.

The dynamical properties of kink-antikink collisions have been extensively investigated in integrable and non-integrable models. It was shown that the energy loss due to radiation during the collision is small in integrable models. In contrast to that, in non-integrable models the radiation effects become important. The amount of radiation is a complicated function of the initial velocity, and depending on it the result of a kink-antikink collision can be very different: the solitons can form an oscillating bound state, or they can bounce back and reflect from each other. The collision process in non-integrable models is chaotic, in the sense that at some values of the initial velocities the kinks scatter off each other, while at other initial velocities they can annihilate.
This behavior is a consequence of resonances between the oscillations of the kink-antikink pair and the excitations of the kinks' vibrational modes. There is no exhaustive theoretical model describing all the features of collisions of solitons in non-integrable models, and the best way to investigate the properties of such systems is numerical simulation.

In our previous publications <cit.> we have studied multi-kink collisions in the sine-Gordon and ϕ^4 models. We have shown that the maximal values of the total energy density can be achieved if all N kinks/antikinks collide at the same point. This happens when the kinks and antikinks approach the collision point in an alternating order (i.e., no two adjacent solitons are of the same type). When arranged in this way, the solitons attract each other and their cores can merge, producing high-energy-density spots. The effect of the kink's internal mode on the maximal total energy density was studied for the ϕ^4 model <cit.>. It has been shown that the kink's internal mode can increase or decrease the value of the energy density that can be produced at the collision point.

In this paper we study the collisions of N≤4 kinks of the ϕ^6 model numerically. The (1+1)-dimensional ϕ^6 model is well known in the literature <cit.>, but the study of simultaneous multi-kink collisions in this model is carried out here for the first time. A simultaneous collision of several kinks in a small region can produce a very large energy density in this region. Such regions could be of great interest in studying various physical systems described by (1+1)-dimensional field-theoretical models.

Our paper is organized as follows. In section <ref> we briefly describe the (1+1)-dimensional ϕ^6 model and its topologically non-trivial solutions — kinks and antikinks. In section <ref> we describe our method and present the results of the numerical study of the collisions of N=2 (subsection <ref>), N=3 (subsection <ref>), and N=4 (subsection <ref>) kinks at the same point. In section <ref> we give the conclusion and an outlook.

§ THE MODEL

The ϕ^6 model in (1+1)-dimensional space-time is described by the Lagrangian density

ℒ = 1/2(∂ϕ/∂ t)^2 - 1/2(∂ϕ/∂ x)^2 - V(ϕ),

where ϕ(x,t) is a real scalar field. The potential V(ϕ), which defines the self-interaction of the field, has the form

V(ϕ)=1/2ϕ^2(1-ϕ^2)^2.

The energy functional corresponding to the Lagrangian (<ref>) is

E[ϕ]=∫_-∞^+∞[1/2(∂ϕ/∂ t)^2 + 1/2(∂ϕ/∂ x)^2 + V(ϕ)]dx.

The Lagrangian (<ref>) yields the equation of motion for the field ϕ(x,t):

∂^2ϕ/∂ t^2 - ∂^2ϕ/∂ x^2 + dV/dϕ=0.

The potential (<ref>) is a non-negative function, and it has three degenerate minima (vacua of the model): ϕ̅_1=-1, ϕ̅_2=0, and ϕ̅_3=1, with V(ϕ̅_1)=V(ϕ̅_2)=V(ϕ̅_3)=0, see figure <ref>. Therefore the model has topological soliton solutions (kinks) — static field configurations ϕ_K(x) interpolating between neighboring vacua.

We use the following notation: a kink ϕ_K(x) is said to belong to the topological sector (ϕ̅_i,ϕ̅_j) if lim_x→-∞ϕ_K(x)=ϕ̅_i and lim_x→+∞ϕ_K(x)=ϕ̅_j. We also denote this kink by ϕ_(ϕ̅_i,ϕ̅_j)(x) instead of ϕ_K(x).

The kinks of the ϕ^6 model can easily be found analytically by solving eq. (<ref>). In the static case ∂ϕ/∂ t=0, and we obtain

d^2ϕ/dx^2=dV/dϕ.

This equation can be reduced to the first-order ordinary differential equation

dϕ/dx=±√(2V(ϕ)).
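On the branch 0<ϕ<1 the first-order equation becomes dϕ/dx=ϕ(1-ϕ^2), and it can be integrated numerically and checked against the closed-form kinks written out below. A minimal sketch (our own illustration, not part of the original computations):

import numpy as np
from scipy.integrate import solve_ivp

def bps_rhs(x, phi):
    # dphi/dx = sqrt(2 V(phi)) = phi*(1 - phi^2) on the branch 0 < phi < 1
    return phi*(1.0 - phi**2)

xs = np.linspace(0.0, 8.0, 161)
sol = solve_ivp(bps_rhs, (0.0, 8.0), [np.sqrt(0.5)],   # phi(0) = 1/sqrt(2)
                t_eval=xs, rtol=1e-10, atol=1e-12)
phi_exact = np.sqrt((1.0 + np.tanh(xs))/2.0)
print(np.abs(sol.y[0] - phi_exact).max())              # ~1e-9: matches the (0,1) kink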
Because the potential (<ref>) has three minima, there are two kinks and two antikinks in the model, see figure <ref>. The kinks belong to the topological sectors (-1,0) and (0,1), while the antikinks belong to the sectors (0,-1) and (1,0). Below, in some cases we use the term “kink” instead of the term “antikink” for brevity.

For future convenience, we write out here all static kinks and antikinks of the ϕ^6 model:

ϕ_(0,1)(x) = √((1+tanh x)/2),
ϕ_(1,0)(x) = √((1-tanh x)/2),
ϕ_(-1,0)(x) = -√((1-tanh x)/2),
ϕ_(0,-1)(x) = -√((1+tanh x)/2).

Notice that the kinks of the ϕ^6 model are asymmetric with respect to spatial reflection. Consider, e.g., the kink ϕ_(0,1)(x). At |x|≫1 we have the following asymptotics:

ϕ_(0,1)(x) ∼ e^x,  x→-∞,
ϕ_(0,1)(x) ∼ 1-(1/2)e^-2x,  x→+∞.

The mass of each (anti)kink is M_K=1/4, as can be obtained by substituting eqs. (<ref>) or (<ref>) into the energy functional (<ref>). A moving kink or antikink can be obtained from eqs. (<ref>) and (<ref>) by a Lorentz boost. Assume that the kinks (<ref>), (<ref>) are moving along the x-axis with the velocity v. Then, e.g., for the moving kink ϕ_(0,1) we have

ϕ_(0,1)(x,t) = ϕ_(0,1)(γ(x-vt)),

where γ = 1/√(1-v^2) is the Lorentz factor.

The total energy (<ref>) can be split into three parts: the kinetic energy K, the gradient energy U, and the potential energy P,

E = K + U + P.

To do that, the integrand in (<ref>), i.e. the total energy density ε(x,t), is written as

ε(x,t) = k(x,t) + u(x,t) + p(x,t),

where

k(x,t) = 1/2(∂ϕ/∂ t)^2,  u(x,t) = 1/2(∂ϕ/∂ x)^2,  p(x,t) = 1/2ϕ^2(1-ϕ^2)^2

are the kinetic energy density, the gradient energy density, and the potential energy density, respectively. The gradient energy density, in turn, can be expressed as

u(x,t) = 1/2 e^2(x,t),

where e(x,t)=∂ϕ/∂ x is the field gradient, which can be positive or negative, corresponding to “stretching” or “compression”.

For example, in the case of one moving kink ϕ_(0,1)(x,t) we have

p(x,t) = (1/16)(1-tanh[γ(x-vt)])/cosh^2[γ(x-vt)] = [(1-v^2)/v^2] k(x,t) = (1-v^2) u(x,t).

The total energy density is

ε(x,t) = (1/8)[1/(1-v^2)](1-tanh[γ(x-vt)])/cosh^2[γ(x-vt)].

Integrating this expression with respect to x over the interval from -∞ to +∞, we obtain the total energy of the moving kink:

E_K = ∫_-∞^+∞ε(x,t) dx = M_K/√(1-v^2),

where M_K=1/4 is the mass of the kink, i.e. the energy of the static ϕ^6 kink.

Collisions of the kinks of the (1+1)-dimensional ϕ^6 model can be investigated numerically. In the next section, we present the results of our numerical simulations of collisions of two, three, and four kinks at the same point. Our goal is to find the maximal (over the spatial coordinate x and the temporal coordinate t) values of the energy densities: kinetic, gradient, potential, and total. We also find the extreme values of the field gradient for each collision. Note that for one moving kink these extreme values can be obtained analytically:

p_max^(1) = 2/27,  k_max^(1) = (2/27) v^2/(1-v^2),  u_max^(1) = (2/27) 1/(1-v^2),  ε_max^(1) = (4/27) 1/(1-v^2),

and

e_max^(1) = [2/(3√3)] 1/√(1-v^2).

§ NUMERICAL RESULTS

We study the collisions of several ϕ^6 kinks and antikinks at the same point (in a small region, to be more precise). Because of the absence of analytic multisoliton solutions of the ϕ^6 model, we use initial conditions in the form of a superposition of several kinks and antikinks moving towards the collision point. For every event we adjust the initial positions and initial velocities of the kinks to ensure their collision at one point.

The initial distances between the kinks in our simulations are quite large.
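These closed-form results are easy to verify numerically. A minimal sketch for the boosted (0,1) kink (the grid resolution is an arbitrary choice of ours):

import numpy as np

v = 0.1
gam = 1.0/np.sqrt(1.0 - v**2)
xi = np.linspace(-15.0, 15.0, 600001)            # comoving coordinate gamma*(x - v*t)
p = (1.0 - np.tanh(xi))/np.cosh(xi)**2/16.0      # potential energy density
u = gam**2*p                                      # gradient energy density
k = v**2*u                                        # kinetic energy density
e = gam*np.sqrt(2.0*p)                            # field gradient of the (0,1) kink

print(p.max(), k.max(), u.max(), (k + u + p).max())   # 2/27, (2/27)v^2/(1-v^2), ...
print(e.max())                                        # [2/(3*sqrt(3))]/sqrt(1-v^2) ~ 0.3868
print(np.trapz(k + u + p, xi/gam))                    # E_K = M_K/sqrt(1-v^2) ~ 0.2513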
Therefore the overlap of the kinks is exponentially small, i.e. the initial configurations are solutions of eq. (<ref>) with exponential accuracy.

For the numerical study of the evolution of the initial configurations we use the discretized version of the equation of motion (<ref>):

d^2ϕ_n/dt^2 - (1/h^2)(ϕ_n-1 - 2ϕ_n + ϕ_n+1) + [1/(12h^2)](ϕ_n-2 - 4ϕ_n-1 + 6ϕ_n - 4ϕ_n+1 + ϕ_n+2) + ϕ_n(1-ϕ_n^2)(1-3ϕ_n^2) = 0,

where h is the lattice spacing, n=0,±1,±2,..., and ϕ_n(t)=ϕ(nh,t). In order to minimize the finite-spacing artifacts, the term ϕ_xx in eq. (<ref>) is discretized with the accuracy O(h^4) <cit.>. The equations of motion (<ref>) are integrated with respect to time using an explicit scheme with the time step τ (the value used in the calculations is τ=0.005) and the accuracy O(τ^4). We use the Störmer method for the integration of eq. (<ref>). In order to be sure that the maximal values of the energy density and the field gradient converge, we perform numerical simulations with different lattice spacings: h=0.1 and h=0.05.

In the numerical simulations presented in this section we use 5000 points for the spatial grid, corresponding to the x range from -250 to 250 for h=0.1 and from -125 to 125 for h=0.05. Hence the spatial boundaries are far enough away and cannot affect the numerical results. In addition, we also use absorbing boundary conditions in order to prevent the small-amplitude radiation from being reflected from the boundaries. Below we report the maximal energy densities and the extreme values of the field gradient in the collisions of N slow-moving kinks and antikinks for 1<N≤4 [for the case N=1 these values can be found analytically, see eqs. (<ref>) and (<ref>)].

Notice that in some figures we cut off high peaks in order to better show the whole space-time picture of the collision.

§.§ Collision of two kinks

§.§.§ The configuration (0,1,0)

Now we study the collision of the kink (0,1) and the antikink (1,0), the initial configuration denoted as (0,1,0), or KK̅. Accordingly, the initial condition is taken as the kink ϕ_(0,1)(x-x_1,t) and the antikink ϕ_(1,0)(x-x_2,t), placed at x_1=-10 and x_2=10. The initial velocities are v_1=0.1 and v_2=-0.1. As already mentioned, there is no exact two-soliton solution in the ϕ^6 model, and we use the following initial configuration:

ϕ_(0,1,0)(x,t) = ϕ_(0,1)(x-x_1,t) + ϕ_(1,0)(x-x_2,t) - 1.

At |x_1-x_2|≫1, this is a solution of the equation of motion up to the exponentially small overlap of the kink and the antikink.

In the collisions of the kinks (0,1) and (1,0), there is a critical value of the initial velocity, v_cr≈0.289, that separates two different regimes of the collision process. If the initial velocity is less than v_cr, the kink and antikink become trapped after the collision, forming a bound state (a bion). At initial velocities larger than v_cr the kink and the antikink escape to infinity after the collision. For further details see, e.g., refs. <cit.> and references therein.

In our numerical simulation the initial velocity is v_1=-v_2=0.1, which is less than v_cr. This means that a bion should be formed after the collision. The bion stays near the collision point and emits energy in the form of small-amplitude waves. We thus observe the “reaction” KK̅→ b, where b stands for the bion.

The numerical results for the configuration (0,1,0) are presented in figure <ref>. In figure <ref> we show the dependence ϕ(x,t), which demonstrates the main features of the collision process.
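Before turning to the figures, we note that the scheme just described is straightforward to prototype. Below is a minimal Python sketch of the Störmer update with the O(h^4) Laplacian for the KK̅ configuration; simple fixed boundaries are used here instead of the absorbing ones, and all numerical details beyond the equations above are our own assumptions:

import numpy as np

h, tau, v0 = 0.05, 0.005, 0.1
x = np.arange(-125.0, 125.0, h)
gam = 1.0/np.sqrt(1.0 - v0**2)

def kink01(s):                      # phi_(0,1)
    return np.sqrt((1.0 + np.tanh(s))/2.0)

def config(t):                      # KKbar ansatz: phi_(0,1) + phi_(1,0) - 1
    return (kink01(gam*(x + 10.0 - v0*t))
            + kink01(-gam*(x - 10.0 + v0*t)) - 1.0)

def accel(f):                       # O(h^4) Laplacian minus dV/dphi
    a = np.zeros_like(f)
    a[2:-2] = (-f[:-4] + 16*f[1:-3] - 30*f[2:-2] + 16*f[3:-1] - f[4:])/(12*h**2)
    return a - f*(1 - f**2)*(1 - 3*f**2)

phi_old, phi = config(0.0), config(tau)   # two exact time levels start the scheme
eps_max = 0.0
for _ in range(40000):                    # evolve to t = 200
    phi_new = 2*phi - phi_old + tau**2*accel(phi)
    k = 0.5*((phi_new - phi_old)/(2*tau))**2
    u = 0.5*np.gradient(phi, h)**2
    p = 0.5*phi**2*(1 - phi**2)**2
    eps_max = max(eps_max, (k + u + p).max())
    phi_old, phi = phi, phi_new
print(eps_max)    # to be compared with the maximal total energy density quoted below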
Figure <ref> demonstrates the space-time dependence of the total energy density ε(x,t). From this figure we see that the two peaks of the energy density move towards each other, collide, and form a bound state, which slowly decays, emitting small waves. In figures <ref>–<ref> we give the space-time pictures of the kinetic, potential, and gradient energy densities. From figure <ref> we see that the kinetic energy density of the moving kinks before the collision is small compared with the amplitudes of the subsequent kinetic-energy oscillations in the bound state. The gradient energy density behaves differently, see figure <ref>: it decreases noticeably after the formation of the bound state. In figure <ref> we show the potential energy density. It can be seen that it also oscillates, but its amplitude falls off more slowly than in the previous case.

The space-time picture of the field gradient is shown in figure <ref>. In terms of the elastic strain, the kink and the antikink resemble a wave of compression and a wave of decompression travelling towards each other. A localized oscillating structure is formed after the collision, see figure <ref>.

From the numerical analysis we obtain the following maximal values of the energy densities:

k_max^(2)≈0.25,  u_max^(2)≈0.075,  p_max^(2)≈0.075,  ε_max^(2)≈0.25.

For the field gradient we found

e_min^(2)≈-0.4,  e_max^(2)≈0.4.

We have also performed a numerical simulation of the collision of the kinks (0,-1) and (-1,0) with the same initial velocities and initial positions, observing the same maximal values of the energy densities and the field gradient.

§.§.§ The configuration (1,0,1)

In this case, we use the initial configuration

ϕ_(1,0,1)(x,t) = ϕ_(1,0)(x-x_1,t) + ϕ_(0,1)(x-x_2,t)

in order to study the collision of the kinks (1,0) and (0,1). We use the same values of x_1, x_2, v_1, and v_2 as in the previous subsection. The critical velocity in this case is smaller than in the previous configuration, namely, v_cr≈0.045 <cit.>. The initial velocity of the kinks in our numerical experiment is larger than the critical value. This means that the kink and the antikink collide and escape from each other after the collision, i.e. we observe the “reaction” K̅K→K̅K.

In figure <ref> we give the results of our numerical simulation. Figure <ref> shows the field profile before, during, and after the collision, illustrating the approach of the solitons, their interaction during the collision, and their escape. Moreover, figure <ref> shows that the kink and the antikink attract each other at small distances.

From the numerical analysis we obtain the following maximal values of the energy densities:

k_max^(2)≈0.37,  u_max^(2)≈0.07,  p_max^(2)≈0.34,  ε_max^(2)≈0.37.

For the field gradient we have

e_min^(2)≈-0.4,  e_max^(2)≈0.4.

We see that the maximal values of the energy densities in the case of the configuration (1,0,1) differ from those found in the case of the configuration (0,1,0), while the extreme values of the field gradient are the same. The latter fact is explained by noticing that the maximal absolute values of the field gradient are observed in freely moving, non-interacting kinks. They can be calculated analytically from eq. (<ref>), which gives the value e_max^(1)≈0.3868 for a single kink moving with the velocity v=0.1. We have also carried out a numerical simulation of the kink-antikink collision for the configuration (-1,0,-1), and obtained the same results as for (1,0,1).
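The additive ansatz used above generalizes directly to the three- and four-soliton configurations studied in the next subsections: one sums the individual (boosted) kinks and subtracts the values of the intermediate vacua. A small helper of this kind (a convenience of ours, not taken from the original code) might look as follows:

import numpy as np

def kink(x, left, right, x0=0.0, v=0.0, t=0.0):
    """phi^6 (anti)kink of the sector (left, right), Lorentz boosted."""
    s = (x - x0 - v*t)/np.sqrt(1.0 - v**2)
    if (left, right) == (0, 1):  return  np.sqrt((1 + np.tanh(s))/2)
    if (left, right) == (1, 0):  return  np.sqrt((1 - np.tanh(s))/2)
    if (left, right) == (-1, 0): return -np.sqrt((1 - np.tanh(s))/2)
    if (left, right) == (0, -1): return -np.sqrt((1 + np.tanh(s))/2)
    raise ValueError("not a phi^6 topological sector")

def chain(x, vacua, x0s, vs, t=0.0):
    """Superposition ansatz for a kink chain visiting the listed vacua."""
    phi = sum(kink(x, a, b, x0, v, t)
              for a, b, x0, v in zip(vacua, vacua[1:], x0s, vs))
    return phi - sum(vacua[1:-1])       # subtract the intermediate vacua

x = np.arange(-125.0, 125.0, 0.05)
phi_0101 = chain(x, [0, 1, 0, 1], [-10.0, -2.05954, 10.0], [0.1, 0.0, -0.1])

For (1,0,1) the subtracted constant vanishes (the single intermediate vacuum is 0), while for (0,1,0,1), (1,0,1,0,1), and (0,1,0,1,0) it equals 1, 1, and 2, respectively, in agreement with the initial configurations written out below.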
§.§ Collision of three kinks

§.§.§ The configuration (0,1,0,1)

Our next step is to study collisions of three kinks at the same point. We start with the configuration of the type (0,1,0,1):

ϕ_(0,1,0,1)(x,t) = ϕ_(0,1)(x-x_1,t) + ϕ_(1,0)(x-x_2,t) + ϕ_(0,1)(x-x_3,t) - 1.

This configuration is composed of two kinks (0,1) and one antikink (1,0), which is placed between the kinks. We set the antikink to be static, v_2=0, while the kinks are moving towards it from the left and from the right with the velocities v_1=0.1 and v_3=-0.1, respectively. The initial positions of the kinks are x_1=-10 and x_3=10. It turns out that, in order to ensure the simultaneous arrival of the two kinks at the location of the antikink, the latter has to be slightly shifted towards the left kink because of the asymmetry of the ϕ^6 kinks. We found that the antikink must be placed at x_2=-2.05954 in order to obtain the maximal energy densities during the collision.

In figure <ref> we give the results of our numerical simulation of the kink-antikink-kink collision in the sector (0,1,0,1). The kink (0,1) that started at x=x_1 and the antikink (1,0), which are initially parts of a configuration of the type (0,1,0), annihilate. They form an oscillating lump, which moves with a high velocity away from the collision point and quickly decays. At the same time, the other kink (0,1), which originally started from the point x_3, survives and moves backwards after the collision. So we observe the “reaction” KK̅K→ bK.

It is interesting to notice that the velocities of the lump and of the kink after the collision substantially exceed the initial velocities of the colliding kinks. Apparently, this is a consequence of the redistribution of energy: when the kink and the antikink annihilate, a part of their energy is transferred to the kinetic energy of the surviving kink, increasing its speed.

We obtain the following extreme values of the energy densities and of the field gradient:

k_max^(3)≈0.27,  u_max^(3)≈0.67,  p_max^(3)≈0.075,  ε_max^(3)≈0.75,

and

e_min^(3)≈-0.58,  e_max^(3)≈1.15.

We have also carried out a numerical simulation of the antikink-kink-antikink collision for the configuration (0,-1,0,-1) and obtained the same results as for (0,1,0,1).

§.§.§ The configuration (1,0,1,0)

As in the previous subsection, we consider here a collision of three kinks, but with the initial configuration of the type (1,0,1,0), which corresponds to two antikinks (1,0) and one kink (0,1) between them, i.e. K̅KK̅. For the numerical simulation we use the following initial condition:

ϕ_(1,0,1,0)(x,t) = ϕ_(1,0)(x-x_1,t) + ϕ_(0,1)(x-x_2,t) + ϕ_(1,0)(x-x_3,t) - 1.

The antikinks are initially placed at x_1=-10 and x_3=10 and are moving towards each other with the velocities v_1=0.1 and v_3=-0.1, while the static kink is initially located at x_2=2.06000, see figure <ref>. As in the previous case, the position x_2 of the central kink ensures that both antikinks arrive at the location of the kink at the same time.

The results of the numerical simulation are shown in figure <ref>. The collision pattern is quite similar to that of the configuration (0,1,0,1). We observe the annihilation of the kink and of one of the antikinks, which are initially parts of a configuration of the type (0,1,0). After the annihilation they form an oscillating lump, which escapes from the collision point with a near-light speed. The other antikink (1,0), which survives the collision, also escapes with a near-light speed — its final velocity substantially exceeds the initial one.
The observed antikink-kink-antikink collision is the “reaction” K̅KK̅→K̅b.

From the numerical analysis we extract the following extreme values:

k_max^(3)≈0.27,  u_max^(3)≈0.67,  p_max^(3)≈0.075,  ε_max^(3)≈0.75,

and

e_min^(3)≈-1.15,  e_max^(3)≈0.57.

From these results we see that the maximal values of the energy densities are the same as for the configuration (0,1,0,1), while the extreme values of the field gradient are different.

We have also carried out a numerical simulation of the kink-antikink-kink collision for the configuration (-1,0,-1,0) and obtained the same results as for (1,0,1,0).

§.§ Collision of four kinks

§.§.§ The configuration (0,1,0,1,0)

We move on to consider collisions of four kinks and antikinks. We start with the following initial configuration:

ϕ_(0,1,0,1,0)(x,t) = ϕ_(0,1)(x-x_1,t) + ϕ_(1,0)(x-x_2,t) + ϕ_(0,1)(x-x_3,t) + ϕ_(1,0)(x-x_4,t) - 2,

where x_1=-x_4=-10.17604, v_1=-v_4=0.1, x_2=-x_3=-5.0, and v_2=-v_3=0.05. With these initial conditions the collision of all four solitons occurs at the same point, see figure <ref>.

After the collision we observe the formation of a bion at the origin (the collision point), together with an antikink (0,-1) and a kink (-1,0), which move with constant velocities in opposite directions away from the collision point. So we have a process of the type KK̅KK̅→K̅bK. Of course, some energy in the form of small-amplitude waves is emitted during the collision. The observed reaction can be interpreted as follows: one kink-antikink pair forms a bound state (bion), while the other pair scatters, and in the final state we observe a configuration of the type (0,-1,0) with the bion at its center, see figure <ref>.

We found the following extreme values of the energy densities and of the field gradient for this collision of four kinks:

k_max^(4)≈0.95,  u_max^(4)≈0.3,  p_max^(4)≈0.075,  ε_max^(4)≈0.95,

and

e_min^(4)≈-0.77,  e_max^(4)≈0.77.

§.§.§ The configuration (1,0,1,0,1)

Finally, we consider the initial configuration of the type (1,0,1,0,1), i.e., the K̅KK̅K system:

ϕ_(1,0,1,0,1)(x,t) = ϕ_(1,0)(x-x_1,t) + ϕ_(0,1)(x-x_2,t) + ϕ_(1,0)(x-x_3,t) + ϕ_(0,1)(x-x_4,t) - 1,

where x_1=-x_4=-17.1375468, v_1=-v_4=0.1, x_2=-x_3=-5.0, and v_2=-v_3=0.05. As in the previous cases, we use specially chosen initial positions and initial velocities in order to force all the solitons to collide at the same point.

The results of the numerical simulation are shown in figure <ref>. In particular, from figures <ref> and <ref> it is clear that two KK̅ bound states (bions) are formed after the collision, so we have a process of the type K̅KK̅K→ bb. The bions escape from the collision point with velocities that substantially exceed the initial velocities of the colliding kinks. Notice that the situation is different from the one observed for the initial configuration (0,1,0,1,0), described in the previous subsection.

For the extreme values of the energy densities and of the field gradient we find:

k_max^(4)≈1.47,  u_max^(4)≈0.42,  p_max^(4)≈0.78,  ε_max^(4)≈1.47,

and

e_min^(4)≈-0.91,  e_max^(4)≈0.91.

§ CONCLUSION

We have studied the process of collision of several ϕ^6 kinks and antikinks at the same point. We used initial configurations of the following types: KK̅, K̅K, KK̅K, K̅KK̅, KK̅KK̅, and K̅KK̅K. In all these cases the initial positions and initial velocities were fitted so as to achieve the simultaneous collision of all solitons at the same point. For each initial configuration we restricted ourselves to only one set of initial data.
The results are collected in tables <ref> and <ref>. In table <ref> we give the initial velocities of the kinks and antikinks together with the final velocities of the quasiparticles for all the collisions discussed in this work. Depending on the number of kinks and on their ordering in the initial configuration, we observed reflection, passing through each other, and capture. Apparently, a particular final-state configuration is a consequence of the rather complicated picture of the pairwise kink-antikink interactions: two solitons can form a bound state or reflect off each other, depending on their initial velocities. The critical velocities that separate these two regimes also depend on the type of the initial configuration, KK̅ or K̅K, because the kinks of the ϕ^6 model are not symmetric: they have different spatial asymptotics depending on the vacuum to which the field ϕ tends, see eqs. (<ref>), (<ref>).

In the case of the kink-antikink collisions (N=2) we reproduced the well-known scenarios. In the KK̅ collision [the initial configuration of the type (0,1,0)] at v_in=0.1 we observed the capture of the kink and the antikink and the formation of a bion — a kink-antikink bound state. This happens because v_in<v_cr≈0.289. At the same time, in the K̅K collision [the initial configuration of the type (1,0,1)] at v_in=0.1 we observed the two solitons escaping after the collision. Such behavior is a consequence of the fact that now v_in>v_cr≈0.045.

Next, we studied collisions of three solitons, namely of two kinks and one antikink [the initial configuration of the type (0,1,0,1), or KK̅K] and of two antikinks and one kink [the initial configuration of the type (1,0,1,0), or K̅KK̅]. The observed final states in these cases seem to be due to the kink-antikink capture at low energies, KK̅→ b, see table <ref>.

In the case N=4 the situation is more complicated. On the one hand, in the process KK̅KK̅→K̅bK we observed two features: a) the formation of a bion, and b) a kink and an antikink passing through each other and escaping to infinity. On the other hand, in the process K̅KK̅K→ bb we have two bions in the final state, moving with high velocities in opposite directions away from the collision point. Thus, in the latter case we observe the annihilation of all four solitons.

The extreme values of the energy densities and of the field gradient are presented in table <ref>. Recall that ε_max, k_max, p_max, and u_max are the maximal densities of the total, kinetic, potential, and gradient energy, respectively, while e_min and e_max are the minimal and maximal values of the field gradient. The values in the first line of table <ref> were calculated for a single kink with the help of the analytic expressions (<ref>) and (<ref>) for the kink velocity v=0.1 (the same velocity as in the simulations of the soliton collisions). We see that the maximal energy density for a single kink is 0.15 and that it is nearly equally shared between the potential and the gradient energy, while the maximal kinetic energy density is rather small at this velocity. A single kink produces a maximal tensile strain of 0.4, while the antikink yields a maximal compressive (negative) strain of the same magnitude. In the KK̅ collisions, the maximal energy density is 0.25, and it is in the form of the kinetic energy density. The maximal (minimal) field gradient is the same as for a single kink (antikink).
Due to the asymmetry of the ϕ^6 kinks, the K̅K collisions produce a somewhat higher maximal total energy density of 0.37, also in the form of the kinetic energy density. The extreme values of the field gradient are the same as for a single kink (antikink). In the three-kink collisions the maximal energy density rises up to 0.75, which is five times larger than that of a single kink. The maximal and minimal values of the field gradient are 1.15 and -1.15, respectively, which is nearly three times larger in magnitude than for a single kink. In the four-kink collision KK̅KK̅ the maximal energy density is 6.3 times larger than in a single kink. Strikingly, in the case of the K̅KK̅K collision the maximal energy density is almost 10 times larger than in a single kink, and it is in the form of the kinetic energy. The extreme values of the field gradient in the four-kink collisions are roughly two times larger than in the case of a single kink. We thus conclude that very high energy density spots can be observed in multi-kink collisions. Notice that the maximal value of the potential energy density, p_max=0.12, in the process KK̅KK̅→K̅bK is observed in the bion oscillations after the collision of the kinks, see figure <ref>.

In conclusion, we emphasize that this work opens wide prospects for future research. In particular, it would be interesting to study multi-kink collisions within the ϕ^8 model <cit.>. Depending on the parameters of the model, the ϕ^8 kinks can have vibrational modes. These modes, in turn, can affect the energy redistribution in multi-kink collisions. Besides that, the kinks of the ϕ^8 model, for particular choices of the parameters, can have power-law asymptotics, which leads to a long-range interaction between a kink and an antikink. Therefore the multi-kink scattering can exhibit new interesting features.

We would also like to note that multi-kink collisions can produce quasiparticles moving with very high speeds. These quasiparticles of the “second generation” can, in turn, be forced to collide at the same point. The study of such processes can be a subject of future research.

§ ACKNOWLEDGMENTS

This research was supported by the MEPhI Academic Excellence Project (contract No. 02.a03.21.0005, 27.08.2013). S.V.D. thanks the Russian Science Foundation for financial support under grant No. 16-12-10175. D.S. is grateful for the partial financial support provided by the Russian Science Foundation under grant No. 14-13-00982.

vilenkin01 A. Vilenkin and E.P.S. Shellard, Cosmic Strings and Other Topological Defects, Cambridge University Press, Cambridge U.K. (2000).manton01 N. Manton and P. Sutcliffe, Topological Solitons, Cambridge University Press, Cambridge U.K. (2004).aek01 T.I. Belova and A.E. Kudryavtsev, Solitons and their interactions in classical field theory, https://doi.org/10.1070/PU1997v040n04ABEH000227Phys. Usp. 40 (1997) 359 [https://doi.org/10.3367/UFNr.0167.199704b.0377Usp. Fiz. Nauk 167 (1997) 377].nitta1 M. Nitta, Josephson vortices and the Atiyah-Manton construction, https://doi.org/10.1103/PhysRevD.86.125004Phys. Rev. D 86 (2012) 125004 [https://arxiv.org/abs/1207.6958arXiv:1207.6958].nitta2 M. Nitta, Correspondence between Skyrmions in 2+1 and 3+1 dimensions, https://doi.org/10.1103/PhysRevD.87.025013Phys. Rev. D 87 (2013) 025013 [https://arxiv.org/abs/1210.2233arXiv:1210.2233].kur01 E. Kurianovych and M. Shifman, Non-Abelian moduli on domain walls, https://doi.org/10.1142/S0217751X14501930Int. J. Mod. Phys.
A 29 (2014) 1450193 [https://arxiv.org/abs/1407.7144arXiv:1407.7144].blyankinshtein N. Blyankinshtein, Q-lumps on a domain wall with a spin-orbit interaction, https://doi.org/10.1103/PhysRevD.93.065030Phys. Rev. D 93 (2016) 065030 [https://arxiv.org/abs/1510.07935arXiv:1510.07935].kur02 V. Bychkov, M. Kreshchuk and E. Kurianovych, More about structures localized on domain walls: strings, skyrmions, analytic solutions for orientational moduli, symmetry analysis, https://arxiv.org/abs/1603.06310arXiv:1603.06310.nitta3 M. Nitta, Matryoshka Skyrmions, https://doi.org/10.1016/j.nuclphysb.2013.03.003Nucl. Phys. B 872 (2013) 62 [https://arxiv.org/abs/1211.4916arXiv:1211.4916].nitta4 M. Kobayashi and M. Nitta, Sine-Gordon kinks on a domain wall ring, https://doi.org/10.1103/PhysRevD.87.085003Phys. Rev. D 87 (2013) 085003 [https://arxiv.org/abs/1302.0989arXiv:1302.0989].jennings P. Jennings and P. Sutcliffe, The dynamics of domain wall Skyrmions, https://doi.org/10.1088/1751-8113/46/46/465401J. Phys. A 46 (2013) 465401 [https://arxiv.org/abs/1305.2869arXiv:1305.2869].nitta5 S.B. Gudnason and M. Nitta, Domain wall Skyrmions, https://doi.org/10.1103/PhysRevD.89.085022Phys. Rev. D 89 (2014) 085022 [https://arxiv.org/abs/1403.1245arXiv:1403.1245].GaLiRa V.A. Gani, M.A. Lizunova and R.V. Radomskiy Scalar triplet on a domain wall: an exact solution, https://doi.org/10.1007/JHEP04(2016)043JHEP 04 (2016) 043 [https://arxiv.org/abs/1601.07954arXiv:1601.07954].GaLiRaconf V.A. Gani, M.A. Lizunova and R.V. Radomskiy, Scalar triplet on a domain wall, https://doi.org/10.1088/1742-6596/675/1/012020J. Phys.: Conf. Ser. 675 (2016) 012020 [https://arxiv.org/abs/1602.04446arXiv:1602.04446].nitta6 M. Nitta, Non-Abelian sine-Gordon solitons, https://doi.org/10.1016/j.nuclphysb.2015.04.006Nucl. Phys. B 895 (2015) 288 [https://arxiv.org/abs/1412.8276arXiv:1412.8276].GaKiRu V.A. Gani, A.A. Kirillov and S.G. Rubin, Classical transitions with the topological number changing in the early Universe, https://arxiv.org/abs/1704.03688arXiv:1704.03688.lensky V.A. Lensky, V.A. Gani and A.E. Kudryavtsev, Domain walls carrying a U(1) charge, https://doi.org/10.1134/1.1420436Sov. Phys. JETP 93 (2001) 677 [Zh. Eksp. Teor. Fiz. 120 (2001) 778] [https://arxiv.org/abs/hep-th/0104266hep-th/0104266].GaKsKu01 V.A. Gani, V.G. Ksenzov and A.E. Kudryavtsev, Example of a self-consistent solution for a fermion on domain wall, https://doi.org/10.1134/S1063778810110104Phys. Atom. Nucl. 73 (2010) 1889 [Yad. Fiz. 73 (2010) 1940] [https://arxiv.org/abs/1001.3305arXiv:1001.3305].GaKsKu02 V.A. Gani, V.G. Ksenzov and A.E. Kudryavtsev, Stable branches of a solution for a fermion on domain wall, https://doi.org/10.1134/S1063778811050085Phys. Atom. Nucl. 74 (2011) 771 [Yad. Fiz. 74 (2011) 797] [https://arxiv.org/abs/1009.4370arXiv:1009.4370].Kudryavtsev1975 A.E. Kudryavtsev, Solitonlike solutions for a Higgs scalar field, http://www.jetpletters.ac.ru/ps/1522/article_23290.shtmlJETP Lett. 22 (1975) 82 [http://www.jetpletters.ac.ru/ps/528/article_8373.shtmlPis'ma v ZhETF 22 (1975) 178].Anninos1991 P. Anninos, S. Oliveira and R.A. Matzner, Fractal structure in the scalar λ(φ^2-1)^2 theory, https://doi.org/10.1103/PhysRevD.44.1147 Phys. Rev. D 44 (1991) 1147.Goodman2007 R.H. Goodman and R. Haberman, Chaotic scattering and the n-bounce resonance in solitary-wave interactions, https://doi.org/10.1103/PhysRevLett.98.104103 Phys. Rev. Lett. 98 (2007) 104103 [https://arxiv.org/abs/nlin/0702048nlin/0702048].Campbell1983 D.K. Campbell, J.F. Schonfeld and C.A. 
Wingate, Resonance structure in kink-antikink interactions in φ^4 theory, https://doi.org/10.1016/0167-2789(83)90289-0 Physica D 9 (1983) 1.Peyrard1983 M. Peyrard and D.K. Campbell, Kink-antikink interactions in a modified sine-Gordon model, https://doi.org/10.1016/0167-2789(83)90290-7 Physica D 9 (1983) 33.Campbell1986 D.K. Campbell, Solitary wave collisions revisited, https://doi.org/10.1016/0167-2789(86)90161-2 Physica D 18 (1986) 47.dorey P. Dorey, K. Mersh, T. Romanczukiewicz and Y. Shnir, Kink-Antikink Collisions in the ϕ^6 Model, https://doi.org/10.1103/PhysRevLett.107.091602Phys. Rev. Lett. 107 (2011) 091602 [https://arxiv.org/abs/1101.5951arXiv:1101.5951].GaKuLi V.A. Gani, A.E. Kudryavtsev and M.A. Lizunova, Kink interactions in the (1+1)-dimensional φ^6 model, https://doi.org/10.1103/PhysRevD.89.125009Phys. Rev. D 89 (2014) 125009 [https://arxiv.org/abs/1402.5903arXiv:1402.5903].GaKuPRE V.A. Gani and A.E. Kudryavtsev, Kink-antikink interactions in the double sine-Gordon equation and the problem of resonance frequencies, https://doi.org/10.1103/PhysRevE.60.3305Phys. Rev. E 60 (1999) 3305 [https://arxiv.org/abs/cond-mat/9809015cond-mat/9809015].oliveira01 T.S. Mendonça and H.P. de Oliveira, The collision of two-kinks defects, https://doi.org/10.1007/JHEP09(2015)120JHEP 09 (2015) 120 [https://arxiv.org/abs/1502.03870arXiv:1502.03870].oliveira02 T.S. Mendonça and H.P. de Oliveira, A note about a new class of two-kinks, https://doi.org/10.1007/JHEP06(2015)133JHEP 06 (2015) 133 [https://arxiv.org/abs/1504.07315arXiv:1504.07315].krusch01 S.W. Goatham, L.E. Mannering, R. Hann and S. Krusch, Dynamics of Multi-kinks in the Presence of Wells and Barriers, https://doi.org/10.5506/APhysPolB.42.2087Acta Phys. Polon. B 42 (2011) 2087 [https://arxiv.org/abs/1007.2641arXiv:1007.2641].saad01 D. Saadatmand, S.V. Dmitriev, D.I. Borisov and P.G. Kevrekidis, Interaction of sine-Gordon kinks and breathers with a parity-time-symmetric defect, https://doi.org/10.1103/PhysRevE.90.052902Phys. Rev. E 90 (2014) 052902 [https://arxiv.org/abs/1408.2358arXiv:1408.2358].saad02 D. Saadatmand et al., Effect of the ϕ^4 kink's internal mode at scattering on a PT-symmetric defect, http://www.jetpletters.ac.ru/ps/2075/article_31231.shtmlPisma Zh. Eksp. Teor. Fiz. 101 (2015) 550 [https://doi.org/10.1134/S0021364015070140JETP Lett. 101 (2015) 497].saad03 D. Saadatmand et al., Kink scattering from a parity-time-symmetric defect in the ϕ^4 model, https://doi.org/10.1016/j.cnsns.2015.05.012Commun. Nonlinear Sci. Numer. Simulat. 29 (2015) 267 [https://arxiv.org/abs/1411.5857arXiv:1411.5857].rad1 S.V. Dmitriev, Y.S. Kivshar and T. Shigenari, Fractal structures and multiparticle effects in soliton scattering, https://doi.org/10.1103/PhysRevE.64.056613 Phys. Rev. E 64 (2001) 056613.rad2 S.V. Dmitriev, P.G. Kevrekidis and Y.S. Kivshar, Radiationless energy exchange in three-soliton collisions, https://doi.org/10.1103/PhysRevE.78.046604Phys. Rev. E 78 (2008) 046604 [https://arxiv.org/abs/0806.1152arXiv:0806.1152].saad.arXiv.2016.08 A. Askari, D. Saadatmand, S.V. Dmitriev and K. Javidan, High energy density spots and production of kink-antikink pairs in particle collisions, https://arxiv.org/abs/1608.01847arXiv:1608.01847.Deformed F.C. Simas, A.R. Gomes, K.Z. Nobrega and J.C.R.E. Oliveira, Suppression of two-bounce windows in kink-antikink collisions, https://doi.org/10.1007/JHEP09(2016)104JHEP 09 (2016) 104 [https://arxiv.org/abs/1605.05344arXiv:1605.05344].Ahlqvist:2014uha P. Ahlqvist, K. Eckerle and B. 
Greene, Kink Collisions in Curved Field Space, https://doi.org/10.1007/JHEP04(2015)059JHEP 04 (2015) 059 [https://arxiv.org/abs/1411.4631arXiv:1411.4631].Mohammadi M. Mohammadi and N. Riazi, Bi-dimensional soliton-like solutions of the nonlinear complex sine-Gordon system, https://doi.org/10.1093/ptep/ptu002Prog. Theor. Exp. Phys. (2014) 023A03.weigel01 H. Weigel, Kink-Antikink Scattering in φ^4 and ϕ^6 Models, https://doi.org/10.1088/1742-6596/482/1/012045J. Phys.: Conf. Ser. 482 (2014) 012045 [https://arxiv.org/abs/1309.6607arXiv:1309.6607].weigel02 I. Takyi and H. Weigel, Collective coordinates in one-dimensional soliton models revisited, https://doi.org/10.1103/PhysRevD.94.085008Phys. Rev. D 94 (2016) 085008 [https://arxiv.org/abs/1609.06833arXiv:1609.06833].baron01 H.E. Baron, G. Luchini and W.J. Zakrzewski, Collective coordinate approximation to the scattering of solitons in the (1+1) dimensional NLS model, https://doi.org/10.1088/1751-8113/47/26/265201J. Phys. A: Math. and Theor. 47 (2014) 265201 [https://arxiv.org/abs/1308.4072arXiv:1308.4072].javidan K. Javidan, Collective coordinate variable for soliton-potential system in sine-Gordon model, https://doi.org/10.1063/1.3511337J. Math. Phys. 51 (2010) 112902 [https://arxiv.org/abs/0910.3058arXiv:0910.3058].christov01 I. Christov and C.I. Christov, Physical dynamics of quasi-particles in nonlinear wave equations, https://doi.org/10.1016/j.physleta.2007.08.038Phys. Lett. A 372 (2008) 841 [https://arxiv.org/abs/nlin/0612005nlin/0612005].GaKu V.A. Gani and A.E. Kudryavtsev, Collisions of domain walls in a supersymmetric model, https://doi.org/10.1134/1.1423755Phys. Atom. Nucl. 64 (2001) 2043 [Yad. Fiz. 64 (2001) 2130] [https://arxiv.org/abs/hep-th/9904209hep-th/9904209, https://arxiv.org/abs/hep-th/9912211hep-th/9912211].manton_npb N.S. Manton, An effective Lagrangian for solitons, https://doi.org/10.1016/0550-3213(79)90309-2Nucl. Phys. B 150 (1979) 397.kks04 P.G. Kevrekidis, A. Khare and A. Saxena, Solitary wave interactions in dispersive equations using Manton's approach, https://doi.org/10.1103/PhysRevE.70.057603Phys. Rev. E 70 (2004) 057603 [https://arxiv.org/abs/nlin/0410045arXiv:nlin/0410045].Radomskiy R.V. Radomskiy, E.V. Mrozovskaya, V.A. Gani and I.C. Christov, Topological defects with power-law tails, https://doi.org/10.1088/1742-6596/798/1/012087J. Phys.: Conf. Ser. 798 (2017) 012087 [https://arxiv.org/abs/1611.05634arXiv:1611.05634].Aliakbar A. Moradi Marjaneh, D. Saadatmand, Kun Zhou, S.V. Dmitriev and M.E. Zomorrodian, High energy density in the collision of N kinks in the ϕ^4 model, https://doi.org/10.1016/j.cnsns.2017.01.022Commun. Nonlinear Sci. Numer. Simulat. 49 (2017) 30 [https://arxiv.org/abs/1605.09767arXiv:1605.09767].saad.prd.2015 D. Saadatmand, S.V. Dmitriev and P.G. Kevrekidis, High energy density in multi-soliton collisions, https://doi.org/10.1103/PhysRevD.92.056005Phys. Rev. D 92 (2015) 056005 [https://arxiv.org/abs/1506.01389arXiv:1506.01389].lohe M.A. Lohe, Soliton structures in P(φ)_2, https://doi.org/10.1103/PhysRevD.20.3120Phys. Rev. D 20 (1979) 3120.khare A. Khare, I.C. Christov and A. Saxena, Successive phase transitions and kink solutions in ϕ^8, ϕ^10, and ϕ^12 field theories, https://doi.org/10.1103/PhysRevE.90.023208Phys. Rev. E 90 (2014) 023208 [https://arxiv.org/abs/1402.6766arXiv:1402.6766].GaLeLi V.A. Gani, V. Lensky and M.A. 
Lizunova, Kink excitation spectra in the (1+1)-dimensional φ^8 model, https://doi.org/10.1007/JHEP08(2015)147JHEP 08 (2015) 147 [https://arxiv.org/abs/1506.02313arXiv:1506.02313].GaLeLiconf V.A. Gani, V. Lensky, M.A. Lizunova and E.V. Mrozovskaya, Excitation spectra of solitary waves in scalar field models with polynomial self-interaction, https://doi.org/10.1088/1742-6596/675/1/012019J. Phys.: Conf. Ser. 675 (2016) 012019[https://arxiv.org/abs/1602.02636arXiv:1602.02636].
http://arxiv.org/abs/1704.08353v2
{ "authors": [ "Aliakbar Moradi Marjaneh", "Vakhid A. Gani", "Danial Saadatmand", "Sergey V. Dmitriev", "Kurosh Javidan" ], "categories": [ "hep-th", "cond-mat.mtrl-sci", "math-ph", "math.MP", "nlin.PS" ], "primary_category": "hep-th", "published": "20170426211858", "title": "Multi-kink collisions in the $φ^6$ model" }
Basic Properties of Singular Fractional Order Systems with Order (1,2)
Xiaogang Zhu, Jie Xu and Junguo Lu. Jie Xu and Junguo Lu are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China. Received; accepted.
This paper focuses on some properties, which include regularity, impulse, stability, admissibility and robust admissibility, of singular fractional order systems (SFOS) with fractional order 1<α<2. The definitions of regularity, impulse-freeness, stability and admissibility are given in the paper. Regularity is analysed in the time domain, and the analysis of impulse-freeness is based on the state response. A sufficient and necessary condition for stability is established. Three different sufficient and necessary conditions for admissibility are proved. Then, this paper shows how to obtain the numerical solution of an SFOS in the time domain. Finally, a numerical example is provided to illustrate the proposed conditions.
§ INTRODUCTION Fractional order systems can describe real physical systems better than integer order systems, because real objects are generally of fractional order. Many systems have been studied via fractional order models, such as wavelet transforms <cit.>, viscoelastic systems <cit.> and others (<cit.>). Singular systems have been widely studied in many fields (<cit.>) because singular systems can describe real physical systems more directly than regular ones. However, very little research has been devoted to singular fractional order systems (SFOS), and most of it concerns stability. In <cit.>, a sufficient and necessary condition for regularity is given; based on regularity and impulse-freeness, that paper also gives a sufficient condition for stability. Sufficient and necessary conditions for regularity and admissibility with fractional order 0<α<1 are given in <cit.>, respectively. Some other papers study the stability of SFOS via linear matrix inequalities (LMI) (<cit.>), and some study the stability of SFOS by transforming the SFOS into normal systems (<cit.>). However, none of them proves regularity, impulse-freeness and stability in the time domain, where these properties can be established more directly. Moreover, to the best of our knowledge, there exists no research on impulse-freeness and admissibility with fractional order 1<α<2. Therefore, in this paper we give sufficient and necessary conditions of regularity, impulse-freeness, stability and admissibility for SFOS with fractional order 1<α<2, respectively.
This paper is organized as follows. In section II, the definitions of Caputo's fractional derivative and of the SFOS are recalled, and some useful lemmas are provided. In section III, regularity and impulse are analysed in the time domain. In section IV, sufficient and necessary conditions of stability and admissibility are proved, respectively. In section V, sufficient conditions of robust admissibility are presented. In section VI, the numerical solution and a numerical example are illustrated. Finally, conclusions are given in section VII.
For a matrix A, its transpose and complex conjugate transpose are denoted by A^T and A^∗, respectively. ℂ_- = {s | s∈ℂ, Re(s)<0}. Sym(A) denotes A+A^∗. The pair (E_I,A_I) denotes the autonomous singular integer order system (SIOS) E_I ẋ(t)=A_I x(t). The triplet (E,A,α) denotes the autonomous SFOS E D^α x(t)=A x(t).
The notation ∙ stands for the symmetric component in a matrix.
§ PRELIMINARIES
In this paper, we use Caputo's fractional derivative, whose Laplace transform allows the utilization of initial values. Caputo's fractional derivative is defined as <cit.>
_aD_t^α f(t) = (1/Γ(n-α)) ∫_a^t f^(n)(τ)/(t-τ)^{α+1-n} dτ
where n is an integer satisfying 0 ≤ n-1 < α < n; Γ(·) is the Gamma function, which is defined as
Γ(z) = ∫_0^∞ e^{-t} t^{z-1} dt
In the remainder of the paper, _aD_t^α is denoted by D^α. A two-parameter function of the Mittag-Leffler type is defined as <cit.>
E_{α,β}(z) = ∑_{k=0}^{∞} z^k/Γ(αk+β)
where α>0, β>0. And δ^{(-β)}(t) means
δ^{(-β)}(t) = t^{β-1}/Γ(β) for t>0, and δ^{(-β)}(t) = 0 for t<0, with β∈ℝ,
whose Laplace transform is L[δ^{(-α)}(t)] = s^{-α}, Re(s)>0.
Consider the singular fractional order system (SFOS)
E D^α x(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
where x(t)∈ℝ^n is the state of the system composed of state variables; u(t)∈ℝ^p is the control input; y(t)∈ℝ^q is the measured output; E,A∈ℝ^{n×n}; B,C,D are constant matrices with appropriate dimensions; D^α represents the Caputo fractional derivative; α is the order of the SFOS and 1<α<2. The set of finite eigenvalues of the SFOS is
λ(E,A) = {s | s∈ℂ, |s|<∞, det(sE-A)=0}
The finite pole set for the system is
σ(E,A) = {s | s∈ℂ, |s|<∞, det(s^α E-A)=0, 0<α<2}
and σ(I,A) will be specified as σ(A). Obviously, λ(E,A)=σ^α(E,A). The following lemmas and definitions will be useful.
Lemma <cit.>: For any two matrices E,A∈ℝ^{m×n}, there always exist two nonsingular matrices Q,P such that
Ẽ ≜ QEP = diag(0, L_1, L_2, ..., L_p, L_1', L_2', ..., L_q', I, N)
Ã ≜ QAP = diag(0, J_1, J_2, ..., J_p, J_1', J_2', ..., J_q', A_1, I)
where 0∈ℝ^{m_0×n_0}, A_1∈ℝ^{h×h}; L_i = [I_{m_i}, 0] and J_i = [0, I_{m_i}] ∈ ℝ^{m_i×(m_i+1)}, i=1,2,...,p (each row of L_i carries the pattern "1 0", each row of J_i the pattern "0 1"); L_j' = [I_{n_j}; 0] and J_j' = [0; I_{n_j}] ∈ ℝ^{(n_j+1)×n_j}, j=1,2,...,q; N = diag(N_{k_1}, N_{k_2}, ..., N_{k_r}) ∈ ℝ^{g×g}, where N_{k_s} ∈ ℝ^{k_s×k_s} (s=1,2,...,r) is the nilpotent Jordan block with ones on the superdiagonal and zeros elsewhere; and
m_0 + ∑_i m_i + ∑_j (n_j+1) + ∑_s k_s + h = m,
n_0 + ∑_j n_j + ∑_i (m_i+1) + ∑_s k_s + h = n,
∑_s k_s = g.
Consider the following initial-value problem:
_0D_t^{σ_n} y(t) + ∑_{j=1}^{n-1} p_j(t) _0D_t^{σ_{n-j}} y(t) + p_n(t) y(t) = f(t), (0<t<T<∞)
[_0D_t^{σ_{k-1}} y(t)]_{t=0} = b_k, k=1,2,...,n
where
_aD_t^{σ_k} ≡ _aD_t^{α_k} _aD_t^{α_{k-1}} ... _aD_t^{α_1},
_aD_t^{σ_{k-1}} ≡ _aD_t^{α_{k-1}} ... _aD_t^{α_1},
σ_k = ∑_{j=1}^{k} α_j, (k=1,2,...,n), 0<α_j≤1, (j=1,2,...,n),
and f(t)∈L_1(0,T), i.e. ∫_0^T |f(t)| dt < ∞.
Lemma <cit.>: If f(t)∈L_1(0,T), and p_j(t) (j=1,2,...,n) are continuous functions in the closed interval [0,T], then the initial-value problem (<ref>)-(<ref>) has a unique solution y(t)∈L_1(0,T).
Definition <cit.>: A subset 𝒟 of the complex plane is called an LMI region if there exist a symmetric matrix Φ∈ℝ^{d×d} and a matrix Ψ∈ℝ^{d×d} such that 𝒟 = {z∈ℂ | f_𝒟(z)<0}, where f_𝒟(z) = Φ + zΨ + z̄Ψ^T and "<" stands for negative definite. When Φ=0, the LMI region is denoted by 𝒟_Γ.
Definition <cit.>: If all the eigenvalues of A∈ℝ^{n×n} take values in a region 𝒟, i.e. λ(A)⊂𝒟, then A is called 𝒟-stable.
Lemma <cit.>: Matrix A is 𝒟-stable if and only if there exists a symmetric real matrix X>0 such that M_𝒟(A,X) = Φ⊗X + Ψ⊗(AX) + Ψ^T⊗(AX)^T < 0.
Lemma <cit.>: System D^α x(t) = Ax(t) + Bu(t) with fractional order 1<α<2 is asymptotically stable if and only if there exists a matrix P>0, P∈ℝ^{n×n}, such that Sym{Θ⊗(AP)} < 0, where Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ].
Lemma <cit.>: Let X,Y,Λ be real matrices of suitable dimensions with Λ>0. Then X^TY + Y^TX ≤ X^TΛX + Y^TΛ^{-1}Y.
Definition: For the system (E,A,α), the infinite eigenvectors υ, which are related to the eigenvalue 0, are defined as follows: (1) the infinite eigenvector of order 1 satisfies Eυ^1 = 0, with the convention υ^0 = 0; (2) the infinite eigenvector of order k satisfies Eυ^k = Aυ^{k-1}, k>1.
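The chain of infinite eigenvectors can be computed numerically: υ^1 spans the null space of E, and an order-2 eigenvector exists only if Eυ^2 = Aυ^1 is a consistent linear system. The following minimal Python sketch (the function name and the tolerance are ours) performs this check for the pencil used in the numerical example of Section VI, where the chain stops at order 1.

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def order2_infinite_eigenvector(E, A, tol=1e-10):
    """Return (v1, v2) if a chain E v2 = A v1, E v1 = 0 exists, else (v1, None)."""
    V1 = null_space(E)                        # basis of order-1 infinite eigenvectors
    for v1 in V1.T:
        rhs = A @ v1
        v2, _, _, _ = lstsq(E, rhs)           # least-squares candidate for v2
        if np.linalg.norm(E @ v2 - rhs) < tol:  # consistent => order-2 chain exists
            return v1, v2
    return (V1[:, 0] if V1.size else None), None

E = np.array([[1., 0., 0.], [1., 1., -1.], [0., 0., 0.]])
A = np.array([[-1., 0., -1.], [0., -2., 0.], [0., -1., -1.]])
v1, v2 = order2_infinite_eigenvector(E, A)
print(v1)  # spans ker(E), proportional to (0, 1, 1)
print(v2)  # None: E v2 = A v1 is inconsistent, so no order-2 eigenvector
```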
The infinite eigenvectors are not related to the index α, which implies that they may have the same properties as the infinite eigenvectors of an SIOS.
§ SOLUTION OF SFOS
§.§ Regularity of SFOS
The sufficient and necessary condition of regularity for SFOS has already been given in <cit.>. But the system in <cit.> is a linear SFOS, and the fractional order in <cit.> is 0<α<1. In the following, a different definition of regularity is proposed, and based on this definition we give a sufficient and necessary condition of regularity for a nonlinear SFOS with fractional order 1<α<2. Let Bu(t)=g(t); then the system (<ref>) can be rewritten as
E D^α x(t) = A x(t) + g(t)
where g(t) is nonlinear and assumed to be sufficiently differentiable; 1<α<2. We will focus on the existence and uniqueness of solutions of (<ref>).
Definition: If an SFOS has a unique solution, then the system is termed regular.
Theorem: System (<ref>) is regular if and only if two nonsingular matrices Q and P may be chosen such that
QEP = diag(I_{n_1}, N), QAP = diag(A_1, I_{n_2})
where A_1∈ℝ^{n_1×n_1}; N∈ℝ^{n_2×n_2} is nilpotent; n_1+n_2=n.
Proof: According to lemma <ref>, let x(t) = P x̃(t) and left multiply system (<ref>) by a nonsingular Q. Letting g̃(t) = Qg(t), we get
0 · D^α x_{n_0}(t) = g_{m_0}(t)
L_i D^α x_{m_i+1}(t) = J_i x_{m_i+1}(t) + g_{m_i}(t), i=1,2,...,p
L_j' D^α x_{n_j}(t) = J_j' x_{n_j}(t) + g_{n_j+1}(t), j=1,2,...,q
N_{k_s} D^α x_{k_s}(t) = x_{k_s}(t) + g_{k_s}(t), s=1,2,...,r
D^α x_h(t) = A_1 x_h(t) + g_h(t)
where L_i, L_j', J_i, J_j' are defined in (<ref>) and x_k(t)∈ℝ^k, g_k(t)∈ℝ^k,
x̃^T(t) = [x_{n_0}^T, x_{m_1+1}^T, ⋯, x_{m_p+1}^T, x_{n_1}^T, ⋯, x_{n_q}^T, x_{k_1}^T, ⋯, x_{k_r}^T, x_h^T]
g̃^T(t) = [g_{m_0}^T, g_{m_1}^T, ⋯, g_{m_p}^T, g_{n_1+1}^T, ⋯, g_{n_q+1}^T, g_{k_1}^T, ⋯, g_{k_r}^T, g_h^T]
System (<ref>)-(<ref>) is equivalent to system (<ref>); thus we focus on the existence and uniqueness of solutions of system (<ref>)-(<ref>).
(1) If equation (<ref>) can be solved, then g_{m_0}(t)=0 must hold. In this case, equation (<ref>) is always true. Therefore, this equation has either no solution or an infinite number of solutions.
(2) Equation (<ref>) is composed of a set of equations
D^α z_1(t) = z_2(t) + g_1(t)
D^α z_2(t) = z_3(t) + g_2(t)
⋯
D^α z_{k-1}(t) = z_k(t) + g_{k-1}(t)
According to lemma <ref>, for a given z_k(t), the components z_1(t), z_2(t), ..., z_{k-1}(t) can be determined successively. Therefore, such equations have an infinite number of solutions.
(3) Rewrite equation (<ref>) as
D^α z_1(t) = g_1(t)
D^α z_2(t) = z_1(t) + g_2(t)
⋯
D^α z_k(t) = z_{k-1}(t) + g_k(t)
0 = z_k(t) + g_{k+1}(t)
Except for the last equation, z_1(t), z_2(t), ..., z_k(t) can be determined uniquely according to lemma <ref>. However, z_k(t) must also satisfy the last equation, which means these equations have no solution unless g_{k+1}(t) satisfies the consistency condition z_k(t) + g_{k+1}(t) = 0.
(4) Expand equation (<ref>) into the following form
D^α z_2(t) = z_1(t) + g_1(t)
D^α z_3(t) = z_2(t) + g_2(t)
⋯
D^α z_k(t) = z_{k-1}(t) + g_{k-1}(t)
0 = z_k(t) + g_k(t)
Beginning with the last equation, z_k(t), z_{k-1}(t), ..., z_1(t) may be determined successively for sufficiently differentiable functions g_i(t) (i=1,2,...,k). Therefore, equation (<ref>) has a unique solution.
(5) Equation (<ref>) is an ordinary fractional order differential equation, which has a unique solution since g(t) is sufficiently differentiable.
To sum up, the system (<ref>) has a solution, and the solution is unique, if and only if two nonsingular matrices Q and P may be chosen to satisfy QEP = diag(I_{n_1}, N), QAP = diag(A_1, I_{n_2}), where N = diag(N_{k_1}, N_{k_2}, ..., N_{k_r}). The theorem is proved.
§.§ State response and impulse analysis
To the best of our knowledge, there exists no research which gives the entire state response of an SFOS.
The following gives the entire state response, based on which we give a sufficient and necessary condition of impulse-freeness for SFOS. Consider the regular SFOS
E D^α x(t) = A x(t) + B u(t)
where E∈ℝ^{n×n}, 1<α<2, with the initial condition x(0)=x_0, t≥0.
Theorem: When t≥0, the state response of SFOS (<ref>) is
x(t) = P [ x_1(t); x_2(t) ]
where
x_1(t) = E_{α,1}(A_1 t^α) x_{10} + ∫_{t_0}^{t} (t-τ)^{α-1} E_{α,α}(A_1(t-τ)^α) B_1 u(τ) dτ
x_2(t) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t) x_{20} + δ^{(kα-2)}(t) x_{20}^{(1)}) - ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(t) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(t))
x_1(t)∈ℝ^{n_1}, x_2(t)∈ℝ^{n_2}, n_1+n_2=n; the initial conditions are x_1(0)=x_{10}, x_2(0)=x_{20}, ẋ_2(0)=x_{20}^{(1)}; N∈ℝ^{n_2×n_2} is nilpotent and its nilpotency index is denoted by h; u(t) is h times piecewise continuously differentiable, with initial conditions u^{(j)}(0)=u_0^{(j)}; m is an integer with m-1<kα≤m. E_{α,β} is the two-parameter function of the Mittag-Leffler type. P satisfies Theorem <ref>. When t>0 and the initial condition
x(0_+) = P [I; 0] x_{10} - P [0; I] ∑_{k=1}^{h-1} M^{-1} N^k δ^{(kα-2)}(0_+) x_{20}^{(1)} - P [0; I] ∑_{k=0}^{h-1} M^{-1} N^k B_2 (D^{kα} u(0_+) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(0_+))
is satisfied, then the solution (<ref>) of system (<ref>) is unique, where M = I + ∑_{k=1}^{h-1} N^k δ^{(kα-1)}(0_+).
Proof: Since the system is regular, two nonsingular matrices P,Q may be chosen such that system (<ref>) is equivalent to
D^α x_1(t) = A_1 x_1(t) + B_1 u(t)
N D^α x_2(t) = x_2(t) + B_2 u(t)
where QB = [B_1^T, B_2^T]^T. Subsystems (<ref>) and (<ref>) are termed the finite subsystem and the infinite subsystem, respectively, in analogy with the SIOS case. The finite subsystem (<ref>) is a normal fractional order system. For a piecewise continuously differentiable input u(t), the state response of subsystem (<ref>) is
x_1(t) = E_{α,1}(A_1 t^α) x_{10} + ∫_{t_0}^{t} (t-τ)^{α-1} E_{α,α}(A_1(t-τ)^α) B_1 u(τ) dτ
For the infinite subsystem (<ref>), invoking the Laplace transform and (sN-I)^{-1} = -∑_{k=0}^{h-1} s^k N^k,
X_2(s) = (s^α N - I)^{-1} [N(s^{α-1} x_{20} + s^{α-2} x_{20}^{(1)}) + B_2 U(s)] = -∑_{k=1}^{h-1} N^k (s^{kα-1} x_{20} + s^{kα-2} x_{20}^{(1)}) - ∑_{k=0}^{h-1} s^{kα} N^k B_2 U(s)
where X_2(s)=ℒ[x_2(t)], U(s)=ℒ[u(t)]. For a piecewise continuously differentiable input u(t), by taking the inverse Laplace transform of (<ref>), we get
x_2(t) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t) x_{20} + δ^{(kα-2)}(t) x_{20}^{(1)}) - ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(t) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(t))
Now, we obtain the state response (<ref>) from equations (<ref>) and (<ref>). For arbitrary initial conditions, some of them may not satisfy solution (<ref>) at t=0, which leads to discontinuous behavior at t=0. Since discontinuous behavior is not desirable, the set of x(0) which does not result in discontinuous behavior at t=0 is called the set of admissible initial conditions <cit.>. The following analyses the admissible initial conditions of system (<ref>). With t→0_+, equation (<ref>) turns into
x_2(0_+) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(0_+) x_{20} + δ^{(kα-2)}(0_+) x_{20}^{(1)}) - ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(0_+) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(0_+))
i.e.
[I + ∑_{k=1}^{h-1} N^k δ^{(kα-1)}(0_+)] x_{20} = -∑_{k=1}^{h-1} N^k δ^{(kα-2)}(0_+) x_{20}^{(1)} - ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(0_+) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(0_+))
Let M = I + ∑_{k=1}^{h-1} N^k δ^{(kα-1)}(0_+). Each N^k (k≥1) is an upper triangular matrix whose main-diagonal elements are all zero, since N is nilpotent. On the other hand, if (kα-1) is not an integer, then δ^{(kα-1)}(0_+) is nonzero; it is a very large but finite number.
Therefore, the matrix M is invertible, and the admissible initial condition of system (<ref>) is
x_{20} = -M^{-1} (∑_{k=1}^{h-1} N^k δ^{(kα-2)}(0_+) x_{20}^{(1)} + ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(0_+) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(0_+)))
The theorem is proved.
Definition: For arbitrary initial conditions, if the state response of an SFOS does not include an impulsive response, then the system is termed impulse-free.
Obviously, the state response of SFOS (<ref>) is similar to the state response of an SIOS. x_1(t) is the state response of the finite subsystem, represented by Mittag-Leffler functions; x_2(t) is the state response of the infinite subsystem, composed of impulse functions and input terms. Based on the state response (<ref>), the following analyses the impulsive behavior of system (<ref>). Because the substate x_1(t) is continuous, we focus on the substate x_2(t).
(1) If t=0. Without loss of generality, let u(t)=0. If x_2(0)≠0 and x_2(0)∉ker(N), there holds
x_2(t) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t) x_{20} + δ^{(kα-2)}(t) x_{20}^{(1)})
If t→0, then δ^{(β)}(t)→∞ (β>0). Thus, an x_{20} which does not satisfy the admissible initial condition may result in an impulse.
(2) If t>0. x_2(t) can be represented as
x_2(t) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t) x_{20} + δ^{(kα-2)}(t) x_{20}^{(1)}) - ∑_{k=0}^{h-1} N^k B_2 (D^{kα} u(t) + ∑_{j=0}^{m-1} u^{(j)}(0) δ^{(kα-j-1)}(t))
For δ^{(b)}, if b is a positive integer, the support set of δ^{(b)}(t) is {0}, which means δ^{(b)}(t)=0 for t>0; if b is not a positive integer, the support set of δ^{(b)}(t) is not {0}, which means δ^{(b)}(t)≠0 when t>0. Therefore, if N≠0, then x_{20}, x_{20}^{(1)} and u^{(j)}(0) participate in the dynamic process of x_2(t) and the state response of the SFOS includes an impulsive response.
Remark: The x_{20} and u^{(j)}(0) of an SFOS participate in the dynamic process of the substate x_2(t), which is very different from the SIOS case.
Remark: By the final value theorem of fractional order systems <cit.>,
D^{α-1} x(∞) = lim_{s→0} s^α X(s), Re(s)>0
Therefore, we get δ^{(β)}(t)→0 when t→∞. This implies that the terms including δ^{(β)}(t) on the right side of equation (<ref>) do not impact the stability of the infinite subsystem (<ref>), which is convenient when analysing the stability of the SFOS.
Remark: From time 0 to time t, the input u(t) always has an influence on the state x_2(t) because of the properties of the Caputo fractional derivative. Thus, a change of u(t) cannot be reflected immediately by the substate x_2(t) at time t, and jump behavior will not appear in the state response. That the input u(t) does not give rise to jump behavior of x_2(t) is also very different from the SIOS case.
To sum up, we get the following theorem.
Theorem: For arbitrary initial conditions, the regular SFOS (<ref>) is impulse-free if and only if N=0, where N comes from the decomposition
QEP = diag(I_{n_1}, N), QAP = diag(A_1, I_{n_2})
Similarly to <cit.>, the following gives another condition for impulse-freeness.
Theorem: The following statements are equivalent: (1) the regular system (E,A,α) is impulse-free; (2) whenever vectors υ∈ℝ^n and ω∈ℝ^n satisfy Eυ=0 and Aυ=Eω, then υ=0.
Proof: According to Theorem <ref>, the regular system (E,A,α) has the decomposition QEP=diag(I,N), QAP=diag(A_1,I). Thus,
Eυ=0, Aυ=Eω ⇔ QEP P^{-1}υ = 0, QAP P^{-1}υ = QEP P^{-1}ω
⇔ [I, 0; 0, N][υ_1; υ_2] = 0 and [A_1, 0; 0, I][υ_1; υ_2] = [I, 0; 0, N][ω_1; ω_2]
⇔ υ_1 = 0, Nυ_2 = 0, A_1υ_1 = ω_1, υ_2 = Nω_2
Since ω_2 is arbitrary, υ_2=0 holds for all admissible ω_2 if and only if N=0, which is the sufficient and necessary condition for impulse-freeness. Because P is nonsingular, we conclude that υ=0 if and only if N=0. This ends the proof.
Corollary: The regular SFOS (<ref>) is impulse-free if and only if there exists no infinite eigenvector of order 2, i.e.
υ^2. Proof: According to Lemma <ref>, the sufficient and necessary condition of impulse-freeness is that Eυ^1=0 and Eυ^2=Aυ^1 imply υ^1=0, which means that an infinite eigenvector of order 2 does not exist. This ends the proof.
§ STABILITY AND ADMISSIBILITY ANALYSIS
§.§ Asymptotic Stability
Stability is very important in control theory. In <cit.>, a sufficient condition of asymptotic stability is given, but the condition requires impulse-freeness. Meanwhile, <cit.> also gives a sufficient condition of asymptotic stability, but its fractional order is 0<α<1. The following gives a sufficient and necessary condition of asymptotic stability with fractional order 1<α<2, which is simpler than the condition in <cit.>. Consider the autonomous regular SFOS
E D^α x(t) = A x(t)
where x(t)∈ℝ^n, 1<α<2. The following will analyse the asymptotic stability (stability for short) of the SFOS. Firstly, the definition of the stability of an SFOS is given as follows.
Definition: For arbitrary admissible initial conditions x(0), if the regular SFOS (<ref>) satisfies lim_{t→+∞} ‖x(t)‖ = 0, then the SFOS (<ref>) is called asymptotically stable.
The characteristic polynomial of system (<ref>) is
Δ(s) = det(s^α E - A) = a_{n_1} (s^α)^{n_1} + ⋯ + a_1 s^α + a_0
It is obvious that the polynomial Δ(s) is a multivalued function of s, whose fractional degree is n_1 (n_1≤n). Let s^α=ω; then Δ(s) turns into the single-valued function Δ̃(ω) = det(ωE-A). Δ(s) has many roots, but only the roots on the principal Riemann surface Ω = {s | -π ≤ arg(s) < π} determine the time-domain behavior and stability of a fractional system (<cit.>). Therefore, the physical domain of Δ(s) is defined on the principal Riemann surface, and the finite roots of Δ(s) on the principal Riemann surface Ω are defined as the finite roots of SFOS (<ref>).
Lemma <cit.>: The fractional order system D^α x(t) = Ax(t) + Bu(t) with 1<α<2 is asymptotically stable if and only if |arg(spec(A))| > απ/2, where spec(A) is the spectrum (set of all eigenvalues) of A. Moreover, the state vector x(t) decays towards 0 and satisfies ‖x(t)‖ < K t^{-α}, t>0, K>0.
Theorem: SFOS (<ref>) is asymptotically stable if and only if
|arg(spec(E,A))| > απ/2
where spec(E,A) is the spectrum (set of all finite eigenvalues) of (E,A,α).
Proof: Because the system (<ref>) is regular, two nonsingular matrices Q,P may be chosen such that system (<ref>) is equivalent to
D^α x_1(t) = A_1 x_1(t)
N D^α x_2(t) = x_2(t)
where [x_1^T(t), x_2^T(t)]^T = P^{-1}x(t), QEP = diag(I_{n_1}, N), QAP = diag(A_1, I_{n-n_1}). According to Theorem <ref>, the state response of system (<ref>) is
x_1(t) = E_{α,1}(A_1 t^α) x_{10}
x_2(t) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t) x_{20} + δ^{(kα-2)}(t) x_{20}^{(1)})
According to lemma <ref>, the finite subsystem (<ref>) is stable if and only if |arg(spec(A_1))| > απ/2. For the infinite subsystem (<ref>), according to the final value theorem (<ref>), the state response x_2(t)→0 when t→+∞. Thus, the infinite subsystem is essentially stable. On the other hand, det(sN - I_{n-n_1}) = (-1)^{n-n_1} because N is nilpotent. Thus spec(N, I_{n-n_1}) = ∅ and
spec(E,A) = spec(QEP, QAP) = spec(diag(I_{n_1},N), diag(A_1,I_{n-n_1})) = spec(I_{n_1},A_1) ∪ spec(N,I_{n-n_1}) = spec(A_1) ∪ ∅ = spec(A_1)
i.e. spec(E,A) = spec(A_1). The theorem is proved.
§.§ Admissibility
In <cit.>, a sufficient and necessary condition of admissibility with fractional order 0<α<1 is given. The following gives a sufficient and necessary condition of admissibility with fractional order 1<α<2. Similarly to the admissibility of an SIOS, the following gives the definition of admissibility for an SFOS.
Definition: If an SFOS is regular, impulse-free and stable, then the SFOS is termed admissible.
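The spectral stability condition above is straightforward to check numerically. The following minimal Python sketch (the function name is ours) computes the finite spectrum of the pencil (E,A) via the generalized eigenvalue problem Aυ = λEυ, discards the infinite eigenvalues that a singular E produces, and tests |arg(λ)| > απ/2; the matrices are those of the numerical example in Section VI.

```python
import numpy as np
from scipy.linalg import eigvals

def is_stable(E, A, alpha):
    """Check |arg(spec(E, A))| > alpha*pi/2 on the finite spectrum."""
    lam = eigvals(A, E)              # A v = lambda E v; singular E gives inf entries
    finite = lam[np.isfinite(lam)]   # keep only the finite eigenvalues
    return bool(np.all(np.abs(np.angle(finite)) > alpha * np.pi / 2)), finite

E = np.array([[1., 0., 0.], [1., 1., -1.], [0., 0., 0.]])
A = np.array([[-1., 0., -1.], [0., -2., 0.], [0., -1., -1.]])
stable, lam = is_stable(E, A, alpha=1.8)
print(lam)     # finite eigenvalues, here {-2, -0.5}
print(stable)  # True: |arg(-2)| = pi > 0.9*pi
```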
From the above analysis, we know that the sufficient and necessary conditions of regularity, impulse-freeness and stability for an SFOS are only related to the matrices E, A and the fractional order α. Thus, the admissibility of an SFOS is only related to E, A and α. According to theorem <ref>, SFOS (<ref>) is stable if and only if all the finite eigenvalues of the SFOS belong to the region Λ = {λ∈ℂ | |arg(λ)| > πα/2}. When 1<α<2, Λ is an LMI region. Thus, we can analyse the admissibility of the system (E,A,α) via the 𝒟-stability theorem.
Definition: If the system (E,A,α) is regular, impulse-free, and all its finite eigenvalues belong to the region 𝒟, then the system (E,A,α) is termed 𝒟-admissible.
Theorem: Let rank(E)=r, and let E_0∈ℝ^{n×(n-r)} be of full column rank with E^TE_0=0. The SFOS is 𝒟-admissible if and only if there exist a symmetric positive definite matrix P∈ℝ^{n×n} and a matrix Q∈ℝ^{(n-r)×n} such that
M_s(E,A,P,Q) < 0
where
M_s(E,A,P,Q) = Φ⊗(E^TPE) + Sym{Ψ⊗(E^TPA) + I_d⊗(Q^TE_0^TA)}
Φ∈ℝ^{d×d} is a symmetric matrix, Ψ∈ℝ^{d×d}, and I_d is the d×d identity matrix.
Proof. Sufficiency. We prove impulse-freeness by contradiction. Assume that the SFOS is impulsive. According to Theorem <ref>, there exist vectors υ^1,υ^2∈ℝ^n with υ^1≠0 such that Eυ^2=Aυ^1 and Eυ^1=0. By left multiplying (I_d⊗υ^1)^T and right multiplying (I_d⊗υ^1) on (<ref>), we have
(I_d⊗υ^1)^T M_s(E,A,P,Q) (I_d⊗υ^1) < 0
i.e., using Eυ^1=0 and Aυ^1=Eυ^2,
I_d⊗[(υ^1)^T Q^T E_0^T E υ^2 + (υ^2)^T E^T E_0 Q υ^1] < 0
Because E^TE_0=0, the left-hand side is zero, so the inequality cannot hold. Thus, the system (E,A,α) is impulse-free.
Let λ be a finite eigenvalue of the system (E,A,α) and υ its eigenvector; then Aυ=λEυ and υ^∗A^T = λ̄ υ^∗E^T. From inequality (<ref>), we get
(I_d⊗υ)^∗ M_s(E,A,P,Q) (I_d⊗υ) = Φ⊗(υ^∗E^TPEυ) + Ψ⊗(λ υ^∗E^TPEυ) + Ψ^T⊗(λ̄ υ^∗E^TPEυ) = υ^∗E^TPEυ (Φ + λΨ + λ̄Ψ^T) = υ^∗E^TPEυ f_𝒟(λ) < 0
Because P>0 and Eυ≠0 for a finite eigenvalue (otherwise υ would be an infinite eigenvector), υ^∗E^TPEυ>0 and hence f_𝒟(λ)<0. According to definition (<ref>), we conclude that the system (E,A,α) is 𝒟-admissible.
Necessity. Because the system (E,A,α) is regular and impulse-free, there exist two nonsingular matrices M and N such that
MEN = [M_1; M_2] E [N_1, N_2] = [I_r, 0; 0, 0], MAN = [M_1; M_2] A [N_1, N_2] = [A_1, 0; 0, I_{n-r}]
where M_1∈ℝ^{r×n}, N_1∈ℝ^{n×r}. Obviously, M_2AN = [0, I_{n-r}] and M_2E = 0. Since the system (E,A,α) is 𝒟-admissible, we get λ(E,A)=λ(A_1), i.e. A_1 is 𝒟-stable. According to lemma <ref>, there exists a symmetric real matrix P_1>0 such that
Φ⊗P_1 + Ψ⊗(P_1A_1) + Ψ^T⊗(A_1^TP_1) < 0
Since this is a strict inequality, there must exist a sufficiently small ε>0 such that
Φ⊗P_1 + Ψ⊗(P_1A_1) + Ψ^T⊗(A_1^TP_1) + I_d⊗((ε/2) N_1^TN_1) < 0
i.e.
Φ⊗P_1 + Ψ⊗(P_1A_1) + Ψ^T⊗(A_1^TP_1) + [I_d⊗(ε N_1^TN_2)][I_d⊗(2ε N_2^TN_2)]^{-1}[I_d⊗(ε N_2^TN_1)] < 0
Invoking the Schur complement, inequality (<ref>) is equivalent to
[Φ⊗P_1 + Ψ⊗(P_1A_1) + Ψ^T⊗(A_1^TP_1), -I_d⊗(ε N_1^TN_2); ∙, -I_d⊗(2ε N_2^TN_2)] < 0
⇔ Φ⊗[P_1, 0; 0, 0] + Sym{Ψ⊗[P_1A_1, 0; 0, 0] + I_d⊗([0; I_{n-r}][-ε N_2^TN_1, -ε N_2^TN_2])} < 0
⇔ Φ⊗([I_r,0;0,0][P_1,0;0,I_{n-r}][I_r,0;0,0]) + Sym{Ψ⊗([I_r,0;0,0][P_1,0;0,I_{n-r}][A_1,0;0,I_{n-r}]) + I_d⊗([0; I_{n-r}](-ε N_2^T)N)} < 0
Utilizing (<ref>) and letting P̂ = [P_1, 0; 0, I_{n-r}] > 0, we have
Φ⊗(N^TE^TM^TP̂MEN) + Sym{Ψ⊗(N^TE^TM^TP̂MAN) + I_d⊗(N^TA^TM_2^T(-ε N_2^T)N)} < 0
Let M^TP̂M = P, M_2^T = E_0 and -ε N_2^T = Q, where E_0 has full column rank and E^TE_0 = (M_2E)^T = 0. Since N is nonsingular, we get
Φ⊗(E^TPE) + Ψ⊗(E^TPA) + Ψ^T⊗(A^TPE) + I_d⊗(A^TE_0Q + Q^TE_0^TA) < 0
The theorem is proved.
Remark: Condition (<ref>) is a strict linear matrix inequality. In order to analyse robustness problems of the SFOS conveniently, a nonstrict LMI condition is given as follows.
Lemma <cit.>: The SIOS (E_I,A_I) is 𝒟_Γ-admissible (the case Φ=0) if and only if there exists a matrix P∈ℝ^{n×n} such that
Sym{Ψ⊗(PA_I)} < 0
P E_I = E_I^T P^T ≥ 0
Note that conditions (<ref>) and (<ref>) do not require the system to be regular and impulse-free in advance; thus they can be used generally. For (E,A,α), replacing E_I, A_I by E, A respectively, lemma <ref> is also true, because the eigenvalue conditions for the SFOS and the SIOS are equivalent.
Theorem: (E,A,α) with 1<α<2 is admissible if and only if there exist matrices P=P^T>0, P∈ℝ^{n×n}, and Q=Q^T, Q∈ℝ^{n×n}, such that
Sym{Θ⊗(A^TPE)} + I_2⊗(A^TQA) < 0
E^T Q E ≥ 0
where Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ] and I_2 has the same dimensions as Θ.
Proof. Sufficiency. Assume that (E,A,α) is impulsive; then there exists an infinite eigenvector of order 2, i.e. vectors υ^1,υ^2∈ℝ^n with υ^1≠0 such that Eυ^2=Aυ^1, Eυ^1=0. By left multiplying (I_2⊗υ^1)^∗ and right multiplying (I_2⊗υ^1) on (<ref>), we have
(I_2⊗υ^1)^∗ [Sym{Θ⊗(A^TPE)} + I_2⊗(A^TQA)] (I_2⊗υ^1) = I_2⊗[(υ^1)^∗A^TQAυ^1] = I_2⊗[(υ^2)^∗E^TQEυ^2] < 0
This inequality cannot hold because E^TQE≥0. Thus, the system (E,A,α) is impulse-free, and two nonsingular matrices M and N may be chosen such that
MEN = [I_m, 0; 0, 0], MAN = [A_1, 0; 0, I_{n-m}]
Let Y = M^{-T}PM^{-1} = [Y_11, Y_12; Y_12^∗, Y_22]. Obviously, Y = Y^∗ > 0. Let Q̄ = M^{-T}QM^{-1} = [Q̄_11, Q̄_12; Q̄_12^T, Q̄_22]. From (<ref>) we have
N^TE^TQEN = (MEN)^T Q̄ (MEN) = [I_m,0;0,0][Q̄_11, Q̄_12; Q̄_12^T, Q̄_22][I_m,0;0,0] = [Q̄_11, 0; 0, 0] ≥ 0
Thus we get Q̄_11 ≥ 0. Left multiplying by (I_2⊗N)^T and right multiplying by (I_2⊗N) on (<ref>), we get
(I_2⊗N)^T (Sym{Θ⊗(A^TPE)} + I_2⊗(A^TQA)) (I_2⊗N) = Sym{Θ⊗((MAN)^T Y (MEN))} + I_2⊗((MAN)^T Q̄ (MAN)) = Sym{Θ⊗([A_1^T,0;0,I_{n-m}][Y_11,Y_12;Y_12^∗,Y_22][I_m,0;0,0])} + I_2⊗([A_1^T,0;0,I_{n-m}][Q̄_11,Q̄_12;Q̄_12^T,Q̄_22][A_1,0;0,I_{n-m}]) = Sym{Θ⊗[A_1^TY_11, 0; Y_12^∗, 0]} + I_2⊗[A_1^TQ̄_11A_1, A_1^TQ̄_12; ∙, Q̄_22] < 0
According to <cit.>, inequality (<ref>) is equivalent (after a permutation of rows and columns) to
Sym{[Θ⊗(A_1^TY_11), 0; Θ⊗Y_12^∗, 0]} + [I_2⊗(A_1^TQ̄_11A_1), I_2⊗(A_1^TQ̄_12); ∙, I_2⊗Q̄_22] < 0
Thus we get
Sym{Θ⊗(A_1^TY_11)} + I_2⊗(A_1^TQ̄_11A_1) < 0
Because Q̄_11 ≥ 0, we have Sym{Θ⊗(A_1^TY_11)} < 0. According to lemma <ref>, A_1 is stable; thus (E,A,α) is stable. Finally, the admissibility of (E,A,α) is achieved.
Necessity. Since (E,A,α) is admissible, there exist two nonsingular matrices M, N and a matrix Y_11 = Y_11^T > 0 such that
MEN = [I_m, 0; 0, 0], MAN = [A_1, 0; 0, I_{n-m}], Sym{Θ⊗(A_1^TY_11)} < 0
For a sufficiently small ε>0, we have
Sym{Θ⊗(A_1^TY_11)} + I_2⊗(A_1^T ε Y_11 A_1) < 0
Take
P = M^TYM = M^T [Y_11, 0; 0, I_{n-m}] M
Q = M^TQ̄M = M^T [Q̄_11, 0; 0, Q̄_22] M
where Q̄_11 = εY_11 and Q̄_22 is any negative definite matrix. From (<ref>) we get
[Sym{Θ⊗(A_1^TY_11)}, 0; 0, 0] + [I_2⊗(A_1^T ε Y_11 A_1), 0; 0, I_2⊗Q̄_22] < 0
which is equivalent to
Sym{Θ⊗[A_1^TY_11, 0; 0, 0]} + I_2⊗[A_1^T ε Y_11 A_1, 0; 0, Q̄_22] < 0
⇔ Sym{Θ⊗((MAN)^T Y (MEN))} + I_2⊗((MAN)^T Q̄ (MAN)) < 0
⇔ (I_2⊗N^T)[Sym{Θ⊗(A^TPE)} + I_2⊗(A^TQA)](I_2⊗N) < 0
⇔ Sym{Θ⊗(A^TPE)} + I_2⊗(A^TQA) < 0
and
[Q̄_11, 0; 0, 0] ≥ 0 ⇔ (MEN)^T Q̄ (MEN) ≥ 0 ⇔ N^TE^TQEN ≥ 0 ⇔ E^TQE ≥ 0
This completes the proof.
Corollary: The following statements are equivalent:
(1) the system (E,A,α) (1<α<2) is admissible;
(2) with E_0∈ℝ^{n×(n-r)} of full column rank and E^TE_0=0, there exist a symmetric positive definite matrix P∈ℝ^{n×n} and a matrix Q∈ℝ^{(n-r)×n} such that
Sym{Θ⊗(E^TPA) + I⊗(Q^TE_0^TA)} < 0
(3) there exists a matrix P∈ℝ^{n×n} such that
Sym{Θ⊗(PA)} < 0
PE = E^TP^T ≥ 0
(4) there exist a symmetric positive definite matrix P∈ℝ^{n×n} and a symmetric matrix Q∈ℝ^{n×n} such that
Sym{Θ⊗(E^TPA)} + I⊗(A^TQA) < 0
E^TQE ≥ 0
where Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ] and I has the same dimensions as Θ.
Proof: When 1<α<2, the stable region of the SFOS is an LMI region, in which case Φ=0 and Ψ = Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ].
For such Φ and Ψ, and according to Theorem <ref>, Lemma <ref> and Lemma <ref>, the conclusion is achieved.
§ ROBUST ADMISSIBILITY ANALYSIS
To the best of our knowledge, there exists no research on the robust admissibility of uncertain SFOS. In this section, sufficient conditions are given to check the robust admissibility of an uncertain SFOS. Consider the following uncertain SFOS
E D^α x(t) = A x(t) = (A_0 + D_A F_A E_A) x(t)
where 1<α<2 and A_0∈ℝ^{n×n}, D_A∈ℝ^{n×p}, E_A∈ℝ^{q×n} are given constant matrices. The uncertain matrix F_A∈ℝ^{p×q} satisfies
F_A F_A^T < I_p
Theorem: System (<ref>) is robustly admissible if there exist matrices X=X^T>0, X∈ℝ^{n×n}, and S∈ℝ^{(n-m)×n} (m denotes rank(E)) such that
[Z_11, Z_12, Z_13; ∙, Z_22, Z_23; ∙, ∙, Z_33] < 0
where E^TE_0=0 with E_0 of full column rank as before, I_2 is the 2×2 identity matrix, Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ], and
Z_11 = Sym{Θ⊗(A_0^TXE) + I_2⊗(A_0^TE_0S)} + 2 I_2⊗(E_A^TE_A)
Z_12 = I_2⊗(E^TXD_A)
Z_13 = I_2⊗(S^TE_0^TD_A)
Z_22 = -I_2⊗I
Z_23 = 0
Z_33 = -I_2⊗I
Proof: Invoking the Schur complement, inequality (<ref>) is equivalent to
Sym{Θ⊗(A_0^TXE) + I_2⊗(A_0^TE_0S)} + 2 I_2⊗(E_A^TE_A) + I_2⊗(E^TXD_AD_A^TXE) + I_2⊗(S^TE_0^TD_AD_A^TE_0S) < 0
From (<ref>) and (<ref>) (noting that ΘΘ^T = I_2 and F_AF_A^T < I), we get
Sym{Θ⊗(A_0^TXE) + I_2⊗(A_0^TE_0S)} + (ΘΘ^T)⊗(E_A^TE_A) + I_2⊗(E_A^TE_A) + I_2⊗(E^TXD_AF_AF_A^TD_A^TXE) + I_2⊗(S^TE_0^TD_AF_AF_A^TD_A^TE_0S) < 0
According to lemma <ref> and inequality (<ref>), we get
Sym{Θ⊗(A_0^TXE) + I_2⊗(A_0^TE_0S)} + Sym{Θ⊗((D_AF_AE_A)^TXE)} + Sym{I_2⊗((D_AF_AE_A)^TE_0S)} < 0
⇔ Sym{Θ⊗((A_0+D_AF_AE_A)^TXE) + I_2⊗((A_0+D_AF_AE_A)^TE_0S)} < 0
⇔ Sym{Θ⊗(A^TXE) + I_2⊗(A^TE_0S)} < 0
Thus, according to theorem <ref>, system (<ref>) is robustly admissible. The theorem is proved.
Theorem: System (<ref>) is robustly admissible if there exist matrices X=X^T>0, X∈ℝ^{n×n}, Y=Y^T>0, Y∈ℝ^{n×n}, and S=S^T, S∈ℝ^{n×n}, such that
[Z_11, Z_12, Z_13; ∙, Z_22, Z_23; ∙, ∙, Z_33] < 0
E^T S E ≥ 0
where Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ] and
Z_11 = Sym{Θ⊗(A_0^TXE)} + I_2⊗(A_0^TSA_0)
Z_12 = Θ^T⊗(E^TXD_A) + I_2⊗(A_0^TSD_A)
Z_13 = I_2⊗(Y E_A^T)
Z_22 = I_2⊗(S-Y)
Z_23 = 0
Z_33 = -I_2⊗Y
Proof: According to theorem <ref>, the uncertain system (<ref>) is robustly admissible if for any F_A there holds
Sym{Θ⊗[(A_0+D_AF_AE_A)^TXE]} + I_2⊗[(A_0+D_AF_AE_A)^TS(A_0+D_AF_AE_A)] < 0
i.e., for any nonzero vector υ∈ℝ^n,
(I_2⊗υ^T)(Sym{Θ⊗[(A_0+D_AF_AE_A)^TXE]} + I_2⊗[(A_0+D_AF_AE_A)^TS(A_0+D_AF_AE_A)])(I_2⊗υ) < 0
This inequality can be rewritten as
[I_2⊗υ^T, I_2⊗(F_AE_Aυ)^T] × [Sym{Θ⊗(A_0^TXE)} + I_2⊗(A_0^TSA_0), Θ^T⊗(E^TXD_A) + I_2⊗(A_0^TSD_A); ∙, I_2⊗S] × [I_2⊗υ; I_2⊗(F_AE_Aυ)] < 0
From inequality (<ref>), we get
[I_2⊗υ^T, I_2⊗(F_AE_Aυ)^T] [I_2⊗(E_A^TE_A), 0; 0, -I_2⊗I] [I_2⊗υ; I_2⊗(F_AE_Aυ)] = (I_2⊗υ^T)[I_2⊗(E_A^TE_A - E_A^TF_A^TF_AE_A)](I_2⊗υ) = (I_2⊗υ^T)[I_2⊗(E_A^T(I-F_A^TF_A)E_A)](I_2⊗υ) ≥ 0
By applying the S-procedure, inequalities (<ref>) and (<ref>) imply that there exists some scalar τ>0 such that
[Sym{Θ⊗(A_0^TXE)} + I_2⊗(A_0^TSA_0), Θ^T⊗(E^TXD_A) + I_2⊗(A_0^TSD_A); ∙, I_2⊗S] + τ [I_2⊗(E_A^TE_A), 0; 0, -I_2⊗I] < 0
⇔ [Sym{Θ⊗(A_0^TXE)} + I_2⊗(A_0^TSA_0), Θ^T⊗(E^TXD_A) + I_2⊗(A_0^TSD_A); ∙, I_2⊗S - I_2⊗(τI)] + τ [I_2⊗E_A^T; 0][I_2⊗E_A, 0] < 0
Letting Y = τI and invoking the Schur complement, inequality (<ref>) is obtained.
Theorem: System (<ref>) is robustly admissible if there exists a matrix X∈ℝ^{n×n} such that
[Sym{Θ⊗(A_0^TX)} + I_2⊗(E_A^TE_A), I_2⊗(XD_A); ∙, -I_2⊗I] < 0
E^TX = X^TE ≥ 0
where Θ = [ sin(απ/2), -cos(απ/2); cos(απ/2), sin(απ/2) ].
Proof: Invoking the Schur complement, inequality (<ref>) is equivalent to
Sym{Θ⊗(A_0^TX)} + I_2⊗(E_A^TE_A) + I_2⊗(XD_AD_A^TX) < 0
From inequalities (<ref>) and (<ref>), we get
Sym{Θ⊗(A_0^TX)} + (ΘΘ^T)⊗(E_A^TE_A) + I_2⊗(XD_AF_AF_A^TD_A^TX) < 0
According to lemma <ref> and inequality (<ref>), we have
Sym{Θ⊗(A_0^TX)} + Sym{Θ⊗((D_AF_AE_A)^TX)} < 0
⇔ Sym{Θ⊗((A_0+D_AF_A
E_A)^TX)} < 0 ⇔ Sym{Θ⊗(A^TX)} < 0.
Thus, according to theorem <ref>, the system (<ref>) is robustly admissible. The theorem is proved.
§ NUMERICAL EXAMPLES
§.§ Numerical solution in the time domain
We now show how to obtain the numerical solution of an SFOS. The system (E,A,α) can be decomposed into
D^α x_1(t) = A_1 x_1(t)
N D^α x_2(t) = x_2(t)
Thus, we have to obtain A_1 and N. N'Doye <cit.> has proved that the system (E,A,α) is regular if and only if det(cE-A) is not identically zero. Thus, (cE-A)^{-1} exists for a suitable scalar c. Define
Ê = (cE-A)^{-1}E, Â = (cE-A)^{-1}A
Thus
Â = (cE-A)^{-1}(cE + A - cE) = c(cE-A)^{-1}E - I = cÊ - I
According to the standard Jordan matrix decomposition, there exists a nonsingular matrix T such that
TÊT^{-1} = diag(Ê_1, Ê_2)
where T∈ℝ^{n×n}; Ê_1∈ℝ^{n_1×n_1} is nonsingular; Ê_2∈ℝ^{n_2×n_2} is a nilpotent matrix. Thus, cÊ_2 - I is nonsingular. Let
Q = diag(Ê_1^{-1}, (cÊ_2-I)^{-1}) T (cE-A)^{-1}, P = T^{-1}
Then we get
QEP = diag(Ê_1^{-1}, (cÊ_2-I)^{-1}) T (cE-A)^{-1} E T^{-1} = diag(I_{n_1}, (cÊ_2-I)^{-1}Ê_2)
and
QAP = diag(Ê_1^{-1}, (cÊ_2-I)^{-1}) T (cE-A)^{-1} A T^{-1} = diag(cI_{n_1} - Ê_1^{-1}, I_{n_2})
Therefore, we finally get A_1 and N:
A_1 = Ê_1^{-1}(cÊ_1 - I), N = (cÊ_2-I)^{-1}Ê_2
Because A_1 is a constant matrix, by using the Riemann-Liouville fractional integral we get (<cit.>)
x_1(t) - ∑_{k=0}^{m-1} x_1^{(k)}(0) t^k/k! = (A_1/Γ(α)) ∫_0^t x_1(τ)(t-τ)^{α-1} dτ
where m-1<α≤m. According to Diethelm <cit.>,
∫_0^{t_{n+1}} (t_{n+1}-τ)^{α-1} x_1(τ) dτ ≈ (z^α/(α(α+1))) ∑_{j=0}^{n+1} a_{j,n+1} x_1(t_j)
where z = t_{j+1}-t_j and
a_{j,n+1} = n^{α+1} - (n-α)(n+1)^α, if j=0;
a_{j,n+1} = (n-j+2)^{α+1} + (n-j)^{α+1} - 2(n-j+1)^{α+1}, if 1≤j≤n;
a_{j,n+1} = 1, if j=n+1.
In order to calculate x_1(t_{n+1}), Diethelm <cit.> predicts the integral as
∫_0^{t_{n+1}} (t_{n+1}-τ)^{α-1} x_1(τ) dτ ≈ ∑_{j=0}^{n} b_{j,n+1} x_1(t_j)
where b_{j,n+1} = (z^α/α)((n+1-j)^α - (n-j)^α). Thus, x_1(t_{n+1}) can be calculated by
x_1(t_{n+1}) = ∑_{k=0}^{m-1} x_1^{(k)}(0) t_{n+1}^k/k! + (z^α/Γ(α+2)) A_1 x_1^p(t_{n+1}) + (z^α/Γ(α+2)) A_1 ∑_{j=0}^{n} a_{j,n+1} x_1(t_j)
where
x_1^p(t_{n+1}) = ∑_{k=0}^{m-1} x_1^{(k)}(0) t_{n+1}^k/k! + (1/Γ(α)) A_1 ∑_{j=0}^{n} b_{j,n+1} x_1(t_j)
(A minimal numerical sketch of this predictor-corrector scheme is given at the end of the paper.) According to solution (<ref>), x_2(t_n) can be calculated directly by
x_2(t_n) = -∑_{k=1}^{h-1} N^k (δ^{(kα-1)}(t_n) x_{20} + δ^{(kα-2)}(t_n) x_{20}^{(1)})
Finally, we get
x(t_n) = P [ x_1(t_n); x_2(t_n) ]
§.§ Numerical example
In this section, we verify inequality (<ref>) of theorem <ref> as an example. Consider the system (E,A,α) with parameters α=1.8,
A = [ -1, 0, -1; 0, -2, 0; 0, -1, -1 ], E = [ 1, 0, 0; 1, 1, -1; 0, 0, 0 ]
and E_0 = [0; 0; 1] can be chosen to satisfy E^TE_0=0. Then, by solving LMI (<ref>), we get
P = [ 1.7896, -0.2755, -0.5029; -0.2755, 0.8271, -0.6667; -0.5029, -0.6667, 1.5113 ], Q = [ -0.043, 0.3709, 0.3849 ]
This means the system is admissible. The eigenvalues of the system are shown in figure <ref>; all the eigenvalues of (E,A) lie in the stable area. The state response of the system is shown in figure <ref>, which implies that the system is stable.
§ CONCLUSION
In this paper, singular fractional order systems with fractional order 1<α<2 have been studied. Regularity and impulse-freeness of the SFOS were characterized in the time domain. Then, sufficient and necessary conditions of stability and admissibility were analysed, respectively. After that, sufficient conditions of robust admissibility were given. Finally, a numerical example was presented to verify the proposed theorems.
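As announced in Section VI, the following is a minimal Python sketch of the fractional predictor-corrector scheme (all function and variable names are ours). It integrates the finite subsystem D^α x_1 = A_1 x_1 on a uniform grid with step z, using the weights a_{j,n+1} and b_{j,n+1} given above; the Taylor terms keep both initial values since m=2 for 1<α<2.

```python
import numpy as np
from math import gamma

def pece_fode(A1, x0, dx0, alpha, z, steps):
    """Predictor-corrector (PECE) solver for D^alpha x = A1 x, 1 < alpha < 2.

    x0, dx0 : initial state x(0) and derivative x'(0) (the m = 2 Taylor terms)
    z       : uniform step size; steps : number of steps
    """
    x = np.zeros((steps + 1, len(x0)))
    x[0] = x0
    ga2 = gamma(alpha + 2.0)
    for n in range(steps):
        t_next = (n + 1) * z
        taylor = x0 + t_next * dx0   # sum_{k=0}^{m-1} x^(k)(0) t^k / k!
        j = np.arange(n + 1)
        # predictor weights b_{j,n+1} = z^alpha/alpha * ((n+1-j)^alpha - (n-j)^alpha)
        b = z**alpha / alpha * ((n + 1 - j)**alpha - (n - j)**alpha)
        xp = taylor + A1 @ (b[:, None] * x[: n + 1]).sum(axis=0) / gamma(alpha)
        # corrector weights a_{j,n+1}
        a = np.empty(n + 2)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        jj = np.arange(1, n + 1)
        a[1:n + 1] = ((n - jj + 2)**(alpha + 1) + (n - jj)**(alpha + 1)
                      - 2 * (n - jj + 1)**(alpha + 1))
        a[n + 1] = 1.0   # folded into the predictor term below
        hist = (a[: n + 1, None] * x[: n + 1]).sum(axis=0)
        x[n + 1] = taylor + z**alpha / ga2 * (A1 @ xp) + z**alpha / ga2 * (A1 @ hist)
    return x

# Illustrative use on a scalar stable subsystem (hypothetical A1):
# traj = pece_fode(np.array([[-1.0]]), np.array([1.0]), np.array([0.0]), 1.8, 0.01, 2000)
```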
http://arxiv.org/abs/1704.08423v1
{ "authors": [ "Xiaogang Zhu", "Jie Xu", "Junguo Lu" ], "categories": [ "math.OC", "93C05" ], "primary_category": "math.OC", "published": "20170427035822", "title": "Basic Properties of Singular Fractional Order System with order (1,2)" }
APS/[email protected] Goethe-Universität Frankfurt, Frankfurt am Main, GermanyGoethe-Universität Frankfurt, Frankfurt am Main, GermanyGoethe-Universität Frankfurt, Frankfurt am Main, GermanyCENBG, Gradignan, FranceKarlsruhe Institute of Technology, Karlsruhe, GermanyGSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, GermanyGoethe-Universität Frankfurt, Frankfurt am Main, GermanyWe discuss the possibility to build a neutron target for nuclear reaction studies in inverse kinematics utilizing a storage ring and radioactive ion beams. The proposed neutron target is a specially designed spallation target surrounded by a large moderator of heavy water (D_2O). We present the resulting neutron spectra and their properties as a target. We discuss possible realizations at different experimental facilities.25.40.Lw, 26.20.+f, 28.41.-i, 29.20.db A spallation-based neutron target for direct studies of neutron-induced reactions in inverse kinematics Mario Weigand December 30, 2023 ======================================================================================================= § INTRODUCTIONNeutron capture cross sections of unstable isotopes are important for neutron induced nucleosynthesis as well as for technological applications. The traditional time-of-flight method <cit.> reaches its limits once the necessary detection of the reaction products is hampered by the size (mass) of the sample. Several factors may limit the sample mass: (a) The decay properties of radioactive isotopes interfere with the signals from the neutron capture or neutron-induced fission reactions <cit.>. (b) The limited range of charged reaction products requires a thin sample. In both cases, an increased neutron fluence at the sample position with ever improved neutron sources overcomes the lack of reaction rate <cit.>.Reference <cit.> proposed a combination of a radioactive beam facility, an ion storage ring and a high flux reactor to allow direct measurements of neutron-induced reactions over a wide energy range on isotopes with half lives down to minutes. The authors discussed specific reactions, detection techniques and counting rates. Here, we present the possibility to replace the rather demanding reactor with a specially designed spallation neutron source. FIG. <ref> shows a sketch of the proposed setup. The advantages of such a setup over the reactor approach are manifold: No critical assembly is required, and therefore, the safety and security regulations are much less stringent. No actinides at all are used or produced. In particular, no minor actinides are produced avoiding long-lived radioactive waste. Last but not least, there are considerably less γ-rays per neutron.Similar neutron densities as in a research reactor can be reached in a close-by ion beam pipe if the spallation target is surrounded by a moderator of heavy water (D_2O). We present the concept in Section <ref> and the corresponding simulations in Section <ref>. It is feasible to build such a neutron target at facilities like LANSCE at LANL (USA), n_TOF/ISOLDE at CERN (Switzerland), GSI/FAIR (Germany), HIRFL-CSR/HIAF (China), and others. We discuss the possible realizations in Section <ref>.§ CONCEPT AND GEOMETRYThe center of the simulated setup is a tungsten spallation target, see FIG. <ref>. The cylindrical target is mounted inside an evacuated proton beam pipe with a radius of 2.5 cm and aligned in the direction of the proton beam. The protons impinge on the tungsten and produce neutrons. 
The material of the beam pipe has to be chosen such that it has only minor effects on the neutrons. The neutrons are moderated outside the proton beam pipe by heavy water in a surrounding sphere. A second (ion) beam pipe is oriented perpendicular to the proton beam pipe. The two pipes do not intersect, since the ion beam pipe is shifted by x = 7.5 cm off the center of the setup. The neutrons penetrate into the ion beam pipe and serve as a target for the ions. Compared to other elements, tungsten provides a high density of about 19 g cm^-3 combined with a very high melting point of about 3,700 K. We investigated two different tungsten target sizes: the "small version" with a radius of 1.5 cm and a length of 10 cm, and the "large version" with a radius of 2.5 cm and a length of 50 cm. We chose heavy-water moderators of different sizes with radii of 0.5 m, 1.0 m, and 2.0 m. If not specified otherwise, the center of the spherical moderator is also the center of the spallation target. A technical realization would require a cooling of the spallation target. The cooling should be realized with heavy water to avoid changes in the neutron physics investigated here. Most neutrons eventually escape the moderator volume after moderation and have to be absorbed on the outside. Again, this will not alter the neutron budget.
§ SIMULATIONS We simulated the proposed setup with GEANT-3.21 <cit.> with the GCALOR package <cit.>. As a first step, neutrons of different energies were emitted isotropically from the center of the tungsten target, with the tungsten target in the center of the moderator (section <ref>). These simulations help to understand the underlying principles and allow first rough estimates of the neutron density in the ion beam pipe. In addition, they can be used to estimate the effect of different primary neutron energies. In the second step, we simulated the interaction of a high-energy proton beam with a tungsten target (section <ref>). All particles were followed until they either left the volume or were absorbed.
§.§ Neutrons started with different energies Neutrons with different energies from 10^-2 to 10^9 eV were randomly started in the tungsten spallation target for each moderator thickness (radii 0.5 m, 1.0 m, and 2.0 m). The direction of the neutrons was chosen isotropically. A separate simulation was performed for each energy decade. The energy distribution within a decade was assumed to be 1/E <cit.>. Each simulation run included 10^6 neutrons. The neutrons penetrate the ion beam pipe, where they act as a neutron target for the ions. We investigated the average time period a neutron spends in the ion beam pipe. The average time period depends on the velocity of the neutron, the passing angle, and the number of times the neutron actually crosses the ion beam pipe. The time period spent inside the ion tube is summed up for each neutron emitted from the spallation target and stored in a histogram. FIG. <ref> shows examples for neutrons of selected primary energy ranges. When the primary neutron energy increases, the number of events with at least one pass through the beam pipe increases, as does the number of crossings. In addition, very short time periods of a few 10^-9 s are only observed for higher primary neutron energies. At lower energies, even a single pass requires a few 10^-6 s. FIG. <ref> shows the average time period a neutron spends in the ion beam pipe as a function of its original energy.
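The single-pass time scales quoted above follow directly from kinematics. A minimal Python sketch (the pipe bore of 5 cm is our assumption, consistent with the geometry above) reproduces them:

```python
import numpy as np

M_N = 939.565e6       # neutron rest mass in eV
C = 2.998e8           # speed of light in m/s
PIPE_DIAMETER = 0.05  # m, assumed bore of the ion beam pipe

def single_pass_time(e_ev):
    """Time (s) for one straight crossing of the pipe at kinetic energy e_ev."""
    v = C * np.sqrt(1.0 - (M_N / (M_N + e_ev))**2)  # relativistic speed
    return PIPE_DIAMETER / v

for e in (0.025, 1.0, 1e6):   # thermal, 1-eV and 1-MeV neutrons
    print(f"{e:g} eV: {single_pass_time(e):.1e} s")
# thermal -> ~2e-5 s, 1 eV -> ~4e-6 s, 1 MeV -> ~4e-9 s,
# matching the "few 10^-6 s" and "few 10^-9 s" scales quoted above.
```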
The period is only weakly dependent on the original energy, since the neutrons are quickly moderated and trapped inside the moderator until they finally escape. If the energy exceeds the neutron separation energy of deuterium, about 2.2 MeV, additional neutrons are available. Their time periods inside the beam pipe are added to the time of the original neutron. The larger the moderator volume, the longer the time period inside the ion beam pipe. It is very important to note that the neutrons spend on average several microseconds in the ion beam pipe, even for a relatively small moderator of only 50 cm in radius. A moderator of 2-m radius leads to a period of about 20 μs. The longer the average time period inside the beam pipe, the higher the density of the neutron target for a given proton current.
§.§ Proton energies of 800 MeV and 20 GeV We simulated the setup with proton energies of 800 MeV and 20 GeV. The chosen energies correspond to the energies delivered by the LANSCE accelerator at Los Alamos National Laboratory (800 MeV) <cit.> and the proton synchrotron at CERN (20 GeV) <cit.>. We investigated two different tungsten target sizes: a cylinder with a radius of 1.5 cm and a length of 10 cm, and one with a radius of 2.5 cm and a length of 50 cm. The four different simulation settings are listed in TABLE <ref>. The table also gives the neutron yield per proton. FIG. <ref> shows the energy spectra of the produced neutrons for all simulated combinations of target size and proton energy. In general, the number of spallation neutrons increases with the proton energy. As the 20-GeV protons will not be slowed down below the spallation threshold within 10 cm of tungsten, the neutron yield is significantly higher for the larger tungsten target. Each proton produces a certain number of neutrons (see TABLE <ref>). These neutrons travel through the setup, are moderated by the heavy water, and eventually pass the ion beam pipe. The total time period t_neutron,tot sums up the time periods that all neutrons produced by a proton spend inside the beam pipe. Averaging over all protons, we obtain the average total time period that all neutrons produced by a proton spend inside the ion beam tube, t̅_neutron,tot. To estimate the average number of neutrons inside the beam tube, n̅_neutron, we multiply t̅_neutron,tot by the number of protons hitting the spallation target per unit time (proton current/elementary charge):
n̅_neutron = (I_proton/e) t̅_neutron,tot
FIG. <ref> shows t_neutron,tot for primary protons with different energies and tungsten target sizes, but without a moderator. The averages of the distributions correspond to t̅_neutron,tot. Protons with an energy of 20 GeV produce more neutrons (see TABLE <ref>). Hence, the total time period the neutrons spend inside the beam pipe is higher compared to the 800-MeV protons. The 20-GeV protons produce about a factor of three more neutrons in the large tungsten target than in the small one. The 800-MeV protons produce only slightly more neutrons in the large target. But then the larger tungsten target acts as a neutron trap. The larger neutron production outweighs the neutron captures only in the case of 20-GeV protons, but not in the case of 800-MeV protons. Therefore, the total time period the produced neutrons spend in the ion beam pipe is shifted to smaller values for 800-MeV protons impinging on the large target. If a moderator is included, the total time period neutrons spend inside the ion beam pipe changes dramatically. FIG.
<ref> shows the results for a moderator with a radius of 1 m. The average time period all neutrons produced per incident proton spend inside the ion beam pipe (t̅_neutron,tot) is listed in TABLE <ref> for all combinations of proton energy, tungsten target size and moderator radius. FIG. <ref> shows the position distribution of neutrons inside the ion beam pipe. For this plot, the beam axis was sub-divided into discs of 1 cm width in the direction of the beam. Whenever a neutron entered one of the discs, it was recorded in the histogram. The fact that high-energy neutrons do not spend much time inside the beam pipe is taken into account in FIG. <ref>, where each entry was weighted with the corresponding time the neutron spent in the disc. From the comparison of FIGs. <ref> and <ref> one finds that the suppressed high-energy neutrons, which usually come directly from the spallation target, are found mostly in the center of the ion beam pipe, close to the proton beam pipe. FIG. <ref> shows the energy distribution of the neutrons in the ion beam pipe for the small tungsten target, 800-MeV protons and different moderator sizes. Without a moderator, the energy distribution resembles the spallation neutron spectrum, compare FIG. <ref>. The moderation process produces neutron spectra with a thermal energy distribution. The larger the moderator, the more neutrons per proton enter the ion beam pipe. In FIG. <ref>, the energy spectra are weighted with the corresponding time period the neutron spent inside the ion beam pipe. High-energy neutrons pass the pipe quickly. The moderated neutrons spend up to 2 ms in the pipe. As can be seen from FIGs. <ref> and <ref>, the neutron spectrum in the ion beam pipe is clearly dominated by low-energy neutrons. Therefore, the center-of-mass energy of the ion-neutron collision in realistic experiments will be defined by the energy of the ions. The ion energy can easily be tuned in the storage ring by applying, for instance, electron cooling at a defined beam energy. The simulations were repeated with a different tungsten target position relative to the moderator. We moved the target upstream, since most of the neutrons are emitted in the direction of the impinging high-energy proton beam (FIG. <ref>), potentially gaining more forward-emitted neutrons. However, the average time period neutrons spend inside the ion beam pipe was reduced. The scattering processes in the moderator alter the original direction of the neutrons quickly. Hence, the emission angle of the neutrons is not important.
§ POSSIBLE REALIZATIONS We discuss possible realizations of the proposed setup at different experimental facilities. We consider the available proton currents and energies to estimate the number of neutrons inside an ion beam tube running through a heavy-water moderator. We estimate the average time period neutrons spend inside the ion beam tube per incoming proton from the proton energy, TABLE <ref> and Equation <ref>. We calculate the areal neutron density η_neutron by dividing the average number of neutrons inside the beam tube, n̅_neutron, by the cross section of the ion beam pipe, A_ion pipe:
η_neutron = n̅_neutron/A_ion pipe = (I_proton/e) t̅_neutron,tot/A_ion pipe
The set of simulations described here was driven by the specific opportunities of the LANSCE accelerator at LANL and the PS at CERN. These facilities already have well-established spallation neutron sources with the corresponding driving accelerators. Therefore, they are currently the most promising options.
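Equation (<ref>) makes such estimates trivial to script. The following minimal Python sketch (our own helper, not part of the simulation framework) reproduces the order of magnitude quoted below for LANSCE, assuming a t̅_neutron,tot of about 2.6×10^-4 s per 800-MeV proton (an illustrative value consistent with the density quoted below for the largest moderator) and an ion-pipe cross section of 20 cm^2:

```python
E_CHARGE = 1.602e-19  # elementary charge in C

def areal_neutron_density(i_proton_A, t_tot_s, pipe_area_cm2):
    """eta_neutron = (I/e) * t_tot / A, Equation (2), in neutrons/cm^2."""
    protons_per_s = i_proton_A / E_CHARGE
    return protons_per_s * t_tot_s / pipe_area_cm2

# LANSCE/Lujan-like numbers: 100 uA of 800-MeV protons, large moderator.
eta = areal_neutron_density(100e-6, 2.6e-4, 20.0)
print(f"{eta:.1e} n/cm^2")  # ~8e9 n/cm^2, as quoted below
```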
However, the idea of using a heavily moderated spallation source in conjunction with a storage ring is certainly not restricted to these facilities, but may be set up at other institutions like GSI/FAIR, HIRFL, HIAF, NSCL/MSU, or MYRRHA. In particular, new developments in cyclotron technology like fixed field alternating gradient (FFAG) machines <cit.> offer the possibility to realize this setup at a dedicated facility.
§.§ Los Alamos National Laboratory (LANL) The Lujan target at LANSCE/LANL operates with an average proton current of about 100 μA. We consider an ion beam tube with a cross section of 20 cm^2. We obtain a neutron density of about 8 × 10^9 n/cm^2 using the largest moderator with a diameter of 200 cm. The neutron density is only a factor of two less than described in reference <cit.>, where a neutron flux of 10^14 n/cm^2/s in a reactor and an interaction length of 0.5 m were assumed. The results for the different moderator sizes are given in TABLE <ref>. The LANSCE accelerator provides H^- and H^+ ions <cit.>. A magnet bends H^- ions to the southern experimental areas, and H^+ ions to the northern area. Additional facilities, like the proposed combination of an ion storage ring and a neutron target, could be installed in the northern area. Here, H^+ beams with currents up to 1 mA could be delivered. In particular, the nearby isotope production facility (IPF), located at the beginning of the LANSCE accelerator, makes this idea very attractive. The required amount of radioactive material for an experiment in inverse kinematics is orders of magnitude smaller than the amount of material needed for a traditional time-of-flight experiment like DANCE <cit.>. Typically, the storage ring needs to be filled every few minutes with about 10^7 ions <cit.>, consuming about 1.5×10^10 atoms during a day.
§.§ European Organization for Nuclear Research (CERN) The n_TOF/ISOLDE experiments receive about protons/s with an energy of 20 GeV. On average, these protons could produce between 1.5×10^9 and 10^10 neutrons in an ion beam pipe as described here. The resulting neutron densities of up to 6×10^8 n/cm^2 in the pipe are listed in TABLE <ref>. The installation of the well-suited ion storage ring TSR at HIE-ISOLDE <cit.> would provide the main ingredients of the proposed setup.
§.§ GSI Helmholtz Center for Heavy Ion Research (GSI) and Facility for Antiproton and Ion Research (FAIR) The universal linear accelerator UNILAC and the heavy-ion synchrotron SIS-18 are the driver accelerators at GSI in Darmstadt (Germany). Proton intensities of up to 2.1×10^11 protons per pulse have routinely been achieved <cit.>. The magnetic rigidity of 18 T·m allows protons to be accelerated to energies of 4.5 GeV. The 100-T·m synchrotron of the future FAIR facility <cit.> will provide about 5×10^12 protons/s with an energy of 28.8 GeV. A variety of specialized facilities like fragment separators <cit.>, storage rings <cit.>, experimental caves, and beam lines <cit.> provide numerous opportunities to realize a low-energy storage ring combined with the spallation target described here.
§.§ Heavy Ion Research Facility in Lanzhou (HIRFL) and Cooler Storage Ring (CSR) Complex (IMP) and High Intensity Heavy-Ion Accelerator Facility (HIAF) The Heavy Ion Research Facility in Lanzhou (HIRFL), China, which is similar to GSI, has been operational since 2007 <cit.>. The maximum magnetic rigidity of the main synchrotron ring is 10.6 T·m (maximum proton energy of 2.3 GeV).
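The proton energies quoted for the various synchrotrons follow from their magnetic rigidity via the standard relation p[GeV/c] ≈ 0.2998·q·Bρ[T·m]. A minimal Python sketch of this conversion (the function name is ours) reproduces the numbers above:

```python
from math import sqrt

M_P = 0.938272  # proton rest mass in GeV

def proton_kinetic_energy(b_rho_tm):
    """Kinetic energy (GeV) of a proton with magnetic rigidity B*rho in T*m."""
    p = 0.2998 * b_rho_tm           # momentum in GeV/c for charge q = 1
    return sqrt(p**2 + M_P**2) - M_P

for b_rho in (10.6, 18.0, 30.0):    # HIRFL, SIS-18, HIAF design values
    print(b_rho, round(proton_kinetic_energy(b_rho), 1), "GeV")
# -> roughly 2.4, 4.5 and 8.1 GeV, in line with the quoted 2.3, 4.5 and 8 GeV
```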
The existing cooler storage ring CSRe might be employed to store radioactive ions. The High Intensity Heavy-Ion Accelerator Facility (HIAF) has been approved and will be constructed in Huizhou, China. The considered rigidity of the main synchrotron will be around 30 T·m, thus providing protons of about 8 GeV. The beam intensities are comparable to those expected at FAIR. Several high-energy as well as low-energy storage rings are being considered for HIAF. Here, the proposed combination of a storage ring with a spallation target could already be considered at the planning stage of the facility. Furthermore, a high-power 1.0 to 1.5 GeV dedicated proton linear accelerator is under construction at the same location as HIAF within the running Accelerator-Driven Systems (ADS) project in China. § CONCLUSIONS The combination of an intense neutron source and an ion storage ring would be unique for a direct measurement of neutron-induced reactions, as already discussed in <cit.>. Direct kinematics, where the neutrons impinge on the target of interest, requires the measurement of the light reaction products, and the γ-rays from the investigated reaction have to be discriminated from the sample decay γ-rays. The proposed technique is based on the detection of the heavy, projectile-like residues of the reactions, which is a significant advantage for (a) the measurement of (n,γ) cross sections of γ-emitting samples or of fissile nuclei, (b) the measurement of (n,p) and (n,α) cross sections at low energies, as there is no need to detect the emitted protons and α-particles, and (c) cross section measurements on relatively long-lived or even stable nuclei. § SUMMARY A typical spallation neutron source can be modified to build a neutron target by combination with a large moderator of heavy water. The resulting neutron target, intercepted by an ion beam, can be used to investigate neutron-induced reactions. This technique is advantageous if the sample size in a traditional time-of-flight setup is limited, either because of the decay properties of the investigated isotope or because of the range of the reaction products. In particular, the combination with a storage ring makes this technique feasible. Existing facilities like LANSCE at LANL, n_TOF/ISOLDE at CERN, or GSI/FAIR could be complemented with the setup described here. This research has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 615126. We thank Hushan Xu for his valuable information about the planned spallation sources in China (ADS and HIAF).
http://arxiv.org/abs/1704.08689v1
{ "authors": [ "René Reifarth", "Kathrin Göbel", "Tanja Heftrich", "Mario Weigand", "Beatriz Jurado", "Franz Käppeler", "Yuri A. Litvinov" ], "categories": [ "physics.ins-det", "nucl-ex", "physics.acc-ph" ], "primary_category": "physics.ins-det", "published": "20170427110323", "title": "A spallation-based neutron target for direct studies of neutron-induced reactions in inverse kinematics" }
Stefano Beretta ([email protected]), DISCo, Università degli Studi di Milano-Bicocca, 20126 Milano, Italy. Mauro Castelli ([email protected]), NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal. Ivo Gonçalves ([email protected]), INESC Coimbra, DEEC, University of Coimbra, Portugal. Roberto Henriques ([email protected]), NOVA IMS, Universidade Nova de Lisboa, Portugal. Daniele Ramazzotti (corresponding author, [email protected]), Department of Pathology, Stanford University, California, USA. One of the most challenging tasks when adopting Bayesian Networks (BNs) is learning their structure from data. This task is complicated by the huge search space of possible solutions and by the fact that the problem is NP-hard. Hence, full enumeration of all the possible solutions is not always feasible and approximations are often required. However, to the best of our knowledge, a quantitative analysis of the performance and characteristics of the different heuristics to solve this problem has never been done before. For this reason, in this work, we provide a detailed comparison of many different state-of-the-art methods for structural learning on simulated data, considering both BNs with discrete and continuous variables and different rates of noise in the data. In particular, we investigate the performance of different widespread scores and algorithmic approaches proposed for the inference, and the statistical pitfalls within them. Keywords: Bayesian Networks, Structure Learning, Heuristic Search, Evolutionary Computation, Genetic Algorithms. § INTRODUCTION Bayesian Networks (BNs) have been applied to several different fields, ranging from water resources management <cit.> to the discovery of gene regulatory networks <cit.>. The task of learning a BN can be divided into two subtasks: (1) structural learning, i.e., identification of the topology of the BN, and (2) parametric learning, i.e., estimation of the numerical parameters (conditional probabilities) for a given network topology. In particular, the most challenging task of the two is learning the structure of a BN. Different methods have been proposed to face this problem, and they can be classified into two categories <cit.>: (1) methods based on detecting conditional independences, also known as constraint-based methods, and (2) score+search methods, also known as score-based approaches. It must be noticed that hybrid methods have also been proposed in <cit.> but, for the sake of clarity, here we limit our discussion to the two mainstream approaches to tackle the task. As discussed in <cit.>, the input of the former algorithms is a set of conditional independence relations between subsets of variables, which are used to build a BN that represents a large percentage (and, whenever possible, all) of these relations <cit.>. However, the number of conditional independence tests that such methods should perform is exponential and, thus, approximation techniques are required.
Although constraint-based learning is an interesting approach, as it is close to the semantics of BNs, most of the developed structure learning algorithms fall into the score-based category, given the possibility of formulating such a task in terms of an optimization problem. As the name implies, these methods have two major components: (1) a scoring metric that measures the quality of every candidate BN with respect to a dataset, and (2) a search procedure to intelligently move through the space of possible networks, as this space is enormous. More in detail, as shown in <cit.>, searching this space for the optimal structure is an NP-hard problem, even when the maximum number of parents for each node is constrained. Hence, regardless of the strategy one wishes to pursue to learn the structure, greedy local search techniques and heuristic search approaches need to be adopted to tackle the problem of inferring the structure of BNs. However, to the best of our knowledge, a quantitative analysis of the performance of these different techniques has never been done. For this reason, here we provide a detailed assessment of the performance of different state-of-the-art methods for structural learning on simulated data, considering both BNs with discrete and continuous variables, and with different rates of noise in the data. More precisely, we investigate the characteristics of different scores proposed for the inference, and the statistical pitfalls within both the constraint-based and score-based techniques. Furthermore, we study the impact of various heuristic search approaches, namely Hill Climbing (HC), Tabu Search (TS), and Genetic Algorithms (GAs). We notice that, although we aim here at covering some of the main ideas in the area of structure learning of BNs, several interesting topics are beyond the scope of this work <cit.>. In particular, we refer to a more general formulation of the problem <cit.>, and we do not consider, for example, contexts where it is possible to exploit prior knowledge in order to make the task computationally affordable <cit.>. At the same time, in this work we do not investigate in detail the performance related to causal interpretations of the BNs <cit.>. This work is structured as follows. In the next two Sections, we provide background on both Bayesian Networks and Heuristic Search Techniques (see Section <ref> and Section <ref>). In Section <ref> we describe the results of our study, and in Section <ref> we conclude the paper.
§ BAYESIAN NETWORKS BNs are graphical models representing the joint distribution over a set of n random variables through a directed acyclic graph (DAG) G=(V,E), where the n nodes in V represent the variables, and the arcs E encode any statistical dependence among them. Similarly, the lack of arcs among variables subsumes statistical independence. In this DAG, the set of variables with an arc toward a given node X ∈ V is its set π(X) of “parents”. Formally, a Bayesian Network <cit.> is defined as a pair ⟨ G,θ⟩ over the variables V, with arcs E⊆ V× V and real-valued parameters θ. When the structure of a BN is known, it is possible to compute the joint distribution of all the variables as the product of the conditional distributions of each variable given its parents, p(X_1,…,X_n) = ∏_i=1^n p(X_i|π(X_i)), with p(X_i|π(X_i)) = θ_X_i|π(X_i), where θ_X_i|π(X_i) is a probability density function. However, even if we consider the simplest case of binary variables, the number of parameters in each conditional probability table is still exponential in size. For example, in the case of binary variables, the total number of parameters needed to compute the full joint distribution is ∑_X ∈ V 2^|π(X)|, where |π(X)| is the cardinality of the parent set of each node. Notice that, if a node does not have parents, the total number of parameters to be computed is 1, which corresponds to its marginal probability. Moreover, the usage of the symmetrical notion of conditional dependence poses further limitations on the task of learning the structure of a BN: the two arcs A → B and B → A in a network can in fact equivalently denote dependence between variables A and B; this leads to the fact that two DAGs having a different structure can sometimes model an identical set of independence and conditional independence relations (I-equivalence). This yields the concept of a Markov equivalence class, represented as a partially directed acyclic graph where the arcs that can take either orientation are left undirected <cit.>. In this case, all the structures within the Markov equivalence class are equivalently “good” in representing the data, unless a causal interpretation of the BN is given <cit.>. In the literature, there are two families of methods to learn the structure of a BN from data. The idea behind the first group of methods is to learn the conditional independence relations of the BN from which, in turn, the network is learned. These methods are often referred to as constraint-based approaches. The second group of methods, the so-called score-based approaches, formulates the task of structure learning as an optimization problem, with scores aimed at maximizing the likelihood of the data given the model. However, both approaches are known to lead to NP-hard formulations and, because of this, heuristic methods need to be used to find near-optimal solutions with high probability, in a reasonably small number of iterations. §.§ Constraint-based approaches We now briefly describe the main idea behind this class of approaches. For a detailed discussion of this topic we refer to <cit.>. This class of methods aims at building a graph structure to reflect the dependence and independence relations in the data that match the empirical distribution. Nevertheless, the number of conditional independence tests that such algorithms would have to perform among any pair of nodes to test all possible relations is exponential and, because of this, the introduction of some approximations is required.
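As an illustration of the kind of test these methods rely on (our sketch, not the exact implementation used by PC or IAMB), a conditional independence test X ⊥ Y | Z for discrete data can be carried out as a chi-square test pooled over the strata of the conditioning variable:

```python
# Minimal sketch of a conditional independence test for discrete variables:
# chi-square statistics are accumulated over each stratum of Z.
import numpy as np
from scipy.stats import chi2, chi2_contingency

def cond_independent(x, y, z, alpha=0.05):
    stat, dof = 0.0, 0
    for value in np.unique(z):
        mask = (z == value)
        # contingency table of X vs Y within the stratum z == value
        table = np.array([[np.sum((x[mask] == a) & (y[mask] == b))
                           for b in np.unique(y)] for a in np.unique(x)])
        table = table[table.sum(axis=1) > 0][:, table.sum(axis=0) > 0]
        if table.size == 0 or min(table.shape) < 2:
            continue
        s, _, d, _ = chi2_contingency(table, correction=False)
        stat, dof = stat + s, dof + d
    p_value = chi2.sf(stat, dof) if dof > 0 else 1.0
    return p_value > alpha  # True: independence accepted, edge can be removed
```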
We now provide some details on two constraint-based algorithms that have been proven to be efficient and are of widespread usage: the PC algorithm <cit.> and the Incremental Association Markov Blanket (IAMB) <cit.>. The PC algorithm This algorithm <cit.> starts with a fully connected graph and, on the basis of pairwise independence tests, iteratively removes all the extraneous edges. To avoid an exhaustive search of separating sets, the edges are ordered so as to consider the correct ones early in the search. Once a separating set is found, the search for that pair ends. The PC algorithm orders the separating sets by increasing values of size l, starting from 0 (the empty set), until l = n-2 (where n is the number of variables). The algorithm stops when every variable has fewer than l-1 neighbors, since it can be proven that all valid sets must have already been chosen. During the computation, the bigger the value of l, the larger the number of separating sets that must be considered. However, as l gets big, the number of nodes with degree l or higher must have dwindled considerably. Thus, in practice, we only need to consider a small subset of all the possible separating sets. Incremental Association Markov Blanket algorithm Another constraint-based learning algorithm uses the Markov blankets <cit.> to restrict the subset of variables to test for independence. Thus, when this knowledge is available in advance, we do not need to test conditioning on all possible variables. A widely used and efficient algorithm for Markov blanket discovery is IAMB which, for each variable X, keeps track of a hypothesis set H(X), which is the set of nodes that may be parents of X. The goal is, for a given H(X), to obtain at the end of the algorithm a Markov blanket of X equal to B(X). IAMB consists of a forward and a backward phase. During the forward phase, it adds to H(X) all the possible variables that could be in B(X), while in the backward phase, it removes all the false positive variables from the hypothesis set, leaving the true B(X). The forward phase begins with an empty H(X) for each X. Then, iteratively, variables with a strong association with X (conditioned on all the variables in H(X)) are added to the hypothesis set. This association can be measured by a variety of non-negative functions, such as mutual information. As H(X) grows large enough to include B(X), the other variables in the network will have very little association with X, conditioned on H(X). At this point, the forward phase is complete. The backward phase starts with an H(X) that contains B(X) and false positives, which will have small conditional associations, while true positives will associate strongly. Using this test, the backward phase iteratively removes the false positives, until only the true members of B(X) remain. §.§ Score-based approaches These approaches aim at maximizing the likelihood ℒ of a set of observed data D, which can be computed as the product of the probabilities of the observations. Since we want to infer a model G that best explains the observed data, we define the likelihood of observing the data given a specific model G as ℒ(G;D) = ∏_d ∈ D P(d|G).
Practically, however, for any arbitrary set of data, the most likely graph is always the fully connected one, since adding an edge can only increase the likelihood of the data, i.e., this approach overfits the data. To overcome this limitation, the likelihood score is almost always combined with a regularization term that penalizes the complexity of the model in favor of sparser solutions <cit.>. As already mentioned, such an optimization problem leads to intractability, due to the enormous search space of valid solutions. Because of this, the optimization task is often solved with heuristic techniques. Before moving on to describe the main heuristic methods employed to face such complexity (see Section <ref>), we now give a short description of a particularly relevant and well-known score, called the Bayesian Information Criterion (BIC) <cit.>, as an example of a scoring function adopted by several score-based methods. The Bayesian Information Criterion BIC uses a score that consists of a log-likelihood term and a regularization term that depend on a model G and data D: BIC(G;D) = log ℒ(G;D) - (log m/2) dim(G), where D denotes the data, m denotes the number of samples, and dim(G) denotes the number of parameters in the model. We recall that, in this formulation, the BIC score should be maximized. Since, in general, dim(·) depends on the number of parents of each node, it is a good metric for model complexity. Moreover, each edge added to G increases the complexity of the model. Thus, the regularization term based on dim(·) favors graphs with fewer edges and, more specifically, fewer parents for each node. The term log m/2 essentially weighs the regularization term. The effect is that the higher the weight, the more sparsity will be favored over “explaining” the data through maximum likelihood. Notice that the likelihood is implicitly weighted by the number of data points, since each point contributes to the score. As the sample size increases, both the weight of the regularization term and the “weight” of the likelihood increase. However, the weight of the likelihood increases faster than that of the regularization term. This means that, with more data, the likelihood will contribute more to the score: we may trust our observations more and have less need for regularization. Statistically speaking, BIC is a consistent score <cit.>. Consequently, G contains the same independence relations as those implied by the true structure. § HEURISTIC SEARCH TECHNIQUES We now describe some of the main state-of-the-art search strategies that we took into account in this work. In particular, as stated in Section <ref>, we considered the following search methods: Hill Climbing, Tabu Search, and Genetic Algorithms. §.§ Hill Climbing Hill Climbing (HC) is one of the simplest iterative techniques that have been proposed for solving optimization problems. While HC consists of a simple and intuitive sequence of steps, it is a good search scheme to be used as a baseline for comparing the performance of more advanced optimization techniques. HC shares with other techniques (like Simulated Annealing <cit.> and Tabu Search <cit.>) the concept of neighborhood. Search methods based on this latter concept are iterative procedures in which a neighborhood N(i) is defined for each feasible solution i, and the next solution j is searched among the solutions in N(i). Hence, the neighborhood is a function N:S→ 2^S that assigns to each solution in the search space S a (non-empty) subset of S. The sequence of steps of the algorithm, for a maximization problem w.r.t.
a given objective function f, are the following: * choose an initial solution i in S; * find the best solution j in N(i) (i.e., the solution j such that f(j)≥ f(k) for every k in N(i)); * if f(j) ≤ f(i), then stop; else set i=j and go to Step 2. As is clear from the aforementioned algorithm, HC returns a solution that is a local maximum for the problem at hand. This local maximum does not generally correspond to the global maximum for the problem under exam, that is, HC does not guarantee to return the best possible solution for a given problem. To counteract this limitation, more advanced neighborhood search methods have been defined. One of these methods is Tabu Search, an optimization technique that uses the concept of “memory”. §.§ Tabu Search Tabu Search (TS) is a meta-heuristic that guides a local heuristic search procedure to explore the solution space beyond local optimality. One of the main components of this method is the use of an adaptive memory, which creates a more flexible search behavior. Memory-based strategies are therefore the main feature of TS approaches, founded on a quest for “integrating principles”, by which alternative forms of memory are appropriately combined with effective strategies for exploiting them. Tabus are one of the distinctive elements of TS when compared to HC or other local search methods. The main idea in considering tabus is to prevent cycling when moving away from local optima through non-improving moves. When this situation occurs, something needs to be done to prevent the search from tracing back its steps to where it came from. This is achieved by declaring tabu (disallowing) moves that reverse the effect of recent moves. For instance, let us consider a problem where solutions are binary strings of a prefixed length, and the neighborhood of a solution i consists of the solutions that can be obtained from i by flipping only one of its bits. In this scenario, if a solution j has been obtained from a solution i by changing one bit b, it is possible to declare a tabu to avoid flipping back the same bit b of j for a certain number of iterations (this number is called the tabu tenure of the move). Tabus are also useful to help in moving the search away from previously visited portions of the search space and, thus, perform more extensive exploration. As reported in <cit.>, tabus are stored in a short-term memory of the search (the tabu list) and usually only a fixed, limited quantity of information is recorded. It is possible to store complete solutions, but this has a negative impact on the computational time required to check whether a move is a tabu or not and, moreover, it requires a vast amount of space. The second option (which is the one commonly used) involves recording the last few transformations performed to obtain the current solution and prohibiting reverse transformations.
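The following minimal sketch (our illustration, matching the description above) shows such a short-term memory: recent moves are recorded, and a move is declared tabu if it would undo a transformation performed within the last tenure iterations.

```python
# Minimal sketch of a tabu list recording recent transformations.
from collections import deque

class TabuList:
    def __init__(self, tenure=7):
        self.moves = deque(maxlen=tenure)   # oldest tabu expires automatically

    def record(self, move):
        self.moves.append(move)             # move = (kind, i, j)

    def is_tabu(self, move):
        kind, i, j = move
        reverse = ("remove" if kind == "add" else "add", i, j)
        return reverse in self.moves        # forbid reversing a recent move

tabu = TabuList(tenure=7)
tabu.record(("add", 0, 3))                  # edge 0 -> 3 was just inserted
assert tabu.is_tabu(("remove", 0, 3))       # removing it again is tabu for now
```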
While tabus represent the main distinguishing feature of TS, this feature can introduce other issues in the search process. In particular, the use of tabus can prohibit attractive moves, or it may lead to an overall stagnation of the search process, with a lot of moves that are not allowed. Hence, several methods for revoking a tabu have been defined <cit.>, and they are commonly known as aspiration criteria. The simplest and most commonly used aspiration criterion consists of allowing a move (even if it is tabu) if it results in a solution with an objective value better than that of the current best-known solution (since the new solution has obviously not been previously visited). The basic TS algorithm, considering the maximization of the objective function f, works as follows: * randomly select an initial solution i in the search space S, and set i^* = i and k=0, where i^* is the best solution so far and k the iteration counter; * set k=k+1 and generate the subset V of the admissible neighborhood solutions of i (i.e., non-tabu or allowed by aspiration); * choose the best j in V and set i = j; * if f(i) > f(i^*), then set i^* = i; * update the tabu and the aspiration conditions; * if a stopping condition is met, then stop; else go to Step 2. The most commonly adopted conditions to end the algorithm are when the number of iterations K is larger than a maximum number of allowed iterations, or when no changes to the best solution have been performed in the last N iterations (as is the case in our tests). Specifically, in our experiments, for both HC and TS we modeled the possible valid solutions in the search space as binary adjacency matrices describing directed acyclic graphs. The starting point of the search is the empty graph (i.e., without any edge), and the search is stopped when the current best solution cannot be improved with any move in its neighborhood. The algorithms then consider a set of possible solutions (i.e., all the directed acyclic graphs) and navigate among them by means of two moves: insertion of a new edge or removal of an edge currently in the structure. We also recall that in the literature many alternatives are proposed to navigate the search space when learning the structure of Bayesian Networks, see <cit.>.
But, for the purpose of this work, we preferred to stick with the classical (and simpler) ones. §.§ Genetic Algorithms Genetic Algorithms (GAs) <cit.> are a class of computational models that mimic the process of natural evolution. GAs are often viewed as function optimizers, although the range of problems to which GAs have been applied is quite broad. Their peculiarity is that the potential solutions that undergo evolution are represented as fixed-length strings of characters or numbers. Generation by generation, GAs stochastically transform sets (also called populations) of candidate solutions into new, hopefully improved, populations of solutions, with the goal of finding one solution that suitably solves the problem at hand. The quality of each candidate solution is expressed by using a user-defined objective function called fitness. The search process of GAs is shown in Figure <ref>. To transform a population of candidate solutions into a new one, GAs make use of particular operators that transform the candidate solutions, called genetic operators: crossover and mutation. Crossover is traditionally used to combine the genetic material of two candidate solutions (parents) by swapping a part of one individual (a substring) with a part of the other. On the other hand, mutations introduce random changes in the strings representing candidate solutions. In order to be able to use GAs to solve a given optimization problem, candidate solutions must be encoded into strings and, often, the genetic operators (crossover and mutation) must also be specialized for the considered context. GAs have been widely used to learn the BN structure considering the search space of the DAGs. In the large majority of the works <cit.>, the GA encodes the connectivity matrix of the BN structure in its individuals. This is the same approach used in our study.
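A minimal sketch of this encoding (our illustration; names and helper structure are ours) represents each individual as a binary connectivity matrix and uses a topological-peeling test to reject candidates that would close a cycle:

```python
# Minimal sketch: individuals as binary connectivity matrices of a DAG.
import numpy as np

def is_acyclic(adj):
    """Kahn's algorithm: the graph is a DAG iff all nodes can be peeled off."""
    adj = adj.copy()
    indegree = adj.sum(axis=0)
    frontier = [v for v in range(len(adj)) if indegree[v] == 0]
    peeled = 0
    while frontier:
        v = frontier.pop()
        peeled += 1
        for w in np.flatnonzero(adj[v]):
            adj[v, w] = 0
            indegree[w] -= 1
            if indegree[w] == 0:
                frontier.append(w)
    return peeled == len(adj)

def random_individual(n, rng):
    """Initialization with exactly n // 2 connections, kept acyclic."""
    adj = np.zeros((n, n), dtype=int)
    while adj.sum() < n // 2:
        i, j = rng.integers(n, size=2)
        if i != j and adj[i, j] == 0:
            adj[i, j] = 1
            if not is_acyclic(adj):
                adj[i, j] = 0   # reject a connection that closes a cycle
    return adj

rng = np.random.default_rng(42)
individual = random_individual(10, rng)     # a valid random DAG on 10 nodes
```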
Our GA follows the common structure: * Generate the initial population. * Repeat the following operations until the total number of generations is reached: * Select a population of parents by repeatedly applying the parent selection method. * Create a population of offspring by applying the crossover operator to each set of parents. * For each offspring, apply one of the mutation operators with a given mutation probability. * Select the new population by applying the survivor selection method. * Return the best performing individual. Unless stated otherwise, the following description is valid for both GA variants (discrete and continuous). The initialization method and the variation operators used in our GA ensure that every individual is valid, i.e., each individual is guaranteed to be an acyclic graph. The initialization method creates individuals with exactly N / 2 connections between variables, where N is the total number of variables. A connection is only added if the graph remains acyclic. The nodes being connected are selected with uniform probability from the set of variables. In the continuous variant, since the input data is normalized, the value associated with each connection is randomly generated between 0.0 and 1.0, with uniform probability. The crossover operates on two parents and returns one offspring. The offspring is initially a copy of the first parent. Afterward, a node with at least one connection is selected from the second parent, and all the valid connections starting from this node are added to the offspring. Three mutation operators are considered in our GA implementation: add connection mutation, remove connection mutation, and perturbation mutation. The first two are applied in both GA variants, while the perturbation mutation is only applied in the continuous variant. The offspring resulting from the add connection mutation operator differs from the parent in having an additional connection. The newly connected nodes are selected with uniform probability. In the continuous variant, the value associated with the new connection is randomly generated between 0.0 and 1.0 with uniform probability. Similarly, the offspring resulting from the remove connection mutation operator differs from the parent in having one less connection. The nodes to be disconnected are selected with uniform probability. The perturbation mutation operator applies Gaussian perturbations to the values associated with at most N / 2 connections. The total number of perturbations may be less than N / 2 if the individual being mutated has fewer than N / 2 connections. Each value is perturbed following a Gaussian distribution with mean 0.0 and standard deviation 1.0. Any resulting value below 0.0 or above 1.0 is bounded to 0.0 or 1.0, respectively. Regardless of the GA variant, mutation is applied with a probability of 0.25. When a mutation is being applied, the specific mutation operator is selected with uniform probability from the available options (two in the discrete variant, and three in the continuous variant). The mutation operator of the GA works by applying the perturbation to the existing values associated with each edge of the solutions.
Since such a modification to the value should not be highly disruptive, a common choice is to employ a Gaussian probability distribution with mean 0 and standard deviation 1. In terms of parameters, a population of size 100 is used, with the evolutionary process being able to perform 100 generations. Parents are selected by applying a tournament of size 3. The survivor selection is elitist in the sense that it ensures that the best individual survives to the next generation. § RESULTS AND DISCUSSION We made use of a large set of simulations on randomly generated datasets with the aim of assessing the characteristics of the state-of-the-art techniques for structure learning of BNs. We generated data for both the case of discrete (2 values, i.e., 0 or 1) and continuous (we used 4 levels, i.e., 1, 2, 3, and 4) random variables. Notice that, for computational reasons, we discretized our continuous variables using only 4 categories. For each of them, we randomly generated both the structure (i.e., 100 weakly connected DAGs having 10 and 15 nodes) and the related parameters (we generated random values in (0,1) for each entry of the conditional probability tables attached to the network structure) to build the simulated BNs. We also considered 3 levels of density of the networks, namely 0.4, 0.6, and 0.8 of the complete graph. For each of these scenarios, we randomly sampled from the BNs several datasets of different size, based on the number of nodes. Specifically, for networks of 10 variables, we generated datasets of 10, 50, 100, and 500 samples, while for 15 variables, we considered datasets of 15, 75, 150, and 750 samples. Furthermore, we also considered additional noise in the samples as a set of random entries (both false positives and false negatives) in the dataset. We recall that, depending on sample size, the probability distribution encoded in the generated datasets may differ from the one subsumed by the related BN. However, here we also consider additional noise (besides the one due to sample size) due, for example, to errors in the observations. We call noise free the datasets in which such additional noise is not applied. To this extent, we considered noise-free datasets (noise rate equal to 0%) and datasets with an error rate of 10% and 20%. In total, this led us to 14400 random datasets. For all of them, we considered both the constraint-based and the score-based approaches to structural learning. From the former category of methods, we considered the PC algorithm <cit.> and IAMB <cit.>. We recall that these methods return a partially directed graph, leaving undirected the arcs that are not unequivocally directable. In order to have a fair comparison with the score-based methods, which return DAGs, we randomly resolved the ambiguities by generating random solutions (i.e., DAGs) consistent with the statistical constraints obtained by PC and IAMB (that is, we selected a random direction for the undirected arcs). Moreover, among the score-based approaches, we considered 3 maximum likelihood scores, namely log-likelihood <cit.>, BIC <cit.>, and AIC <cit.> (for continuous variables we used the corresponding Gaussian likelihood scores). For all of them, we repeated the inference on each configuration by using HC, TS, and GA as search strategies.
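For concreteness, the following sketch (ours; the CPT representation is a hypothetical choice, not the authors' code) shows how a dataset can be drawn from a binary BN by ancestral sampling, i.e., by sampling each variable after its parents, as in the simulated-data setup described above:

```python
# Minimal sketch: ancestral sampling of m observations from a binary BN.
import numpy as np

def sample_dataset(adj, cpts, topo_order, m, rng):
    """adj[u, v] = 1 if u is a parent of v; cpts[v] maps a tuple of parent
    values (0/1) to P(X_v = 1 | parents); topo_order is a topological order."""
    n = len(adj)
    data = np.zeros((m, n), dtype=int)
    for s in range(m):
        for v in topo_order:                    # parents are sampled first
            parent_values = tuple(data[s, np.flatnonzero(adj[:, v])])
            data[s, v] = int(rng.random() < cpts[v][parent_values])
    return data

# Toy usage: X0 -> X1, with CPT entries drawn as random values in (0, 1).
adj = np.array([[0, 1], [0, 0]])
cpts = {0: {(): 0.3}, 1: {(0,): 0.9, (1,): 0.2}}
data = sample_dataset(adj, cpts, topo_order=[0, 1], m=500,
                      rng=np.random.default_rng(0))
```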
This led us to 11 different methods to be compared, for a total of 158400 independent experiments. To evaluate the obtained results, we considered both false positives, FP (i.e., arcs that are not in the generative BN but that are inferred by the algorithm), and false negatives, FN (i.e., arcs that are in the generative BN but that are not inferred by the algorithm). Also, with TP (true positives) being the arcs in the generative model and TN (true negatives) the arcs that are not in the generative model, for all the experiments we considered the following metrics: Precision = TP/(TP + FP), Recall = TP/(TP + FN), Specificity = TN/(TN + FP), Accuracy = (TP + TN)/(TP + FP + TN + FN). §.§ Results of the Simulations In this Section, we comment on the results of our simulations. As anticipated, we computed precision, recall (sensitivity), and specificity, as well as accuracy and Hamming distance, to assess the performance and the underfit/overfit trade-off of the different approaches. Overall, from the obtained results, it is straightforward to notice that methods including more edges in the inferred networks are also more subject to errors in terms of accuracy, which may also reflect a bias of this metric, which tends to penalize solutions with false positive edges rather than false negatives. On the other hand, since a typical goal of the problems involving the inference of BNs is the identification of novel relations (that is, proposing novel edges in the network), underfitting approaches could be more effective in terms of accuracy, but less useful in practice. With a more careful look, the first evidence we obtained from the simulations is that the two parameters with the highest impact on the inferred networks are the density and the number of nodes (i.e., network size). For this reason, we first focused our attention on these two parameters, and we analyzed how the different tested combinations of methods and scores behave. As shown in Figure <ref>, all the approaches seem to perform better when dealing with low-density networks; in fact, for almost all the methods the accuracy is higher for density equal to 0.4, while it is lower for density equal to 0.8. Since each edge of the network is a parameter that has to be “learned”, it is reasonable to think that the more edges are present in the BN, the harder the problem to be solved when learning it becomes. Moreover, we can also observe that the results in terms of accuracy obtained for BNs of discrete variables are slightly better than those achieved on the continuous ones. To this extent, the only outlier is the loglik score combined with HC and with TS on datasets with continuous variables, for which the trend is the opposite w.r.t.
that of all the other approaches. In fact, in both these cases, the accuracy is higher on high-density datasets, and this is likely due to a very high overfit of the approach. It is interesting to notice that GA (combined with the loglik score) is less affected by this problem, and this trend will also be shown in the next analyses. In addition to the accuracy, we also computed the Hamming distance between the reconstructed solutions and the BNs used to generate the data, in order to quantify the errors of the inference process. This analysis showed that, besides the network density, the number of nodes (parameters to be learned) also influences the results, as reported in Figure <ref>. It is interesting to observe that, in the adopted experimental setup, we set the number of samples to be proportional to the number of nodes of the network; that is, for 10 nodes, in the same configurations we have a lower number of samples compared to the ones for 15 nodes. From a statistical point of view alone, we would expect the problem to be easier when having more samples to build the BN, since this would lead to more statistical power and, intuitively, should compensate for the fact that with 15 nodes we have more parameters to learn than in the case of 10 nodes. While this may be the case (in fact, we have similar accuracy for both 10 and 15 nodes), we also observe a consistently higher Hamming distance with more nodes. In fact, when dealing with more variables we observed a shift in the performance; that is, even when the density is low, we observe more errors, manifesting as higher values of the Hamming distance. This is due to the fact that, when increasing the number of variables, we also increase the complexity of the solutions. A further analysis we performed was devoted to assessing the impact of both overfit and underfit. As it is possible to observe from the plots in Figure <ref> and Figure <ref>, we obtain two opposite behaviors; namely, the combinations of scores and search strategies show evidently different trends in terms of sensitivity (recall), specificity, and precision. In detail, iamb and pc tend to underfit, since they both produce networks with consistently low density. While they achieve similar (high) results in terms of accuracy (see Figure <ref>), their trend toward underfit does not make them suited for the identification of novel edges, making them more indicated for descriptive purposes. On the other hand, the loglik score, (mostly) independently from the adopted search technique, consistently overfits. Both these behaviors can be observed in the violin plots of Figure <ref> and Figure <ref>: for each density, we can observe that iamb/pc results have very low recall values, while the distribution of the results of the loglik score is centered on higher values. The other two scores, i.e., AIC and BIC, present a better trade-off between overfit and underfit, with BIC being less affected by overfit and AIC reconstructing slightly denser networks. Once again, this trade-off between the two regularizators is well known in the literature and points to AIC being more suited for prediction and BIC for descriptive purposes. Another relevant result of our analysis is the characterization of the performance of GA in terms of sensitivity (recall). Specifically, it must be noticed that, for all three regularizators, GA achieves similar results in terms of both precision and specificity if compared with HC and TS, but in terms of sensitivity (recall) it presents reduced overfit. In fact, as it
is possible to observe in the plots of Figure <ref> and Figure <ref>, for each score (i.e., loglik, AIC, and BIC), the sensitivity results of GA are lower than those of both HC and TS, highlighting the reduced impact of overfit. In summary, we can draw the following conclusions. * (i) dense networks are harder to learn, see Figure <ref>; * (ii) the number of nodes affects the complexity of the solutions, leading to higher Hamming distance (number of errors) even with similar accuracy, see Figure <ref> and Figure <ref>; * (iii) networks with continuous variables are harder to learn than the binary ones, see Figure <ref>; * (iv) the iamb and pc algorithms tend to underfit, while loglik overfits, see Figure <ref> and Figure <ref>; * (v) GAs tend to reduce overfit, see Figure <ref> and Figure <ref>. We conclude the section by providing p-values by the Mann–Whitney U test <cit.> in support of these claims in Table <ref>. The claims for greater and smaller values, e.g., of accuracy, are tested with the one-tailed test alternatives. The tests are performed on all the configurations of our simulations. § CONCLUSIONS Bayesian Networks are a widespread technique to model dependencies among random variables. A challenging problem when dealing with BNs is the one of learning their structures, i.e., the statistical dependencies in the data, which may sometimes pose a serious limit to the reliability of the results. Despite their extensive use in a vast set of fields, to the best of our knowledge, a quantitative assessment of the performance of different state-of-the-art methods to learn the structure of BNs has never been performed. In this work, we aim at going in the direction of filling this gap, and we presented a study of different state-of-the-art approaches for structural learning of Bayesian Networks on simulated data, considering both discrete and continuous variables. To this extent, we investigated the characteristics of 3 different likelihood scores, combined with 3 commonly used search strategies (namely, HC, TS, and GA), as well as 2 constraint-based techniques for the BN inference (i.e., the iamb and pc algorithms). Our analysis identified the factors having the highest impact on the performance, that is, density, number of variables, and variable type (discrete vs continuous). In fact, as shown here, these settings affect the number of parameters to be learned, hence complicating the optimization task. Furthermore, we also discussed the overfit/underfit trade-off of the different tested techniques, with the constraint-based approaches showing trends toward underfitting, and the loglik score showing high overfit. Interestingly, in all the configurations, GA showed evidence of reducing the overfit, leading to sparser structures. Overall, we place our work as a starting effort to better characterize the task of learning the structure of Bayesian Networks from data, which may lead in the future to a more effective application of this approach. In particular, we focused on the more general task of learning the structure of a BN <cit.>, and we did not dwell on several interesting domain-specific topics, which we leave for future investigations <cit.>. § ACKNOWLEDGMENTS This work was also financed through the Regional Operational Programme CENTRO2020 within the scope of the project CENTRO-01-0145-FEDER-000006. § REFERENCES
http://arxiv.org/abs/1704.08676v2
{ "authors": [ "Stefano Beretta", "Mauro Castelli", "Ivo Goncalves", "Roberto Henriques", "Daniele Ramazzotti" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20170427174022", "title": "Learning the structure of Bayesian Networks: A quantitative assessment of the effect of different algorithmic schemes" }
We report on a quantum electrodynamic (QED) investigation of the interaction between a ground-state atom and another atom in an excited state. General expressions, applicable to any atom, are indicated for the long-range tails which are due to virtual resonant emission and absorption into and from vacuum modes whose frequency equals the transition frequency to available lower-lying atomic states. For identical atoms, one of which is in an excited state, we also discuss the mixing term which depends on the symmetry of the two-atom wave function (these evolve into either the gerade or the ungerade state for close approach), and we include all nonresonant states in our rigorous QED treatment. In order to illustrate the findings, we analyze the fine-structure resolved van der Waals interaction for nD–1S hydrogen systems with n=8, 10, 12 and find surprisingly large numerical coefficients. 31.30.jh, 12.20.Ds, 31.15.-p, 34.20.Cf Virtual Resonant Emission and Oscillatory Long–Range Tails in van der Waals Interactions of Excited States: QED Treatment and Applications V. Debierre Introduction.—While the long-range interaction between two ground-state atoms has been fully understood in all interatomic separation regimes since the work of Casimir and Polder <cit.>, a completely new situation arises when one of the atoms is in an excited state <cit.>. In particular, several recent studies <cit.> have reported on long-range, spacewise-oscillating tails, which decay as slowly as R^-2 (R is the interatomic separation). For excited reference states, these tails parametrically dominate over the usual Casimir-Polder type R^-7 interaction <cit.>. Conflicting results have been obtained for the oscillating tails <cit.>. One important question concerns the ratio of the oscillatory, resonant terms to the non-oscillatory, nonresonant contributions to the interaction, and the matching and interpolation of the results with the familiar close-range, nonretarded limit of the interatomic interaction. Our aim here is to advance the theory of excited-state interatomic interactions, by including the nonresonant states, the dynamically induced correction to the atomic decay width (a distance-dependent imaginary part of the energy shift), and the additional terms that occur for identical atoms (namely, the gerade-ungerade mixing term <cit.>). As an example application, we study a system where a highly excited D state interacts with a ground-state (1S) hydrogen atom (see Refs. <cit.>). In this system, the availability of low-energy P and F states for virtual dipole transitions from the nD state makes the oscillating long-range tails relevant. An improved understanding is necessary for the interpretation of experiments involving general Rydberg states <cit.>, in regard to the determination of fundamental constants. We concentrate on nD–1S interactions with n=8,10,12. The projection- and symmetry-averaged C_6 van der Waals coefficient of the 12D–1S system amounts to a surprisingly large numerical value, ⟨ C_6(12D;1S) ⟩ = 227 756 in atomic units. SI mksA units are used throughout this Letter. Formalism.—The general idea behind the matching of the scattering amplitude and the effective Hamiltonian has been described in Chap. 85 of Ref. <cit.>, in the context of the interatomic interaction.
In short, one uses the relation ⟨ψ'_A, ψ'_B | U( r⃗_A, r⃗_B, R⃗ ) | ψ_A, ψ_B ⟩ = (ħ/T) S_A'B'AB, where | ψ_A, ψ_B ⟩ is the initial state of the two-atom system, | ψ'_A, ψ'_B ⟩ is the final state, and H_eff = U( r⃗_A, r⃗_B, R⃗ ) is the effective potential, which depends on the electron coordinates r⃗_i (where i = A,B denotes the atom). The interatomic distance vector is R⃗. Finally, T is the long time interval which results from the integration over the interaction in the S matrix formalism [see Eq. (85.2) of Ref. <cit.>]. It becomes necessary to generalize the treatment outlined in Eqs. (85.1) to (85.14) of Ref. <cit.> to the case of identical atoms, one of which is in an excited state. In this case, one has to treat a mixing term <cit.>, which describes a scattering process in which the state | ψ_A, ψ_B ⟩ is scattered into the state | ψ_B, ψ_A ⟩; the two atoms in this case “interchange” their quantum states. The eigenstates of the Hamiltonian <cit.> are states of the form (1/√(2))( | ψ_A, ψ_B ⟩ ± | ψ_B, ψ_A ⟩), and the interaction energy Δ E is the sum of a direct term (which is contained in the canonical derivations, e.g., Ref. <cit.>) and a mixing term, which is added here and whose sign depends on the symmetry of the two-atom state (±). We find the following general expression (further details can be found in the supplementary material, Ref. <cit.>), including retardation, for the electrodynamic interaction between two atoms A and B in arbitrary states, Δ E = (i/ħ) ∫_0^∞ (dω/2π) ω^4 D_ij(ω, R⃗) D_kℓ(ω, R⃗) × [α_A,ik(ω) α_B,jℓ(ω) ± α_AB,ik(ω) α̃_AB,jℓ(ω)], where the last term describes the mixing and is present only for identical atoms. The photon propagator (in the temporal gauge) and the tensor polarizabilities are given by D_ij(ω, R⃗) = [ħ/(4πϵ_0 c^2)] [ α_ij + β_ij f(ω, R) ] e^{i √(ω^2 + iϵ) R/c}/R, α_A,ij(ω) = ∑_v ( ⟨ψ_A | d_Ai | v_A ⟩⟨ v_A | d_Aj | ψ_A ⟩ / (E_v,A - ħω - iϵ) + ⟨ψ_A | d_Ai | v_A ⟩⟨ v_A | d_Aj | ψ_A ⟩ / (E_v,A + ħω - iϵ) ), α_AB,ij(ω) = ∑_v ( ⟨ψ_A | d_Ai | v_A ⟩⟨ v_B | d_Bj | ψ_B ⟩ / (E_v,A - ħω - iϵ) + ⟨ψ_A | d_Ai | v_A ⟩⟨ v_B | d_Bj | ψ_B ⟩ / (E_v,A + ħω - iϵ) ). Here, f(ω, R) = c/(|ω| R) - c^2/(ω^2 R^2), and the tensor structures are α_ij = δ_ij - R_i R_j/R^2 and β_ij = δ_ij - 3 R_i R_j/R^2. The speed of light is c, and ϵ_0 is the vacuum permittivity. The (excited) state of atom A is |ψ_A⟩, and d⃗_A is the electric dipole operator for the same atom. We also write E_v,A ≡ E_v - E_A and E_v,B ≡ E_v - E_B. As usual, the dipole polarizability is given by a sum over all virtual states of atom A which are accessible from |ψ_A⟩ through an electric dipole transition. The tensor polarizability α̃_AB,ij(ω) is obtained from α_AB,ij(ω) by the replacement E_v,A → E_v,B in the propagator denominators.
For excited reference states, it is crucial that the polarizabilities (<ref>) and (<ref>) have the poles placed according to the Feynman prescription; this follows from the time-ordered dipole operators which naturally occur in time-ordered products of the interaction Hamiltonian in the S matrix. If atom A is in an excited state and B in the ground state, then the interaction energy Δ E = Q + W [see (<ref>)] can be split into a Wick-rotated term Q (obtained by the rotation ω → iω), Q = -(1/ħ) ∫_0^∞ (dω/2π) ω^4 D_ij(iω, R⃗) D_kℓ(iω, R⃗) × [α_A,ik(iω) α_B,jℓ(iω) ± α_AB,ik(iω) α̃_AB,jℓ(iω)], and a pole term W from the residues at ω = -E_m,A/ħ + iϵ, W = ∑_{E_m,A < 0} [⟨ψ_A | d_Ai | m_A ⟩ / ((4πϵ_0)^2 R^6)] [ ⟨ m_A | d_Ak | ψ_A ⟩ α_B,jℓ(E_m,A/ħ) ± ⟨ m_A | d_Bk | ψ_B ⟩ α_AB,jℓ(E_m,A/ħ) ] f_ijkℓ(r_m,A), with f_ijkℓ(r) = -exp(-2ir) [β_ij β_kℓ (1 + 2ir) - (2 α_ij β_kℓ + β_ij β_kℓ) r^2 - 2i α_ij β_kℓ r^3 + α_ij α_kℓ r^4], Re f_ijkℓ(r) = -cos(2r) [β_ij β_kℓ - (2 α_ij β_kℓ + β_ij β_kℓ) r^2 + α_ij α_kℓ r^4] - 2r sin(2r) [β_ij β_kℓ - α_ij β_kℓ r^2], Im f_ijkℓ(r) = -(1/2) { -2 sin(2r) [β_ij β_kℓ - (2 α_ij β_kℓ + β_ij β_kℓ) r^2 + α_ij α_kℓ r^4] + 4r cos(2r) [β_ij β_kℓ - α_ij β_kℓ r^2] }, where r_m,A = E_m,A R/(ħ c), E_m,A ≡ E_m_A - E_A. Here, the sum is taken over all states m that are accessible from |ψ_A⟩ by a dipole transition and of lower energy than |ψ_A⟩. For a general atom, the generalization is trivial: one simply sums the dipole operators of atom A over the electrons. The pole term induces both a real, oscillatory, distance-dependent energy shift as well as a correction to the width of the excited state, Im W = -(ħ/2) Γ, where Γ is obtained from Eq. (<ref>) by substituting for f_ijkℓ(r) the expression in curly brackets in Eq. (<ref>). From a QED point of view, the real part of the energy shift corresponding to the pole term is due to a very peculiar process, namely, resonant virtual emission into vacuum modes whose angular frequency matches the resonance condition ħω = -E_m,A = |E_m,A|. The resonant emission is accompanied by resonant absorption, and therefore leads to a real rather than imaginary energy shift. In the ladder and crossed-ladder Feynman diagrams (see Fig. <ref>), the virtual electron line of atom A, in state |ψ_A⟩, would turn into a resonant lower-lying virtual state, whereas the ground-state atom line is excited into a “normal” energetically higher virtual state |v_B⟩. The imaginary part of the pole term describes a process where the virtual photon becomes real and is emitted by the atom, in analogy to the imaginary part of the self energy <cit.>. Feynman propagators allow us to reduce the calculation to only two Feynman diagrams, which capture all possible time orderings (in contrast to time-ordered perturbation theory). In Ref. <cit.>, a situation of two non-identical atoms is considered, with mutually close resonance energies ħω_A and ħω_B. Setting E_m,A = -ħω_A and E_q,B = ħω_B, the authors of Ref. <cit.> assume that ω_A ≈ ω_B, and define Δ_AB = ħω_A - ħω_B with |Δ_AB| ≪ ħω_A, ħω_B. Furthermore, they restrict the sum over virtual states in Eq. (<ref>) to the resonant state only, and they only keep the term 1/(E_m,A + E_q,B) in the sum over virtual states, in the polarizability α_B(E_m,A/ħ) [see Eq. (<ref>)]. Under their assumptions [see Eq. (4) of Ref. <cit.>], |1/(E_m,A + E_q,B)| = |-1/Δ_AB| ≫ |1/(E_q,B - E_m,A)| ≈ 1/(2ħω_B). Under the restriction to the resonant virtual states, the direct term in Eq. (<ref>) [proportional to α_B,jℓ(E_m,A/ħ)] matches that reported in Ref.
<cit.> if we average the latter over the interaction time T > 2R/c. Our result adds the contribution from nonresonant virtual states, which allows us to match the result with the close-range (van der Waals) limit, as well as the mixing term [proportional to α_AB,jℓ(E_m,A/ħ)] and the imaginary part of the energy shift (width term). For the mixing term to be nonzero, we need the orbital angular momentum quantum numbers to fulfill the relation ℓ_A = ℓ_B or |ℓ_A - ℓ_B| = 2, by virtue of the usual selection rules of atomic physics. Furthermore, we find that the full consideration of the Wick-rotated term and the pole term is crucial for obtaining numerically correct results for the interaction energies, up to surprisingly large interatomic distances. Numerical Calculations.—In the following, we aim to apply the developed formalism to nD–1S atomic hydrogen systems. The interaction energy depends both on the spin orientation of the electron (total angular momentum J) as well as on its projection μ onto the quantization axis <cit.>. One may eliminate this dependence by evaluating the average over the fine-structure resolved states. Short Range.—For interatomic separations in the range a_0 ≪ R ≪ a_0/α (where a_0 is the Bohr radius), the interaction energy (<ref>) is well approximated as Δ E ≈ Δ E_vdW, where Δ E_vdW = -[1/(4πϵ_0)^2] (β_ij β_kℓ/R^6) ∑_v ∑_q [1/(E_v,A + E_q,B)] × [ ⟨ψ_A | d_Ai | v_A ⟩⟨ v_A | d_Ak | ψ_A ⟩⟨ψ_B | d_Bj | q_B ⟩⟨ q_B | d_Bℓ | ψ_B ⟩ ± ⟨ψ_A | d_Ai | v_A ⟩⟨ v_B | d_Bk | ψ_B ⟩⟨ψ_B | d_Bj | q_B ⟩⟨ q_A | d_Aℓ | ψ_A ⟩ ] = -(1/R^6)(D_6(A;B) ± M_6(A;B)). Here, D_6 is the direct, and M_6 is the mixing coefficient <cit.>. For energetically lower states in atom A (with E_v,A = E_m,A < 0), the representation (<ref>) is obtained by carefully considering the contributions from the Wick-rotated term Q and the pole term W. For the fine-structure average of the direct term D_6, we have ⟨ D_6(nD;1S) ⟩ = ⟨ D_6^(P)(nD;1S) ⟩ + ⟨ D_6^(F)(nD;1S) ⟩, where the virtual-P-state contribution ⟨ D_6^(P)(nD;1S) ⟩ and the virtual-F-state contribution ⟨ D_6^(F)(nD;1S) ⟩ are given in Table <ref>. Numerically, we find that the mixing term M_6 is smaller than the direct term D_6 by at least four orders of magnitude, for all fine-structure resolved nD states and for all distance ranges investigated in this Letter. This trend follows the pattern observed for the coefficients listed in Table <ref> and is in contrast to the 2S–1S system, where both terms are of comparable magnitude <cit.>. Long Range.—One might think that the 1/R^2 pole term from Eq. (<ref>) should easily dominate the interaction energy in the range R ≫ a_0/α. Indeed, power counting in the fine-structure constant α, according to Ref. <cit.>, shows that the cosine and sine terms in W are asymptotically given by W_cos/E_h ∼ {α^4/ρ^2 cos(αρ), α^2/ρ^4 cos(αρ), 1/ρ^6 cos(αρ)}, W_sin/E_h ∼ {α^3/ρ^3 sin(αρ), α/ρ^5 sin(αρ)}, where ρ = R/a_0 and E_h is the Hartree energy. A comparison to the van der Waals term, given in Eq. (<ref>), and the Wick-rotated term (<ref>), Δ E_vdW ∼ E_h/ρ^6, Q ∼ E_h/(αρ^7) (for ρ ≫ 1/α), shows that all terms (pole term, Wick-rotated term, and van der Waals term) are of the same order of magnitude for ρ ∼ 1/α, while the pole term should parametrically dominate for ρ > 1/α. However, this consideration does not take into account the scaling of the terms with the principal quantum number n. While we find that the D_6 coefficients typically grow as n^4 for a given manifold of states (see also Ref.
<cit.>), the energy differences E_m,A for adjacent lower-lying states are proportional to 1/(n-1)^2 - 1/n^2 ∼ 1/n^3 for large n, and the fourth power of the energy difference E_m,A enters the prefactor of the 1/R^2 pole term. Hence, it is interesting to compare the parametric estimates to a concrete calculation for excited nD states; this is also important in order to gauge the importance of the nonresonant contributions to the interaction energy, which were left out in Ref. <cit.>. But let us first write down the leading asymptotic terms for all long-range contributions of interest. For R ≫ ħc/ℒ, where ℒ is the Lamb shift energy of about 1 GHz (see Ref. <cit.>), the Wick-rotated contribution attains the familiar 1/R^7 asymptotics from the Casimir–Polder formalism <cit.>, Q^(dir) = -[ħc/(8π)] α_nD,ij(0) α_1S(0) / [(4πϵ_0)^2 R^7] (13 δ_ij + 15 R_i R_j/R^2) (R → ∞). This tail is parametrically suppressed in comparison to the leading 1/R^2 pole contribution, W^(dir) = -∑_{E_m < E_nD} (E_m,A/ħc)^4 cos(2 E_m,A R/ħc) / [(4πϵ_0)^2 R^2] × α_ij ⟨ nD | d_Ai | m_A ⟩⟨ m_A | d_Aj | nD ⟩ α_1S(E_m,A/ħ) (R → ∞). In the intermediate range a_0 ≪ R ≪ ħc/ℒ, the Wick-rotated contribution has a nonretarded 1/R^6 tail, due to a nonretarded contribution from virtual nP and nF states which are displaced from the nD state only by the Lamb shift [see Eqs. (23) and (24) of Ref. <cit.>], Q^(dir) ∼ -D_6(nD;1S)/R^6, a_0/α ≪ R ≪ ħc/ℒ. The fine-structure average of D_6(nD;1S) is given by <cit.> ⟨ D_6(nD;1S) ⟩ = (81/8) n^2 (n^2 - 7). For the mixing term, simplifications are scarce; the leading long-range asymptotics of the Wick-rotated term read Q^(mix) = -[ħc/(8π)] α_nD 1S,ik(0) α_nD 1S,jℓ(0) / [(4πϵ_0)^2 R^7] (3 α_ij α_kℓ + 5 α_ij β_kℓ + 5 β_ij β_kℓ) (R → ∞). The leading pole contribution (in the long range) is given by a sum over virtual P states which enter the mixed polarizability α_nD 1S,jℓ, W^(mix) = -∑_{E_m < E_nD} [1/(4πϵ_0)^2] (E_m,A/ħc)^4 cos(2 E_m,A R/ħc)/R^2 × α_ij α_kℓ α_nD 1S,jℓ(E_m,A/ħ) ⟨ nD | d_Ai | m_A ⟩⟨ m_A | d_Ak | 1S ⟩ (R → ∞). In the intermediate range, one has Q^(mix) ∼ -M_6(nD;1S)/R^6, a_0/α ≪ R ≪ ħc/ℒ, where M_6(nD;1S) is the generalization of D_6 to the mixing term [see Ref. <cit.> and Eq. (67) of Ref. <cit.>]. In Fig. <ref>, we compare the magnitudes of the Wick-rotated term and the pole term in the intermediate range a_0/α ≪ R ≪ ħc/ℒ, and for very large separations R ∼ ħc/ℒ. While a parametric analysis [Eq. (<ref>)] would suggest dominance of the pole term in the intermediate range, a numerical calculation reveals a different behavior, with the Wick-rotated term dominating the interaction, due to the variability of the numerical coefficients multiplying the parametric estimates given in Eq. (<ref>).
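As a quick numerical illustration (ours) of the n scaling just discussed, the fine-structure averaged coefficient ⟨D_6(nD;1S)⟩ = (81/8) n^2 (n^2 - 7) can be evaluated for the states considered in this Letter:

```python
# Minimal check of <D_6(nD;1S)> = (81/8) n^2 (n^2 - 7), in atomic units.
for n in (8, 10, 12):
    d6 = 81.0 / 8.0 * n**2 * (n**2 - 7)
    print(f"n = {n:2d}:  <D_6> = {d6:10.1f} a.u.")
# n = 12 gives 199746.0 a.u., of the same order as the projection- and
# symmetry-averaged <C_6(12D;1S)> = 227756 a.u. quoted in the abstract.
```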
Conclusions.—We have shown that the consistent use of Feynman propagators and the concomitant virtual photon integration contours lead to the prediction of long-range tails for excited-state interactions. Pole terms are picked up for virtual states |v_A> = |m_A> of lower energy than the reference state of the excited atom A. The pole contribution to the energy shift is complex rather than real (it includes a width term Γ = −2 Im Δ E), is spacewise oscillating and, in the long range, behaves as cos[2(E_m − E_A)R/(ħc)]/R², where E_A is the reference state energy and E_m < E_A that of the low-energy virtual state. For excited states, both the direct as well as the exchange (gerade-ungerade mixing) term can be expressed as a sum of a Wick-rotated contribution [Eq. (<ref>)] and a pole term [Eq. (<ref>)]. Our inclusion of the nonresonant terms in the interaction energy enables us to match the very-long-range, oscillatory result against the well-known close-range, nonretarded van der Waals limit, and to carry out numerical calculations in the intermediate region. We also include the width term, and the gerade-ungerade mixing term which pertains to excited-state interactions of identical atoms.

For nD–1S interactions, we have shown that despite parametric suppression, the Wick-rotated term, which is non-oscillatory and contains the nonresonant states, still dominates in the intermediate distance range a_0/α ≲ R ≪ ħc/ℒ (see Fig. <ref>). The very-long-range, oscillatory tail of the interaction is relevant only for very large interatomic distances. This conclusion holds for nD–1S interactions as well as nS–1S systems <cit.>. The reason for the suppression is that the numerical coefficients which multiply the parametric estimates given in Eq. (<ref>) drastically depend on the particular term in the energy. This is in part due to the scaling of the coefficients with the principal quantum number. E.g., for nD–1S interactions, the 1/R² leading oscillatory tail from Eq. (<ref>) is of order E_h α⁴ cos(αρ)/ρ², yet multiplied by numerical coefficients of order 10⁻⁶ [in addition to the factor α⁴; see the supplementary material <cit.>, Eq. (<ref>), Table <ref> and Fig. <ref>]. By contrast, the non-oscillatory terms of order E_h/ρ⁶ are multiplied by coefficients of order 10⁴ … 10⁶. This behavior of the coefficients changes any predictions based on the parametric estimates given in Eq. (<ref>) by ten orders of magnitude as compared to a situation with coefficients of order unity.

Our results are important for an improved analysis of pressure shifts, and of fluctuating-dipole-induced energy shifts, in atomic beam spectroscopy with Rydberg states, where these effects have been identified as notoriously problematic in recent years (see pp. 134 and 151 of Ref. <cit.>). An improved determination of the Rydberg constant based on Rydberg-state spectroscopy could resolve the muonic hydrogen proton radius puzzle, because the smaller proton radius measured in Ref. <cit.> leads to a Rydberg constant which is discrepant with regard to the current CODATA value <cit.>.

Acknowledgments.—This research has been supported by the NSF (grant PHY-1403937).

[CaPo1948] H. B. G. Casimir and D. Polder, The Influence of Radiation on the London–van der Waals Forces, Phys. Rev. 73, 360–372 (1948).
[Ch1972] M. I. Chibisov, Dispersion Interaction of Neutral Atoms, Opt. Spectrosc. 32, 1–3 (1972).
[DeYo1973] W. J. Deal and R. H. Young, Long-Range Dispersion Interactions Involving Excited Atoms; the H(1s)–H(2s) Interaction, Int. J. Quantum Chem. 7, 877–892 (1973).
[AdEtAl2017vdWi] C. M. Adhikari, V. Debierre, A. Matveev, N. Kolachevsky, and U. D. Jentschura, Long-range interactions of hydrogen atoms in excited states. I. 2S–1S interactions and Dirac-δ perturbations, Phys. Rev. A 95, 022703 (2017).
[JeEtAl2017vdWii] U. D. Jentschura, V. Debierre, C. M. Adhikari, A. Matveev, and N. Kolachevsky, Long-range interactions of excited hydrogen atoms. II. Hyperfine-resolved 2S–2S system, Phys. Rev. A 95, 022704 (2017).
[PoTh1995] E. A. Power and T. Thirunamachandran, Dispersion forces between molecules with one or both molecules excited, Phys. Rev. A 51, 3660–3666 (1995).
[SaBuWeDu2006] H. Safari, S. Y. Buhmann, D.-G. Welsch, and H. T. Dung, Body-assisted van der Waals interaction between two atoms, Phys. Rev. A 74, 042101 (2006).
[SaKa2015] H. Safari and M. R. Karimpour, Body-Assisted van der Waals Interaction between Excited Atoms, Phys. Rev. Lett. 114, 013201 (2015).
[Be2015] P. R. Berman, Interaction energy of nonidentical atoms, Phys. Rev. A 91, 042127 (2015).
[MiRa2015] P. W. Milonni and S. M. H. Rafsanjani, Distance dependence of two-atom dipole interactions with one atom in an excited state, Phys. Rev. A 92, 062711 (2015).
[DoGuLa2015] M. Donaire, R. Guérout, and A. Lambrecht, Quasiresonant van der Waals Interaction between Nonidentical Atoms, Phys. Rev. Lett. 115, 033201 (2015).
[Do2016] M. Donaire, Two-atom interaction energies with one atom in an excited state: van der Waals potentials versus level shifts, Phys. Rev. A 93, 052706 (2016).
[GoMLPo1966] L. Gomberoff, R. R. McLone, and E. A. Power, Long-Range Retarded Potentials between Molecules, J. Chem. Phys. 44, 4148–4153 (1966).
[BiGaJuAl1989] F. Biraben, J.-C. Garreau, L. Julien, and M. Allegrini, New Measurement of the Rydberg Constant by Two-Photon Spectroscopy of Hydrogen Rydberg States, Phys. Rev. Lett. 62, 621 (1989).
[BeEtAl1997] B. de Beauvoir, F. Nez, L. Julien, B. Cagnac, F. Biraben, D. Touahri, L. Hilico, O. Acef, A. Clairon, and J. J. Zondy, Absolute Frequency Measurement of the 2S–8S/D Transitions in Hydrogen and Deuterium: New Determination of the Rydberg Constant, Phys. Rev. Lett. 78, 440–443 (1997).
[ScEtAl1999] C. Schwob, L. Jozefowski, B. de Beauvoir, L. Hilico, F. Nez, L. Julien, F. Biraben, O. Acef, J. J. Zondy, and A. Clairon, Optical Frequency Measurement of the 2S–12D Transitions in Hydrogen and Deuterium: Rydberg Constant and Lamb Shift Determinations, Phys. Rev. Lett. 82, 4960–4963 (1999); Erratum: Phys. Rev. Lett. 86, 4193 (2001).
[dV2002] J. C. deVries, Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA (2002), https://dspace.mit.edu/handle/1721.1/4108.
[BeLiPi1982vol4] V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii, Quantum Electrodynamics, Volume 4 of the Course on Theoretical Physics, 2nd ed. (Pergamon Press, Oxford, UK, 1982).
[JeAdDe2017suppl] U. D. Jentschura, C. M. Adhikari, and V. Debierre, Long-Range Tails in Interactions of Excited-State Atoms: Mixing Terms, Notes and Derivations [Supplementary Material for Physical Review Letters] (2017).
[Be1947] H. A. Bethe, The Electromagnetic Shift of Energy Levels, Phys. Rev. 72, 339–341 (1947).
[BaSu1978] R. Barbieri and J. Sucher, General Theory of Radiative Corrections to Atomic Decay Rates, Nucl. Phys. B 134, 155–168 (1978).
[BeSa1957] H. A. Bethe and E. E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms (Springer, Berlin, 1957).
[AdEtAl2017vdWiii] C. M. Adhikari et al., Long-range interactions of hydrogen atoms in excited states. III. nS–1S interactions for n ≥ 3, in preparation (2017).
[PoEtAl2010CREMA] R. Pohl et al. (CREMA Collaboration), The size of the proton, Nature (London) 466, 213–216 (2010).
[PoEtAl2016CREMA] R. Pohl et al. (CREMA Collaboration), Laser spectroscopy of muonic deuterium, Science 353, 669–673 (2016).
[MoNeTa2016] P. J. Mohr, D. B. Newell, and B. N. Taylor, CODATA Recommended Values of the Fundamental Physical Constants: 2014, Rev. Mod. Phys. 88, 035009 (2016).
http://arxiv.org/abs/1704.08151v1
{ "authors": [ "U. D. Jentschura", "C. M. Adhikari", "V. Debierre" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170426145840", "title": "Virtual Resonant Emission and Oscillatory Long-Range Tails in van der Waals Interactions of Excited States: QED Treatment and Applications" }
The Graovac-Pisanski index of armchair nanotubes

Niko Tratnik^a, Petra Žigert Pleteršek^a,b

^a Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
^b Faculty of Chemistry and Chemical Engineering, University of Maribor, Slovenia
e-mail: [email protected], [email protected]

Abstract. The Graovac-Pisanski index, which is also called the modified Wiener index, considers the symmetries and the distances in molecular graphs. Carbon nanotubes are molecules made of carbon with a cylindrical structure, possessing unusual and valuable properties. In a mathematical model we can consider them as a subgraph of a hexagonal lattice embedded on a cylinder with some vertices being identified. In the present paper, we investigate the automorphisms and the orbits of armchair nanotubes and derive closed formulas for their Graovac-Pisanski index.

Key words: modified Wiener index; Graovac-Pisanski index; armchair nanotube; carbon nanotube; graph distance; automorphism group

§ INTRODUCTION

Theoretical molecular descriptors are graph invariants that play an important role in chemistry, pharmaceutical sciences, etc. The most famous molecular descriptor is the Wiener index, introduced in 1947 <cit.>. The Graovac-Pisanski index is a molecular descriptor that considers symmetries and distances in a graph. It measures how far the vertices of a graph are moved on the average by its automorphisms. The Graovac-Pisanski index was introduced by Graovac and Pisanski in 1991 <cit.> under the name modified Wiener index. However, the name modified Wiener index was later used for different variations of the Wiener index <cit.>. Therefore, we use the name Graovac-Pisanski index, as suggested by Ghorbani and Klavžar in <cit.>.

Carbon nanotubes are carbon compounds with a cylindrical structure, first observed in 1991 <cit.>. The extremely large ratio of length to diameter causes unusual properties of these molecules, which are valuable for nanotechnology, electronics, optics and other fields of materials science and technology. Carbon nanotubes can be open-ended or closed-ended. Open-ended single-walled carbon nanotubes are also called tubulenes.

It was shown in <cit.> that the quotient of the Wiener index and the Graovac-Pisanski index is strongly correlated with the topological efficiency for some nanostructures. The topological efficiency was introduced in <cit.> as a tool for the classification of the stability of molecules. For recent studies on the Graovac-Pisanski index of some molecular graphs and nanostructures see also <cit.>. Moreover, the Graovac-Pisanski index of zig-zag nanotubes was computed in <cit.>. We use similar ideas to compute this index for armchair nanotubes, but in some places our computation is more difficult and requires some additional insights.

In the present paper we first describe the automorphisms of armchair nanotubes and compute the orbits under the natural action of the automorphism group on the set of vertices of a graph. In the second part, the Graovac-Pisanski index for these nanotubes is computed. For this purpose, different cases according to the number of layers and the width of a nanotube are considered. Final results are then gathered in Table <ref>.

§ PRELIMINARIES

Unless stated otherwise, the graphs considered in this paper are finite and connected. The distance d_G(x,y) between vertices x and y of a graph G is the length of a shortest path between x and y in G. We also write d(x,y) for d_G(x,y).
Furthermore, if S ⊆ V(G) and x ∈ V(G), we define d(x,S) = ∑_{y∈S} d(x,y). The Wiener index of a graph G is defined as W(G) = (1/2) ∑_{u∈V(G)} ∑_{v∈V(G)} d_G(u,v). Moreover, if S ⊆ V(G), then W(S) = (1/2) ∑_{u∈S} ∑_{v∈S} d_G(u,v).

An isomorphism of graphs G and H with |E(G)| = |E(H)| is a bijection f between the vertex sets of G and H, f: V(G) → V(H), such that for any two vertices u and v of G, if u and v are adjacent in G then f(u) and f(v) are adjacent in H. When G and H are the same graph, the function f is called an automorphism of G. The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph G, under the composition operation, forms a group Aut(G), which is called the automorphism group of the graph G.

The Graovac-Pisanski index of a graph G, denoted Ŵ(G), is defined as

Ŵ(G) = |V(G)| / (2 |Aut(G)|) ∑_{u∈V(G)} ∑_{α∈Aut(G)} d_G(u, α(u)).

Next, we repeat some important concepts from group theory. If G is a group and X is a set, then a group action ϕ of G on X is a function ϕ: G × X → X that satisfies the following: ϕ(e,x) = x for any x ∈ X (here, e is the neutral element of G) and ϕ(gh,x) = ϕ(g,ϕ(h,x)) for all g,h ∈ G and x ∈ X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G, i.e. the set {ϕ(g,x) | g ∈ G}. If G is a graph and Aut(G) the automorphism group, then ϕ: Aut(G) × V(G) → V(G), defined by ϕ(α,u) = α(u) for any α ∈ Aut(G), u ∈ V(G), is called the natural action of the group Aut(G) on V(G).

It was shown in <cit.> that if V_1, …, V_t are the orbits under the natural action of the group Aut(G) on V(G), then

Ŵ(G) = |V(G)| ∑_{i=1}^{t} (1/|V_i|) W(V_i).

We also introduce W'(G) = ∑_{i=1}^{t} W(V_i), which is the sum of the Wiener indices of the orbits of G.

The dihedral group D_n is the group of symmetries of a regular polygon with n sides. Therefore, the group D_n has 2n elements. The cyclic group ℤ_n is a group that is generated by a single element of order n. Given groups G and H, the direct product G × H is defined as follows. The underlying set is the Cartesian product G × H and the binary operation on G × H is defined component-wise: (g_1,h_1)(g_2,h_2) = (g_1g_2,h_1h_2) for (g_1,h_1),(g_2,h_2) ∈ G × H. If G and H are groups, then a group isomorphism is a bijective function f: G → H such that for all u and v in G it holds that f(uv) = f(u)f(v).
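To illustrate the preceding definitions on a toy example of our own, consider the path P_3 on the vertices a, b, c with edges ab and bc. Then Aut(P_3) = {id, σ} ≅ ℤ_2, where σ exchanges a and c and fixes b. The definition gives Ŵ(P_3) = (3/(2·2))·[(0+2) + (0+0) + (0+2)] = 3. The orbits of the natural action are V_1 = {a,c} and V_2 = {b}, with W(V_1) = d(a,c) = 2 and W(V_2) = 0, so the orbit formula returns the same value, Ŵ(P_3) = 3·(2/2 + 0/1) = 3, and W'(P_3) = 2. Note that Ŵ(P_3) = 3 < W(P_3) = 4.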
Finally, we will formally define open-ended carbon nanotubes, also called tubulenes (see <cit.>). Choose any lattice point in the hexagonal lattice as the origin O. Let a_1 and a_2 be the two basic lattice vectors. Choose a vector OA = n a_1 + m a_2 such that n and m are two integers and |n|+|m| > 1, nm ≠ −1. Draw two straight lines L_1 and L_2 passing through O and A perpendicular to OA, respectively. By rolling up the hexagonal strip between L_1 and L_2 and gluing L_1 and L_2 such that A and O superimpose, we can obtain a hexagonal tessellation ℋ𝒯 of the cylinder. L_1 and L_2 indicate the direction of the axis of the cylinder. Using the terminology of graph theory, a tubulene T is defined to be the finite graph induced by all the hexagons of ℋ𝒯 that lie between c_1 and c_2, where c_1 and c_2 are two vertex-disjoint cycles of ℋ𝒯 encircling the axis of the cylinder. The vector OA is called the chiral vector of T and the cycles c_1 and c_2 are the two open ends of T. For any tubulene T, if its chiral vector is n a_1 + m a_2, T will be called an (n,m)-type tubulene, see Figure <ref>. If T is an (n,m)-type tubulene where n = m, we call it an armchair tubulene.

§ ARMCHAIR TUBULENES AND THEIR AUTOMORPHISMS

Let T be an armchair tubulene such that c_1 and c_2 are the shortest possible cycles encircling the axis of the cylinder and such that there is the same number of hexagons in every column of hexagons (see Figure <ref>). If T has n vertical layers of hexagons, each containing exactly p hexagons, then we denote it by AT(n,p). Obviously, n must be an even number. Note that AT(n,p) is an (n/2,n/2)-type tubulene. We always assume that n ≥ 2 and p ≥ 1. Moreover, let C_1 and C_2 be the subgraphs of AT(n,p) induced by c_1 and c_2, respectively. Obviously, AT(n,p) has p+1 layers of vertices and every layer has two types of vertices, i.e. type 0 and type 1. In the figures the vertices of type 0 always lie lower than the vertices of type 1. The set of vertices of type k in layer i is denoted by V^k_i. Moreover, let the vertices in V^k_i be denoted as follows: V^k_i = {v^k_i,0, …, v^k_i,n-1}. See Figure <ref> for an example. In this section, we determine the orbits under the natural action of the group Aut(AT(n,p)) on the set V(AT(n,p)). First, one lemma is needed.

Let φ: V(C_1) → V(C_i) be an isomorphism between the subgraphs C_1 and C_i, where i ∈ {1,2}. Then there is exactly one automorphism φ̄: V(AT(n,p)) → V(AT(n,p)) such that φ̄(x) = φ(x) for any x ∈ V(C_1).

Let φ: V(C_1) → V(C_i) be an isomorphism, where i ∈ {1,2}. For any x ∈ V(C_1) = V^0_0 ∪ V^1_0 we define φ̄(x) = φ(x). In the rest of the proof we will define the function φ̄ step by step in such a way that every edge is mapped to an edge and φ̄ is a bijection. First let x ∈ V^0_1. Then there is exactly one y ∈ V^1_0 such that x and y are adjacent. Since the degree of y is 3, let y_1 and y_2 be the other two neighbours of y in AT(n,p). Obviously, φ̄(y), φ̄(y_1), and φ̄(y_2) are already defined and it holds that φ̄(y_1) and φ̄(y_2) are both adjacent to φ̄(y). Since the degree of φ̄(y) is 3, we define φ̄(x) to be the neighbour of φ̄(y) different from φ̄(y_1) and φ̄(y_2). This can be done for any x ∈ V^0_1. Now let x ∈ V^1_1. Then there is exactly one vertex y ∈ V^0_1 such that y is adjacent to x. Let y_1 and y_2 be the other two neighbours of y. It is easy to see that φ̄(y), φ̄(y_1), and φ̄(y_2) are already defined. Also, the degree of φ̄(y) is 3. Therefore, we define φ̄(x) to be the neighbour of φ̄(y) different from φ̄(y_1) and φ̄(y_2). This can be done for any x ∈ V^1_1. With the procedure above we have defined the function φ̄ on the set of vertices V^0_0 ∪ V^1_0 ∪ V^0_1 ∪ V^1_1 such that for any two adjacent vertices x, y ∈ V^0_0 ∪ V^1_0 ∪ V^0_1 ∪ V^1_1 it holds that φ̄(x) and φ̄(y) are also adjacent. Using induction, we can define the function φ̄ on the set V(AT(n,p)) such that for any two adjacent vertices x, y ∈ V(AT(n,p)) it holds that φ̄(x) and φ̄(y) are adjacent. Since φ̄ is also bijective, it is an automorphism of the graph AT(n,p). It follows from the construction that φ̄ is also unique. Therefore, the proof is complete.

Finally, we obtain the orbits under the natural action of the group Aut(AT(n,p)) on the set V(AT(n,p)).

The orbits under the natural action of the group Aut(AT(n,p)) on the set V(AT(n,p)) are:

1. if p is odd,
O^0_i = V^0_i ∪ V^1_{p-i}, i ∈ {0, …, (p-1)/2},
O^1_i = V^1_i ∪ V^0_{p-i}, i ∈ {0, …, (p-1)/2};

2. if p is even,
O^0_i = V^0_i ∪ V^1_{p-i}, i ∈ {0, …, (p-2)/2},
O^1_i = V^1_i ∪ V^0_{p-i}, i ∈ {0, …, (p-2)/2},
O_{p/2} = V^0_{p/2} ∪ V^1_{p/2}.
It follows from the proof of Lemma <ref> that for any vertex x of type k in layer i, where i ∈ {0, …, p}, k ∈ {0,1}, and any vertex y in layer i of type k or in layer p−i of type 1−k, there is an automorphism that maps x to y. We notice that this also works when p is even and i = p/2, which means that if x ∈ V^0_{p/2} and y ∈ V^1_{p/2}, there is an automorphism that maps x to y. Also, if x is in layer i and y is in layer j, j ≠ i, j ≠ p−i, the distance from x to C_1 or C_2, i.e. min{d(x,C_1), d(x,C_2)}, cannot be the same as the distance from y to C_1 or C_2, i.e. min{d(y,C_1), d(y,C_2)}. Therefore, there is no automorphism that maps x to y. Moreover, if x ∈ V^0_i and y ∈ V^1_i or y ∈ V^0_{p−i}, where i ∈ {0, …, p}, i ≠ p/2, then the numbers min{d(x,C_1), d(x,C_2)} and min{d(y,C_1), d(y,C_2)} cannot be the same. Again, there is no automorphism that maps x to y. Therefore, the proof is complete.

Lemma <ref> claims that any isomorphism between the subgraphs C_1 and C_i, where i ∈ {1,2}, can be extended to an automorphism of the graph AT(n,p). In the next proposition we show the other direction.

Let φ: V(AT(n,p)) → V(AT(n,p)) be an automorphism. Then the function φ': V(C_1) → φ(V(C_1)), φ'(x) = φ(x) for x ∈ V(C_1), defines an automorphism of C_1 or an isomorphism from C_1 to C_2.

The graph AT(n,p) contains exactly two disjoint cycles of length 2n with exactly n vertices of degree 2 in the graph AT(n,p). These two are C_1 and C_2. Therefore, the automorphism φ maps C_1 either to C_1 or to C_2, and the proof is complete.

Hence, we obtain that all the automorphisms of the graph AT(n,p) can be obtained by finding all the automorphisms of the subgraph C_1 and all the isomorphisms from the subgraph C_1 to the subgraph C_2. It is easy to see that the automorphism group of the subgraph C_1 is isomorphic to the dihedral group D_{n/2}. Moreover, any isomorphism from C_1 to C_2 can be obtained as the composition of an automorphism of the subgraph C_1 and a fixed isomorphism from C_1 to C_2. Therefore, we state the following conjecture.

Let AT(n,p) be an armchair tubulene. The automorphism group of the graph AT(n,p) is isomorphic to the direct product of the dihedral group D_{n/2} and the cyclic group ℤ_2.

§ THE GRAOVAC-PISANSKI INDEX OF ARMCHAIR TUBULENES

In this section, we calculate the Graovac-Pisanski index of armchair tubulenes. We have to consider the following four cases. The first case is explained in detail, while for the remaining cases only the important results are given. We always denote by u an arbitrary element of V^0_0 and by v an arbitrary element of V^1_0.

* p is even and 4 | n: It is enough to compute W(O^0_0) and W(O^1_0) since, for example, W(O^0_1) of the graph AT(n,p) is exactly W(O^0_0) of the graph AT(n,p−2) (the graph AT(n,p−2) is a convex subgraph of the graph AT(n,p)). Besides that, we need to calculate W(O_{p/2}). Since the graph induced on the vertices in O_{p/2} is an isometric cycle of length 2n, we have W(O_{p/2}) = n³. Next, we need to calculate d(u,V^0_0); therefore, we consider distances between some vertices on the cycle of length 2n, see Figure <ref>. Note that the thick vertices represent the vertices in the set V^0_0. Therefore,

d(u,V^0_0) = ∑_{i=0}^{n/4-1}(3+4i) + ∑_{i=0}^{n/4-1}(4+4i) + ∑_{i=0}^{n/4-1}(1+4i) + ∑_{i=0}^{n/4-2}(4+4i) = n²/2.

Obviously, d(v,V^1_0) = d(u,V^0_0) = n²/2.
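As a quick sanity check of the last formula (our own verification): for n = 8 the four sums give (3+7) + (4+8) + (1+5) + 4 = 10 + 12 + 6 + 4 = 32 = 8²/2, and for n = 4 they give 3 + 4 + 1 + 0 = 8 = 4²/2 (the last sum being empty), as claimed.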
To determine d(u,V^1_p) and W(O^0_0), we consider two cases.

* n ≤ 4p+4: In this case, we can draw two lines a and b, see Figure <ref>. All n vertices of V^1_p are between lines a and b or near lines a and b (at most 4 vertices). It is easy to observe that a shortest path from vertex u to some vertex x ∈ V^1_p can be obtained by joining a path following line a or line b and a vertical path. Therefore, the distance from u to the vertex directly above u equals 2p+1 and the distance increases by 1 for every next vertex in V^1_p (in both directions). For an example see Figure <ref>. Hence, we get

d(u,V^1_p) = (2p+1) + 2 ∑_{i=1}^{(n-2)/2}(2p+1+i) + (2p+1+n/2) = n²/4 + 2np + n.

Therefore, d(u,O^0_0) = d(u,V^0_0) + d(u,V^1_p) = 3n²/4 + 2np + n and since every vertex in O^0_0 has an equivalent position, we deduce

W(O^0_0) = (1/2)·|O^0_0|·d(u,O^0_0) = (2n/2)(3n²/4 + 2np + n) = n(3n²/4 + 2np + n).

* n > 4p+4: In this case, we also draw two lines a and b as before. There are exactly 4p vertices of V^1_p between lines a and b, exactly 4 vertices (2 on each side) of V^1_p near lines a and b, and n−4p−4 other vertices. See Figure <ref>. We can notice that the distance from u to the vertex directly above u is 2p+1 and that the distance from u increases by 1 (in both directions) for every next vertex among the other 4p+3 vertices that are between or near lines a and b. Afterwards, for the remaining n−4p−4 vertices the increase of the distance from u alternates between 3 and 1 in both directions. Therefore, we get

d(u,V^1_p) = (2p+1) + 2 ∑_{i=1}^{(4p+2)/2}(2p+1+i) + (2p+1+2p+2) + ∑_{i=0}^{(n-4p-8)/4}(4p+5+4i) + ∑_{i=0}^{(n-4p-8)/4}(4p+6+4i) + ∑_{i=0}^{(n-4p-8)/4}(4p+6+4i) + ∑_{i=0}^{(n-4p-8)/4}(4p+7+4i) = n²/2 + p(4p+4).

Consequently, d(u,O^0_0) = d(u,V^0_0) + d(u,V^1_p) = n² + 4p² + 4p and since every vertex in O^0_0 has an equivalent position, we obtain

W(O^0_0) = (1/2)·|O^0_0|·d(u,O^0_0) = (2n/2)(n² + 4p² + 4p) = n(n² + 4p² + 4p).

To compute W(O^1_0), we also consider two cases.

* n ≤ 4p: Similarly as before, we can draw two lines a and b as shown in Figure <ref>. All n vertices of V^0_p are between lines a and b or near the lines a and b (at most 4 vertices). It is easy to observe that the distance from vertex v to the vertex directly above v equals 2p−1 and that the distance increases by 1 for every next vertex in V^0_p (in both directions). For an example see Figure <ref>. Hence, we get

d(v,V^0_p) = (2p−1) + 2 ∑_{i=1}^{(n-2)/2}(2p−1+i) + (2p−1+n/2) = n²/4 + 2np − n.

Therefore, d(v,O^1_0) = d(v,V^1_0) + d(v,V^0_p) = 3n²/4 + 2np − n and since every vertex in O^1_0 has an equivalent position, we deduce

W(O^1_0) = (1/2)·|O^1_0|·d(v,O^1_0) = (2n/2)(3n²/4 + 2np − n) = n(3n²/4 + 2np − n).

* n > 4p: In this case, we also draw two lines a and b as in the previous case. There are exactly 4p−4 vertices of V^0_p between lines a and b, exactly 4 vertices (2 on each side) of V^0_p near lines a and b, and n−4p other vertices. We can notice that the distance from v to the vertex directly above v is 2p−1 and that the distance from v increases by 1 (in both directions) for every next vertex among the other 4p−1 vertices that are between or near lines a and b. Afterwards, for the remaining n−4p vertices the increase of the distance from v alternates between 3 and 1 in both directions. Therefore, we get

d(v,V^0_p) = (2p−1) + 2 ∑_{i=1}^{(4p-2)/2}(2p−1+i) + (2p−1+2p) + ∑_{i=0}^{(n-4p-4)/4}(4p+1+4i) + ∑_{i=0}^{(n-4p-4)/4}(4p+2+4i) + ∑_{i=0}^{(n-4p-4)/4}(4p+2+4i) + ∑_{i=0}^{(n-4p-4)/4}(4p+3+4i) = n²/2 + p(4p−4).
Consequently, d(v,O^1_0) = d(v,V^1_0) + d(v,V^0_p) = n² + 4p² − 4p and since every vertex in O^1_0 has an equivalent position, we get

W(O^1_0) = (1/2)·|O^1_0|·d(v,O^1_0) = (2n/2)(n² + 4p² − 4p) = n(n² + 4p² − 4p).

Putting all the results together, we obtain Table <ref>. To compute Ŵ(AT(n,p)), we use Formula <ref>. First define the following functions:

f_1(p) = n(3n²/4 + 2np + n),
f_2(p) = n(n² + 4p² + 4p),
g_1(p) = n(3n²/4 + 2np − n),
g_2(p) = n(n² + 4p² − 4p).

One can easily notice that W(O^0_i) = f_1(p−2i) if n ≤ 4(p−2i)+4 = 4p−8i+4 and W(O^0_i) = f_2(p−2i) if n > 4p−8i+4 (and similarly for W(O^1_i)). Now consider the following four cases.

(a) n > 4p+4. It follows that W'(AT(n,p)) = W(O_{p/2}) + ∑_{i=1}^{p/2} f_2(2i) + ∑_{i=1}^{p/2} g_2(2i).

(b) n = 4p+4. For p ≥ 4 it follows that W'(AT(n,p)) = W(O_{p/2}) + ∑_{i=1}^{(p-2)/2} f_2(2i) + f_1(p) + ∑_{i=1}^{p/2} g_2(2i). The case p = 2 can easily be computed in a similar way.

(c) n ≤ 4p and 8 | n. For n ≥ 16 it follows that W'(AT(n,p)) = W(O_{p/2}) + ∑_{i=1}^{(n-8)/8} f_2(2i) + ∑_{i=n/8}^{p/2} f_1(2i) + ∑_{i=1}^{(n-8)/8} g_2(2i) + ∑_{i=n/8}^{p/2} g_1(2i). The case n = 8 can easily be computed in a similar way.

(d) n ≤ 4p and 8 | (n−4). For n ≥ 20 it follows that W'(AT(n,p)) = W(O_{p/2}) + ∑_{i=1}^{(n-12)/8} f_2(2i) + ∑_{i=(n-4)/8}^{p/2} f_1(2i) + ∑_{i=1}^{(n-4)/8} g_2(2i) + ∑_{i=(n+4)/8}^{p/2} g_1(2i). The cases n = 12 and n = 4 can easily be computed in a similar way.

To compute all the sums from the previous cases, we use a computer program (a sketch of such a computation is given at the end of this section). Since |V(AT(n,p))| = 2n(p+1) and the cardinality of any orbit of AT(n,p) is 2n, it is easy to see that Ŵ(AT(n,p)) = (p+1) W'(AT(n,p)). The results are presented in the first part of Table <ref>.

* p is even and 4 | (n−2): All the details are similar to case 1. Therefore, only the important results are presented in Table <ref>. We also have W(O_{p/2}) = n³. The values of the Graovac-Pisanski index in this case are shown in the second part of Table <ref>.

* p is odd and 4 | n: All the details are similar to case 1. It turns out that the distances are the same as for even p. Therefore, we can consider Table <ref>. The values of the Graovac-Pisanski index in this case are shown in the third part of Table <ref>.

* p is odd and 4 | (n−2): All the details are similar to case 1. As above, it turns out that the distances are the same as for even p. Therefore, we can consider Table <ref>. The values of the Graovac-Pisanski index in this case are shown in the last part of Table <ref>.

Finally, the results for the Graovac-Pisanski index of AT(n,p) are shown in Table <ref>. The results for some small cases are omitted.
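The following minimal Python sketch (ours; the program actually used by the authors is not included in the paper) makes the case analysis above explicit for p even and 4 | n, in the generic ranges where formulas (a)-(d) apply verbatim:

def gp_index_armchair(n, p):
    # Sketch: W'(AT(n,p)) via cases (a)-(d), then hat-W = (p+1) * W'.
    assert p % 2 == 0 and n % 4 == 0
    f1 = lambda q: n * (3 * n**2 // 4 + 2 * n * q + n)
    f2 = lambda q: n * (n**2 + 4 * q**2 + 4 * q)
    g1 = lambda q: n * (3 * n**2 // 4 + 2 * n * q - n)
    g2 = lambda q: n * (n**2 + 4 * q**2 - 4 * q)
    W_mid = n**3  # W(O_{p/2}): the middle orbit induces an isometric 2n-cycle
    if n > 4 * p + 4:      # case (a)
        Wp = W_mid + sum(f2(2*i) + g2(2*i) for i in range(1, p//2 + 1))
    elif n == 4 * p + 4:   # case (b), p >= 4
        Wp = (W_mid + sum(f2(2*i) for i in range(1, (p-2)//2 + 1)) + f1(p)
              + sum(g2(2*i) for i in range(1, p//2 + 1)))
    elif n % 8 == 0:       # case (c): n <= 4p and 8 | n, n >= 16
        Wp = (W_mid + sum(f2(2*i) for i in range(1, (n-8)//8 + 1))
              + sum(f1(2*i) for i in range(n//8, p//2 + 1))
              + sum(g2(2*i) for i in range(1, (n-8)//8 + 1))
              + sum(g1(2*i) for i in range(n//8, p//2 + 1)))
    else:                  # case (d): n <= 4p and 8 | (n-4), n >= 20
        Wp = (W_mid + sum(f2(2*i) for i in range(1, (n-12)//8 + 1))
              + sum(f1(2*i) for i in range((n-4)//8, p//2 + 1))
              + sum(g2(2*i) for i in range(1, (n-4)//8 + 1))
              + sum(g1(2*i) for i in range((n+4)//8, p//2 + 1)))
    return (p + 1) * Wp

For instance, gp_index_armchair(16, 4) evaluates Ŵ(AT(16,4)) through case (c); the small cases excluded above (p = 2 in case (b), n = 8 in case (c), n ∈ {4,12} in case (d)) are treated separately in the text.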
§ ACKNOWLEDGMENT

The author Petra Žigert Pleteršek acknowledges the financial support from the Slovenian Research Agency (research core funding No. P1-0297). The author Niko Tratnik was financially supported by the Slovenian Research Agency.

[ashrafi_diu] A. R. Ashrafi, M. V. Diudea (Eds.), Distance, symmetry, and topology in carbon nanomaterials, Springer International Publishing, Switzerland, 2016.
[ashrafi_koo_diu] A. R. Ashrafi, F. Koorepazan-Moftakhar, M. V. Diudea, Topological symmetry of nanostructures, Fuller. Nanotub. Car. N. 23 (2015) 989–1000.
[ashrafi_koo_diu1] A. R. Ashrafi, F. Koorepazan-Moftakhar, M. V. Diudea, O. Ori, Graovac-Pisanski index of fullerenes and fullerene-like molecules, Fuller. Nanotub. Car. N. 24 (2016) 779–785.
[ashrafi_sha] A. R. Ashrafi, H. Shabani, The modified Wiener index of some graph operations, Ars Math. Contemp. 11 (2016) 277–284.
[cataldo] F. Cataldo, O. Ori, S. Iglesias-Groth, Topological lattice descriptors of graphene sheets with fullerene-like nanostructures, Mol. Sim. 36 (2010) 341–353.
[ghorbani] M. Ghorbani, S. Klavžar, Modified Wiener index via canonical metric representation, and some fullerene patches, Ars Math. Contemp. 11 (2016) 247–254.
[graovac] A. Graovac, T. Pisanski, On the Wiener index of a graph, J. Math. Chem. 8 (1991) 53–62.
[gu-vu] I. Gutman, D. Vukičević, J. Žerovnik, A class of modified Wiener indices, Croat. Chem. Acta 77 (2004) 103–109.
[ii] S. Iijima, Helical microtubules of graphitic carbon, Nature 354 (1991) 56–58.
[koo_ashrafi3] F. Koorepazan-Moftakhar, A. R. Ashrafi, Combination of distance and symmetry in some molecular graphs, Appl. Math. Comput. 281 (2016) 223–232.
[koo_ashrafi] F. Koorepazan-Moftakhar, A. R. Ashrafi, Distance under symmetry, MATCH Commun. Math. Comput. Chem. 74 (2015) 259–272.
[koo_ashrafi2] F. Koorepazan-Moftakhar, A. R. Ashrafi, Z. Mehranian, Symmetry and PI polynomials of C_50+10n fullerenes, MATCH Commun. Math. Comput. Chem. 71 (2014) 425–436.
[li-li] M. Liu, B. Liu, A survey on recent results of variable Wiener index, MATCH Commun. Math. Comput. Chem. 69 (2013) 491–520.
[ni-tr] S. Nikolić, N. Trinajstić, M. Randić, Wiener index revisited, Chem. Phys. Lett. 333 (2001) 319–321.
[ori] O. Ori, F. Cataldo, A. Graovac, Topological ranking of C_28 fullerenes reactivity, Fuller. Nanotub. Car. N. 17 (2009) 308–323.
[sa] H. Sachs, P. Hansen, M. Zheng, Kekulé count in tubular hydrocarbons, MATCH Commun. Math. Comput. Chem. 33 (1996) 169–241.
[sha_ashrafi] H. Shabani, A. R. Ashrafi, Symmetry-moderated Wiener index, MATCH Commun. Math. Comput. Chem. 76 (2016) 3–18.
[tratnik] N. Tratnik, The Graovac-Pisanski index of zig-zag tubulenes and the generalized cut method, J. Math. Chem. (2017), doi:10.1007/s10910-017-0749-5.
[Wiener] H. Wiener, Structural determination of paraffin boiling points, J. Amer. Chem. Soc. 69 (1947) 17–20.
http://arxiv.org/abs/1704.08474v1
{ "authors": [ "Niko Tratnik", "Petra Žigert Pleteršek" ], "categories": [ "math.CO", "92E10, 05C12, 05C25, 05C90" ], "primary_category": "math.CO", "published": "20170427083519", "title": "The Graovac-Pisanski Index of Armchair Nanotubes" }
Convex cocompact actions in real projective geometry
Jeff Danciger, François Guéritaud, Fanny Kassel
====================================================

Department of Mathematics, The University of Texas at Austin, 1 University Station C1200, Austin, TX 78712, USA ([email protected])
CNRS and IRMA, Université de Strasbourg, 7 rue René Descartes, 67084 Strasbourg Cedex, France ([email protected])
CNRS and Laboratoire Alexander Grothendieck, Institut des Hautes Études Scientifiques, Université Paris-Saclay, 35 route de Chartres, 91440 Bures-sur-Yvette, France ([email protected])

This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC starting grant DiGGeS, grant agreement No 715982). The authors also acknowledge support from the GEAR Network, funded by the National Science Foundation under grants DMS 1107452, 1107263, and 1107367 (“RNMS: GEometric structures And Representation varieties”). J.D. was partially supported by an Alfred P. Sloan Foundation fellowship, and by the National Science Foundation under grant DMS 1510254. F.G. and F.K. were partially supported by the Agence Nationale de la Recherche through the Labex CEMPI (ANR-11-LABX-0007-01) and the grant DynGeo (ANR-16-CE40-0025-01). Part of this work was completed while F.K. was in residence at the MSRI in Berkeley, California, for the program Geometric Group Theory (Fall 2016) supported by NSF grant DMS 1440140, and at the INI in Cambridge, UK, for the program Nonpositive curvature group actions and cohomology (Spring 2017) supported by EPSRC grant EP/K032208/1.

We study a notion of convex cocompactness for discrete subgroups of the projective general linear group acting (not necessarily irreducibly) on real projective space, and give various characterizations. A convex cocompact group in this sense need not be word hyperbolic, but we show that it still has some of the good properties of classical convex cocompact subgroups in rank-one Lie groups. Extending our earlier work <cit.> from the context of projective orthogonal groups, we show that for word hyperbolic groups preserving a properly convex open set in projective space, the above general notion of convex cocompactness is equivalent to a stronger convex cocompactness condition studied by Crampon–Marquis, and also to the condition that the natural inclusion be a projective Anosov representation. We investigate examples.

§ INTRODUCTION

In the classical setting of semisimple Lie groups G of real rank one, a discrete subgroup of G is said to be convex cocompact if it acts cocompactly on some nonempty closed convex subset of the Riemannian symmetric space G/K of G. Such subgroups have been abundantly studied, in particular in the context of Kleinian groups and real hyperbolic geometry, where there is a rich world of examples.
They are known to display good geometric and dynamical behavior. On the other hand, in higher-rank semisimple Lie groups G, the condition that a discrete subgroup Γ act cocompactly on some nonempty convex subset of the Riemannian symmetric space G/K turns out to be quite restrictive: Kleiner–Leeb <cit.> and Quint <cit.> proved, for example, that if G is simple and such a subgroup Γ is Zariski-dense in G, then it is in fact a uniform lattice of G.

The notion of an Anosov representation of a word hyperbolic group in a higher-rank semisimple Lie group G, introduced by Labourie <cit.> and generalized by Guichard–Wienhard <cit.>, is a much more flexible notion, which has earned a central role in higher Teichmüller–Thurston theory; see <cit.>. Anosov representations are defined, not in terms of convex subsets of the Riemannian symmetric space G/K, but rather in terms of a dynamical condition for the action on a certain flag variety, that is, on a compact homogeneous space G/P. This dynamical condition guarantees many desirable analogies with convex cocompact subgroups in rank one: see <cit.>. It also allows for the definition of certain interesting geometric structures associated to Anosov representations: see <cit.>. However, natural convex geometric structures associated to Anosov representations have been lacking in general. Such structures could allow geometric intuition to bear more fully on Anosov representations, making them more accessible through familiar geometric constructions such as convex fundamental domains, and potentially unlocking new sources of examples. While there is a rich supply of examples of Anosov representations into higher-rank Lie groups in the case of surface groups or free groups, it has proven difficult to construct examples for more complicated word hyperbolic groups.

One of the goals of this paper is to show that, when G = (^n) is a projective linear group, there are, in many cases, natural convex cocompact geometric structures modeled on (^n) associated to Anosov representations into G. The idea is the following: to any discrete subgroup Γ of G = (^n) are associated two limit sets Λ_Γ and Λ_Γ^*. Recall that an element g ∈ (^n) is said to be proximal in the real projective space (^n) if it admits a unique attracting fixed point in (^n) (see Section <ref>). The proximal limit set Λ_Γ of Γ in (^n) is defined as the closure of the set of attracting fixed points of proximal elements of Γ in (^n). Similarly, we consider the proximal limit set Λ_Γ^* of Γ in the dual projective space ((^n)^*) for the dual action; we can view it as a set of projective hyperplanes in (^n). Suppose the complement (^n) ∖ ⋃_{H∈Λ_Γ^*} H is nonempty. Its connected components are open sets which (as soon as Λ_Γ^* ≠ ∅) are convex, in the sense that they are contained and convex in some affine chart of (^n); when Γ acts irreducibly on (^n), these components are even properly convex, in the sense that they are convex and bounded in some affine chart. If one of these convex open sets, call it Ω_max, is properly convex and invariant under Γ, then the action of Γ on Ω_max is necessarily properly discontinuous (see Section <ref>), the set Λ_Γ is contained in the boundary ∂Ω_max, and it makes sense to consider the convex hull of Λ_Γ in Ω_max: this is a closed convex subset of Ω_max.
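As an elementary illustration of these notions (an example of our own, and a reducible one), take n = 3 and the cyclic group Γ generated by g = [diag(λ_1,λ_2,λ_3)] with λ_1 > λ_2 > λ_3 > 0. Both g and g^{-1} are proximal, with respective attracting fixed points [e_1] and [e_3], so Λ_Γ = {[e_1],[e_3]}; for the dual action the attracting fixed points correspond to the hyperplanes spanned by e_1,e_2 and by e_2,e_3, so Λ_Γ^* consists of these two projective lines. Their complement has two connected components, each convex in a suitable affine chart; but Γ does not act irreducibly here, and indeed neither component is properly convex: the component {[x:y:z] : xz > 0} contains the whole projective line {x = z} except the point [0:1:0], so its closure contains a full projective line.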
When Γ ↪ G is (projective) Anosov, we prove that the action of Γ on this convex set is cocompact. Further, we prove that Γ satisfies a stronger notion of projective convex cocompactness introduced by Crampon–Marquis <cit.>.

Conversely, we show that convex cocompact subgroups of (^n) in the sense of <cit.> always give rise to Anosov representations, which enables us to give new examples of Anosov representations and study their deformation spaces by constructing these geometric structures directly. In <cit.> we had previously established this close connection between convex cocompactness in projective space and Anosov representations in the case of irreducible representations valued in a projective orthogonal group (p,q).

One context where such a connection between Anosov representations and convex projective structures has been known for some time is the deformation theory of real projective surfaces, for G = (^3) <cit.>. More generally, it follows from work of Benoist <cit.> that if a discrete subgroup Γ of G = (^n) divides (i.e. acts cocompactly on) a strictly convex open subset Ω of (^n), then Γ is word hyperbolic and the natural inclusion Γ ↪ G is Anosov. In this particular case, Λ_Γ is the boundary of Ω and Λ_Γ^* is the collection of supporting hyperplanes to Ω. Benoist <cit.> also found examples of discrete subgroups of (^n), acting irreducibly on (^n), which divide properly convex open sets that are not strictly convex, for 4 ≤ n ≤ 7; these subgroups are not word hyperbolic.

In this paper we study a broad notion of convex cocompactness for discrete subgroups Γ of (^n) acting on (^n) which simultaneously generalizes Crampon–Marquis's notion and Benoist's divisible convex sets <cit.>. While we mainly take the point of view of examining limit sets and convex hulls in projective space, we also show that this notion of convex cocompactness is characterized by the property that Γ is (up to finite index) the holonomy group of a compact convex projective manifold with strictly convex boundary. Cooper–Long–Tillmann <cit.> have studied the deformation theory of such manifolds, and their work implies that this notion is stable under small deformations of Γ in (^n). We show further that it is stable under deformation into larger projective general linear groups (^n+n'), after the model of quasi-Fuchsian deformations of Fuchsian groups. When nontrivial deformations exist, this yields examples of nonhyperbolic discrete subgroups which satisfy our convex cocompactness property but do not divide a properly convex open set.
Let Γ⊂(V) be an infinite discrete subgroup.* Let Ω be a Γ-invariant properly convex open subset of (V). The action of Γ on Ω is strongly convex cocompact if Ω is strictly convex with boundary of class C^1 and for some x ∈Ω, the convex hull in Ω of the orbital limit set Γ· x∩∂Ω is nonempty and has compact quotient by Γ. * The group Γ is strongly convex cocompact in (V) if it admits a strongly convex cocompact action on some properly convex open subset Ω of (V).In the definition above, the convex hull in Ω of the orbital limit set Γ· x∩∂Ω is defined as the intersection of Ω with the convex hull of Γ· x∩∂Ω in the convex set Ω. Moreover, Γ· x∩∂Ω is independent of the choice of x ∈Ω since Ω is strictly convex, see Lemma <ref>. The following observation is an easy consequence.The action of Γ on Ω is strongly convex cocompact in the sense of Definition <ref> if and only if Ω is strictly convex with boundary of class C^1 and some nonempty closed convex subset of Ω has compact quotient by Γ. For V=^p,1 with p≥ 2, any discrete subgroup of Isom(^p)=(p,1)⊂(V) which is convex cocompact in the usual sense is strongly convex cocompact in (V), taking Ω = { [v] ∈(^p+1)  | ⟨ v,v⟩_p,1 < 0} to be the projective model of ^p (where ⟨·,·⟩_p,1 is a symmetric bilinear form of signature (p,1) on ^p+1). The first main result of this paper is a close connection between strong convex cocompactness in (V) and Anosov representations into (V). Let P_1 (P_n-1) be the stabilizer in G=(V) of a line (hyperplane) of V=^n; it is a maximal proper parabolic subgroup of G, and G/P_1 (G/P_n-1) identifies with (V) (with the dual projective space (V^*)). We shall think of (V^*) as the space of projective hyperplanes in (V). Let Γ be a word hyperbolic group, with Gromov boundary ∂_∞Γ. A P_1-Anosov representation (sometimes also called a projective Anosov representation) of Γ into G is a representation ρ : Γ→ G for which there exist two continuous, ρ-equivariant boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*) which *are compatible, ξ(η)∈ξ^*(η) for all η∈∂_∞Γ,*are transverse, ξ(η)∉ξ^*(η') for all η≠η' in ∂_∞Γ,*have an associated flow with some uniform contraction/expansion property described in <cit.>.We do not state condition <ref> precisely, since we will use in place of it a simple condition on eigenvalues or singular values described in Definition <ref> and Fact <ref> below, taken from <cit.>. A consequence of <ref> is that every infinite-order element of ρ(Γ) is proximal in (V) and in (V^*), and that the image ξ(∂_∞Γ) (ξ^*(∂_∞Γ)) of the boundary map is the proximal limit set Λ_Γ (Λ_Γ^*) of ρ(Γ) in (V) ((V^*)). By <cit.>, if ρ is irreducible then condition <ref> is automatically satisfied as soon as <ref> and <ref> are, but this is not true in general: see <cit.>.It is well known (see <cit.>) that a discrete subgroup of (p,1) is convex cocompact in the classical sense if and only if it is word hyperbolic and the natural inclusion Γ↪(p,1) ↪(^p+1) is P_1-Anosov. In this paper, we prove the following higher-rank generalization, where Λ_Γ^* denotes the proximal limit set of Γ in (V^*), viewed as a set of projective hyperplanes in (V).Let Γ be an infinite discrete subgroup of G=(V). Suppose the set (V) ∖⋃_H∈Λ_Γ^* H admits a Γ-invariant connected component (this is always the case if Γ preserves a nonempty properly convex open subset of (V), see Proposition <ref>). 
Then the following are equivalent: *Γ is strongly convex cocompact in (V);*Γ is word hyperbolic and the natural inclusion Γ↪ G is P_1-Anosov.As mentioned above, for Γ acting cocompactly on a strictly convex open set (which is then a divisible strictly convex set), the implication (<ref>) ⇒ (<ref>) follows from work of Benoist <cit.>.* The fact that a strongly convex cocompact group is word hyperbolic is due to Crampon–Marquis <cit.>. *In the case where Γ acts irreducibly on (V) (does not preserve any nontrivial projective subspace of (V)) and is contained in (p,q) ⊂(V) for some p,q∈^* with p+q=n=(V), Theorem <ref> was first proved in our earlier work <cit.>. In that case we actually gave a more precise version of Theorem <ref> involving the notion of negative/positive proximal limit set: see Section <ref> below. Our proof of Theorem <ref> in the present paper uses many of the ideas of <cit.>. One main improvement here is the treatment of duality in the general case where there is no nonzero Γ-invariant quadratic form. *In independent and simultaneous work, Zimmer <cit.> also extends <cit.> by studying a slightly different notion for actions of discrete subgroups Γ of (V) on properly convex open subsets Ω of (V): by definition <cit.>, a subgroup Γ of Aut(Ω) is regular convex cocompact if it acts cocompactly on some nonempty, Γ-invariant, closed, properly convex subsetof Ω such that every extreme point ofin ∂Ω is a C^1 extreme point of Ω. By <cit.>, if Γ⊂Aut(Ω) is regular convex cocompact and acts irreducibly on (V), then Γ is word hyperbolic and the natural inclusion Γ↪(V) is P_1-Anosov. Conversely, by <cit.>, any irreducible P_1-Anosov representation Γ↪(V) can be composed with an irreducible representation (V)→(V'), for some larger vector space V', so that Γ becomes regular convex cocompact in Aut(Ω') for some Ω'⊂(V'). It follows from Theorem <ref> below that Zimmer's notion of regular convex cocompactness is equivalent to our notion of strong convex cocompactness even in the case that Γ does not act irreducibly on (V) (condition <ref> of Theorem <ref> implies regular convex cocompactness, which implies condition <ref> of Theorem <ref>). * In this paper, unlike in <cit.> or <cit.>, we do not assume Γ to act irreducibly on (V). This makes the notion of Anosov representation slightly more involved (condition <ref> above is not automatic), and also adds some subtleties to the notion of convex cocompactness (see Remarks <ref> and <ref>). We note that there exist strongly convex cocompact groups in (V) which do not act irreducibly on (V), and whose Zariski closure is not even reductive (which means that there is an invariant projective subspace (W) of (V), but no complementary subspace W' of W such that (W') is invariant): see Section <ref>. This contrasts with the case of divisible convex sets described in <cit.>.For n=(V)≥ 3, there exist P_1-Anosov representations ρ : Γ→ G=(V) which do not preserve any nonempty properly convex subset of (V): see <cit.>. However, by <cit.>, if ∂_∞Γ is connected, then any P_1-Anosov representation ρ valued in (p,q)⊂(^p+q) preserves a nonempty properly convex open subset of (^p+q), hence ρ(Γ) is strongly convex cocompact in (^p+q) by Theorem <ref>. 
Extending this, <cit.> gives sufficient group-theoretic conditions on Γ for P_1-Anosov representations Γ↪(V) to be regular convex cocompact in Aut(Ω) (in the sense of Remark <ref>.<ref>) for some Ω⊂(V).§.§ Convex projective structures for Anosov representations We can apply Theorem <ref> to show that some well-known families of Anosov representations, such as Hitchin representations in odd dimension, naturally give rise to convex cocompact real projective manifolds.Let Γ be a closed surface group of genus ≥ 2 and ρ : Γ→(^n) a Hitchin representation. *If n is odd, then ρ(Γ) is strongly convex cocompact in (^n).*If n is even, then ρ(Γ) is not strongly convex cocompact in (^n); in fact it does not even preserve any nonempty properly convex subset of (^n).For statement (<ref>), see also <cit.>. This extends <cit.>, about Hitchin representations valued in (k+1,k)⊂(^2k+1). * The case n=3 of Proposition <ref>.(<ref>) is due to Choi–Goldman <cit.>, who proved that the holonomy representations of the convex projective structures on a given closed hyperbolic surface S are exactly the elements of the Hitchin component of (π_1(S),(^3)). * Guichard–Wienhard <cit.> associated different geometric structures to Hitchin representations into (^n). For even n, their geometric structures are modeled on (^n) but can never be convex (see Proposition <ref>.(<ref>)). For odd n, their geometric structures are modeled on the space ℱ_1,n-1⊂(^n)×((^n)^*) of pairs (ℓ,H) where ℓ is a line of ^n and H a hyperplane containing ℓ; these geometric structures lack a notion of convexity but live on manifolds that are closed, unlike the manifolds ρ(Γ)\Ω of Proposition <ref>.(<ref>) for n>3.We refer to Proposition <ref> for a more general statement on convex structures for connected open sets of Anosov representations. §.§ New examples of Anosov representations We can also use the implication (<ref>) ⇒ (<ref>) of Theorem <ref> to obtain new examples of Anosov representations by constructing explicit strongly convex cocompact groups in (V). Following this strategy, in <cit.> we showed that every word hyperbolic right-angled Coxeter group W admits reflection-group representations into some (V) which are P_1-Anosov. Extending this approach, in <cit.> we give an explicit description of the deformation spaces of such representations, for Coxeter groups that are not necessarily right-angled. §.§ General convex cocompactness in (V)We now discuss generalizations of Definition <ref> where the properly convex open set Ω is not assumed to have any regularity at the boundary. These cover a larger class of groups, not necessarily word hyperbolic.Given Remark <ref>, a naive generalization of Definition <ref> that immediately comes to mind is the following.An infinite discrete subgroup Γ of (V) is naively convex cocompact in (V) if it preserves a properly convex open subset Ω of (V) and acts cocompactly on some nonempty closed convex subsetof Ω. However, the class of naively convex cocompact subgroups of (V) is not stable under small deformations: see Remark <ref>.<ref>. This is linked to the fact that if Γ and Ω are as in Definition <ref> with Ω not strictly convex, then the set of accumulation points of a Γ-orbit of Ω may depend on the orbit (see Example <ref>).To address this issue, we introduce a notion of limit set that does not depend on a choice of orbit. Let Γ⊂(V) be an infinite discrete subgroup and let Ω be a properly convex open subset of (V) invariant under Γ. 
The full orbital limit set Λ_Ω(Γ) of Γ in Ω is the union of all accumulation points of all Γ-orbits in Ω. The full orbital limit set Λ_Ω(Γ) always contains the proximal limit set Λ_Γ (Lemma <ref>), but may be larger. Using this new limit set, we can introduce another generalization of Definition <ref> which is slightly stronger and has better properties than Definition <ref>: here is the main definition of the paper.

Let Γ be an infinite discrete subgroup of (V).
* Let Ω be a nonempty Γ-invariant properly convex open subset of (V). The action of Γ on Ω is convex cocompact if the convex hull 𝒞_Ω(Γ) of the full orbital limit set Λ_Ω(Γ) in Ω is nonempty and has compact quotient by Γ.
* Γ is convex cocompact in (V) if it admits a convex cocompact action on some nonempty properly convex open subset Ω of (V).

Note that Λ_Ω(Γ) need not be closed in general, hence its convex hull in Ω need not be either. However, if the action of Γ on Ω is convex cocompact in the above sense, then Λ_Ω(Γ) is closed (see Corollary <ref>.(<ref>)). In the setting of Definition <ref>, the set Γ\𝒞_Ω(Γ) is a compact convex subset of the projective manifold (or orbifold) Γ\Ω which contains all the topology and which is minimal (Lemma <ref>); we shall call it the convex core of Γ\Ω. By analogy, we shall also call 𝒞_Ω(Γ) the convex core of Ω for Γ. We shall see (Corollary <ref>.(<ref>)) that if Γ acts cocompactly on some closed convex subset of Ω containing Λ_Ω(Γ), then the action of Γ on Ω is convex cocompact in the sense of Definition <ref>.

If Γ is strongly convex cocompact in (V), then it is convex cocompact in (V). This is immediate from the definitions since when Ω is strictly convex, the full orbital limit set Λ_Ω(Γ) coincides with the accumulation set of a single Γ-orbit of Ω. See Theorem <ref> below for a refinement.

If Γ divides (acts cocompactly on) a nonempty properly convex open subset Ω of (V), then 𝒞_Ω(Γ) = Ω and Γ is convex cocompact in (V) (see Corollary <ref>.(<ref>)). There exist divisible convex sets which are not strictly convex, yielding convex cocompact groups that are not strongly convex cocompact in (V): see Example <ref> for a basic case. As discussed in Section <ref>, an important class of examples is the so-called symmetric ones where Ω is a higher-rank symmetric space. The first other indecomposable examples were constructed by Benoist <cit.> for 4 ≤ (V) ≤ 7, and further examples were recently constructed for (V) = 4 in <cit.> and for 5 ≤ (V) ≤ 7 in <cit.>.

In Theorem <ref> we shall give several characterizations of convex cocompactness in (V). In particular, we shall prove that the class of subgroups of (V) that are convex cocompact in (V) is precisely the class of holonomy groups of compact properly convex projective orbifolds with strictly convex boundary; this ensures that this class of subgroups is stable under small deformations, using <cit.>. Before stating this and other results, let us make the connection with the context of Theorem <ref> (strong convex cocompactness).

§.§ Word hyperbolic convex cocompact groups in (V)

Let us recall the following terminology of Benoist <cit.>.

Let 𝒞 be a properly convex subset of (V). A properly embedded triangle (or PET for short) in 𝒞 is a nondegenerate planar triangle whose interior is contained in 𝒞, but whose edges and vertices are contained in the ideal boundary cl(𝒞) ∖ 𝒞.

Let Ω be a properly convex open subset of (V) and Γ a discrete subgroup of (V) preserving Ω. A PET in Ω plays an analogous role to a flat in a Riemannian manifold. In particular, the presence of a PET in Ω obstructs δ-hyperbolicity of the Hilbert metric on Ω (see Section <ref>). Hence, by the Švarc–Milnor lemma, a PET in Ω obstructs word hyperbolicity of Γ in the case that Γ divides Ω. In fact, even the presence of a segment in ∂Ω obstructs word hyperbolicity: Benoist <cit.> proved that if Γ divides Ω, then Γ is word hyperbolic if and only if Ω is strictly convex.
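A basic example of this phenomenon (standard, and of the kind alluded to in Example <ref> above): let Ω = {[x:y:z] : x,y,z > 0} be the open projective triangle. The group Γ ≅ ℤ² of projective diagonal matrices [diag(e^a,e^b,e^c)], with (a,b,c) ranging over a lattice in the plane {a+b+c = 0}, divides Ω: the Hilbert metric on Ω is quasi-isometric to the Euclidean plane and Γ acts by translations. Here Ω is properly convex but not strictly convex, the closed triangle cl(Ω) is a PET in Ω, and Γ is convex cocompact in (^3) while, being virtually ℤ², not word hyperbolic.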
The direct analogue of Benoist's theorem is not true for convex cocompact actions: for instance, Schottky subgroups of (2,1) act convex cocompactly on properly convex domains Ω ⊃ ^2 that are not strictly convex (see Figure <ref>). However, we show that for Γ acting convex cocompactly on Ω, the hyperbolicity of Γ is determined by the convexity behavior of ∂Ω at the full orbital limit set or, relatedly, by the presence of PETs in the convex core. Here is an expanded version of Theorem <ref>.

Let Γ be an infinite discrete subgroup of (V). Then the following are equivalent:
* Γ is strongly convex cocompact in (V) (Definition <ref>);
* Γ is convex cocompact in (V) (Definition <ref>) and word hyperbolic;
* Γ is convex cocompact in (V) and for any properly convex open set Ω ⊂ (V) on which Γ acts convex cocompactly, the full orbital limit set Λ_Ω(Γ) does not contain any nontrivial projective line segment;
* Γ is convex cocompact in (V) and for some nonempty properly convex open set Ω ⊂ (V) on which Γ acts convex cocompactly, the convex core 𝒞_Ω(Γ) does not contain a PET;
* Γ preserves a properly convex open subset Ω of (V) and acts cocompactly on some closed convex subset 𝒞 of Ω with nonempty interior such that the ideal boundary cl(𝒞) ∖ 𝒞 = cl(𝒞) ∩ ∂Ω does not contain any nontrivial projective line segment;
* Γ is word hyperbolic, the inclusion Γ ↪ (V) is P_1-Anosov, and Γ preserves some nonempty properly convex open subset of (V);
* Γ is word hyperbolic, the inclusion Γ ↪ (V) is P_1-Anosov, and the set (V) ∖ ⋃_{H∈Λ_Γ^*} H admits a Γ-invariant connected component.

When these conditions hold, there is equality between the following four sets:
* the orbital limit set Γ·x ∩ ∂Ω of Γ in any Γ-invariant properly convex open subset Ω of (V) which is strictly convex with boundary of class C^1, as in Definition <ref> (condition <ref>);
* the full orbital limit set Λ_Ω(Γ) for any properly convex open set Ω on which Γ acts convex cocompactly as in Definition <ref> (conditions <ref>, <ref>, <ref>);
* the segment-free set cl(𝒞) ∖ 𝒞 for any closed convex subset 𝒞 on which Γ acts cocompactly in a Γ-invariant properly convex open set Ω (condition <ref>);
* the image of the boundary map ξ : ∂_∞Γ → (V) of the Anosov representation Γ ↪ (V) (conditions <ref> and <ref>), which is also the proximal limit set Λ_Γ of Γ in (V) (Definition <ref>).

§.§ Properties of convex cocompact groups in (V)

We show that, even in the case of nonhyperbolic discrete groups, the notion of convex cocompactness in (V) (Definition <ref>) still has some of the nice properties enjoyed by Anosov representations and convex cocompact subgroups of rank-one Lie groups. In particular, we prove the following.

Let Γ be an infinite discrete subgroup of G = (V).
*The group Γ is convex cocompact in (V) if and only if it is convex cocompact in (V^*) (for the dual action).*If Γ is convex cocompact in (V), then it is finitely generated and quasi-isometrically embedded in G.*If Γ is convex cocompact in (V), then Γ does not contain any unipotent element.*If Γ is convex cocompact in (V), then there is a neighborhood 𝒰⊂(Γ,G) of the natural inclusion such that any ρ∈𝒰 is injec­tive and discrete with image ρ(Γ) convex cocompact in (V).*Let V'=^n' and let i :⁠^±(V) ↪^±(V ⊕ V') be the natural inclusion acting trivially on the second factor. If Γ is convex cocompact in (V), then i(Γ̂) is convex cocompact in (V ⊕ V'), where Γ̂ is the lift of Γ to ^±(V) that preserves a properly convex cone of V lifting Ω (see Remark <ref>).*If the semisimplification (Definition <ref>) of the natural inclusion Γ↪(V) is injective and its image is convex cocompact in (V), then Γ is convex cocompact in (V).The equivalence <ref> ⇔ <ref> of Theorem <ref> shows that Theorem <ref> still holds if all the occurrences of “convex cocompact” are replaced by “strongly convex cocompact”. While some of the properties of Theorem <ref> are proved directly from Definition <ref>, others will be most naturally established using alternative characterizations of convex cocompactness in (V) (Theorem <ref>). We refer to Propositions <ref> and <ref> for further operations that preserve convex cocompactness, strengthening properties <ref> and <ref>. Properties <ref> and <ref> give a source for many new examples of convex cocompact groups by starting with known examples in (V), including them into (V ⊕ V'), and then deforming. This generalizes the picture of Fuchsian groups in (2,1) being deformed into quasi-Fuchsian groups in (3,1). For instance, we can use Example <ref> to obtain examples of nonhyperbolic discrete subgroups of (^m) (where m=dim(V⊕ V')) which are convex cocompact in (^m) and act irreducibly on (^m), but do not divide any properly convex open subset of (^m). These groups Γ are quasi-isometrically embedded in (^m) and structurally stable (there is a neighborhood of the natural inclusion in (Γ,(^m)) which consists entirely of injective representations) without being word hyperbolic; compare with Sullivan <cit.>. We refer to Section <ref> for more details.§.§ Holonomy groups of convex projective orbifolds We now give alternative characterizations of convex cocompact subgroups in (V). These characterizations are motivated by a familiar picture in rank one: namely, if Ω = ^p is the p-dimensional real hyperbolic space and Γ is a convex cocompact torsion-free subgroup of (p,1) = Isom(^p), then any closed uniform neighborhood _𝗎𝗇𝗂𝖿 of the convex core _Ω(Γ) has strictly convex boundary and the quotient Γ\_𝗎𝗇𝗂𝖿 is a compact hyperbolic manifold with strictly convex boundary. Let us fix some terminology and notation in order to discuss the appropriate generalization of this picture to real projective geometry.Letbe a nonempty convex subset of (V) (not necessarily open nor closed, possibly with empty interior). * The frontier of  is ():=∖().* A supporting hyperplane of  at a point x∈() is a projective hyperplane H such thatx ∈ H ∩ and ∖ H is connected (possibly empty).* The ideal boundary of  is :=∖.* The nonideal boundary of  is := ∖() = ()∖. Note that ifis open, then = () and = ∅; in this case, it is common in the literature to denotesimply by ∂.* The convex set  has strictly convex nonideal boundary if every point x ∈ is an extreme point of . 
* The convex set  has C^1 nonideal boundary if it has a unique supporting hyperplane at each point x ∈.*The convex set  has bisaturated boundary if for any supporting hyperplane H of , the set H ∩⊂() is either fully contained inor fully contained in .Letbe a properly convex subset of (V) on which the discrete subgroup Γ acts properly discontinuously and cocompactly. To simplify this intuitive discussion, assume Γ torsion-free, so that the quotient M = Γ\ is a compact properly convex projective manifold, possibly with boundary. The group Γ is called the holonomy group of M. We show that if the boundary ∂ M = Γ\ of M is assumed to have some regularity, then Γ is convex cocompact in (V) and, conversely, convex cocompact subgroups in (V) are holonomy groups of properly convex projective manifolds (or more generally orbifolds, if Γ has torsion)whose boundaries are well-behaved. Let Γ be an infinite discrete subgroup of (V). Then the following are equivalent: *Γ is convex cocompact in (V) (Definition <ref>): it acts convex cocompactly on a nonempty properly convex open subset Ω of (V); *Γ acts properly discontinuously and cocompactly on a nonempty properly convex set _𝖻𝗂𝗌𝖺𝗍⊂(V) with bisaturated boundary; *Γ acts properly discontinuously and cocompactly on a nonempty properly convex set _𝗌𝗍𝗋𝗂𝖼𝗍⊂(V) with strictly convex nonideal boundary; *Γ acts properly discontinuously and cocompactly on a nonempty properly convex set _𝗌𝗆𝗈𝗈𝗍𝗁⊂(V) with strictly convex C^1 nonideal boundary.When these conditions hold, _𝖻𝗂𝗌𝖺𝗍, _𝗌𝗍𝗋𝗂𝖼𝗍, _𝗌𝗆𝗈𝗈𝗍𝗁 can be chosen equal, with Ω satisfying _Ω(Γ) = _𝗌𝗆𝗈𝗈𝗍𝗁.Given a properly discontinuous and cocompact action of a group Γ on a properly convex set , Theorem <ref> interprets the convex cocompactness of Γ in terms of the regularity of  at the nonideal boundary , whereas Theorem <ref> (equivalence <ref> ⇔ <ref>) and Theorem <ref> together interpret the strong convex cocompactness of Γ in terms of the regularity of  at bothand . The equivalence (<ref>) ⇔ (<ref>) of Theorem <ref> states that convex cocompact torsion-free subgroups in (V) are precisely the holonomy groups of compact properly convex projective manifolds with strictly convex boundary. Cooper–Long–Tillmann <cit.> studied the deformation theory of such manifolds (allowing in addition certain types of cusps). They established a holonomy principle, which reduces to the following statement in the absence of cusps: the holonomy groups of compact properly convex projective manifolds with strictly convex boundary form an open subset of the representation space of Γ. This result, together with Theorem <ref>, gives the stability property <ref> of Theorem <ref>. For the case of divisible convex sets, see <cit.>.The requirement that _𝖻𝗂𝗌𝖺𝗍 have bisaturated boundary in condition (<ref>) can be seen as a “coarse” version of the strict convexity of the nonideal boundary in (<ref>): the prototype situation is that of a convex cocompact real hyperbolic manifold, with _𝖻𝗂𝗌𝖺𝗍 a closed polyhedral neighborhood of the convex core, see the top right panel of Figure <ref>. Both of these conditions behave well under a natural duality operation generalizing that of open properly convex sets (see Section <ref>). 
The equivalence (<ref>) ⇔ (<ref>) will be used to prove the duality property <ref> of Theorem <ref>.We note that without some form of strengthened convexity requirement on the boundary, properly convex projective manifolds have a poorly-behaved deformation theory: see Remark <ref>.<ref>.If Γ acts strongly irreducibly on (V) (all finite-index subgroups of Γ act irreducibly) and if the equivalent conditions of Theorem <ref> hold, then _Ω(Γ) = _𝖻𝗂𝗌𝖺𝗍 = _𝗌𝗍𝗋𝗂𝖼𝗍 = _𝗌𝗆𝗈𝗈𝗍𝗁 for any Ω, _𝖻𝗂𝗌𝖺𝗍, _𝗌𝗍𝗋𝗂𝖼𝗍, _𝗌𝗆𝗈𝗈𝗍𝗁 as in conditions (<ref>), (<ref>), (<ref>), and (<ref>) respectively: see Section <ref>. In particular, this set then depends only on Γ. This is not necessarily the case if the action of Γ is not strongly irreducible: see Examples <ref> and <ref>.§.§ Convex cocompactness for subgroups of (p,q)As mentioned in Remark <ref>.<ref>, when Γ⊂(V) acts irreducibly on (V) and is contained in the subgroup (p,q) ⊂(V) of projective linear transformations that preserve a nondegenerate symmetric bilinear form ⟨·, ·⟩_p,q of some signature (p,q) on V=^n, a more precise version of Theorem <ref> was established in <cit.>. Here we remove the irreducibility assumption on Γ.For p,q∈^*, let ^p,q be ^p+q endowed with a symmetric bilinear form ⟨·, ·⟩_p,q of signature (p,q). The spaces^p,q-1 = { [v] ∈(^p,q)  | ⟨ v,v⟩_p,q < 0 } and^p-1,q = { [v] ∈(^p,q)  | ⟨ v,v⟩_p,q >0 }are the projective models for pseudo-Riemannian hyperbolic space of signature (p,q-1), and pseudo-Riemannian spherical space of signature (p-1,q). The geodesics of these two spaces are the (nonempty) intersections of the spaces with the projective lines of (^p+q). The group (p,q) acts transitively on ^p,q-1 and on ^p-1,q. Multiplying the form ⟨·, ·⟩_p,q by -1 produces a form of signature (q,p) and turns (p,q) into (q,p) and ^p-1,q into a copy of ^q,p-1, so we consider only the pseudo-Riemannian hyperbolic spaces ^p,q-1. We denote by ∂^p,q-1 the boundary of ^p,q-1, namely∂^p,q-1 = { [v] ∈(^p,q)  | ⟨ v,v ⟩_p,q = 0 }.We call a subset of ^p,q-1 convex (properly convex) if it is convex (properly convex) as a subset of (^p+q).Since the straight lines of (^p+q) are the geodesics of the pseudo-Riemannian metric on ^p,q-1, convexity in ^p,q-1 is an intrinsic notion.Letbe a closed convex subset of ^p,q-1. * Ifhas nonempty interior, thenis properly convex. Indeed, ifis not properly convex, then it contains a line ℓ of an affine chart ^p+q-1⊃⁠, andis a union of lines parallel to ℓ in that chart. All these lines must be tangent to ∂^p,q-1 at their common endpoint z ∈∂^p,q-1, which implies thatis contained in the hyperplane z^⊥ and has empty interior. * The ideal boundaryis the set of accumulation points ofin ∂^p,q-1. This set contains no projective line segment if and only if it is transverse, y∉ z^⊥ for all y≠ z in . * The nonideal boundary = () ∩^p,q-1 is the boundary ofin ^p,q-1 in the usual sense.*It is easy to see thathas bisaturated boundary if and only if a segment contained in ⊂^p,q-1 never extends to a full geodesic ray of ^p,q-1 — a form of coarse strict convexity for .The following definition was introduced in <cit.>, where it was studied for discrete subgroups Γ acting irreducibly on (^p+q). A discrete subgroup Γ of (p,q) is ^p,q-1-convex cocompact if it acts properly discontinuously with compact quotient on some closed properly convex subsetof ^p,q-1 with nonempty interior whose ideal boundary ⊂∂^p,q-1 does not contain any nontrivial projective line segment. 
If Γ acts irreducibly on (^p+q), then any nonempty Γ-invariant properly convex subset of ^p,q-1 has nonempty interior, and so the nonempty interior requirement in Definition <ref> may simply be replaced by the requirement thatbe nonempty. See Section <ref> for examples.For a word hyperbolic group Γ, we shall say that a representation ρ : Γ→(p,q) is P_1^p,q-Anosov if it is P_1-Anosov as a representation into (^p+q). We refer to <cit.> for further discussion of this notion.If the natural inclusion Γ↪(p,q) is P_1^p+q-Anosov, then the boundary map ξ takes values in ∂^p,q-1 and the hyperplane-valued boundary map ξ^* satisfies ξ^*(·) = ξ(·)^⊥ where z^⊥ denotes the orthogonal of z with respect to ⟨·, ·⟩_p,q; the image of ξ is the proximal limit set Λ_Γ of Γ in ∂^p,q-1 (Definition <ref> and Remark <ref>). Following <cit.>, we shall say that Λ_Γ is negative (positive) if it lifts to a cone of ^p,q∖{0} on which all inner products ⟨·,·⟩_p,q of noncollinear points are negative (positive); equivalently (see <cit.>), any three distinct points of Λ_Γ span a linear subspace of ^p,q of signature (2,1) ((1,2)). Note that Λ_Γ can be both negative and positive only if |Λ_Γ| = 2, Γ is virtually . With this notation, we prove the following, where Γ is not assumed to act irreducibly on (^p+q); for (<ref>) ⇔ (<ref>) and (2^±) ⇔ (3^±) ⇔ (5^±) ⇒ (4^±) in the irreducible case, see <cit.>.Let p,q∈^* and let Γ be an infinite discrete subgroup of (p,q). Then the following are equivalent: *Γ is convex cocompact in (^p+q) (Definition <ref>); *Γ is strongly convex cocompact in (^p+q) (Definition <ref>); *Γ is ^p,q-1-convex cocompact or ^q,p-1-convex cocompact (after identifying (p,q) with (q,p) as above).The following are also equivalent: *Γ is convex cocompact in (^p+q) and Λ_Γ⊂∂^p,q-1 is negative; *Γ is strongly convex cocompact in (^p+q) and Λ_Γ⊂∂^p,q-1 is negative; *Γ is ^p,q-1-convex cocompact; *Γ acts convex cocompactly on some nonempty properly convex open subset Ω of ^p,q-1; *Γ is word hyperbolic, the natural inclusion Γ↪(p,q) is P_1^p,q-Anosov, and Λ_Γ⊂∂^p,q-1 is negative.Similarly, the following are equivalent: *Γ is convex cocompact in (^p+q) and Λ_Γ⊂∂^p,q-1 is positive; *Γ is strongly convex cocompact in (^p+q) and Λ_Γ⊂∂^p,q-1 is positive; *Γ is ^q,p-1-convex cocompact (after identifying (p,q) with (q,p)); *Γ acts convex cocompactly on some nonempty properly convex open subset Ω of ^q,p-1 (after identifying (p,q) with (q,p)); *Γ is word hyperbolic, the natural inclusion Γ↪(p,q) is P_1^p,q-Anosov, and Λ_Γ⊂∂^p,q-1 is positive.It is not difficult to see that if 𝒯 is an open subset of (Γ,(p,q)) consisting entirely of P_1^p,q-Anosov representations, then the condition that Λ_ρ(Γ)⊂∂^p,q-1 be negative is both an open and closed condition for ρ∈𝒯 <cit.>. Therefore Theorem <ref> and the openness of the set of P_1^p,q-Anosov representations <cit.>implies the following.Let p,q∈^* and let Γ be a discrete group. * The set of representations with finite kernel and ^p,q-1-convex cocompact image is open in (Γ,(p,q)).* Let 𝒯 be a connected open subset of (Γ,(p,q)) consisting entirely of P_1^p,q-Anosov representations. If ρ(Γ) is ^p,q-1-convex cocompact for some ρ∈𝒯, then ρ(Γ) is ^p,q-1-convex cocompact for all ρ∈𝒯.Both statements also hold if “^p,q-1-convex cocompact” is replaced with “^q,p-1-convex cocompact.” It is also not difficult to see that if a closed subset Λ of ∂^p,q-1 is transverse (z^⊥∩Λ={z} for all z∈Λ) and connected, then it is negative or positive <cit.>. 
Therefore Theorem <ref> implies the following.Let Γ be a word hyperbolic group with connected boundary ∂_∞Γ, and let p,q∈^*. For any P_1^p,q-Anosov representation ρ : Γ→(p,q), the group ρ(Γ) is ^p,q-1-convex cocompact or ^q,p-1-convex cocompact (after identifying (p,q) with (q,p)). In the special case when q=2 (^p,q-1=AdS^p+1 is the Lorentzian anti-de Sitter space) and Γ is the fundamental group of a closed hyperbolic p-manifold, the equivalence <ref> ⇔ <ref> of Theorem <ref> follows from work of Mess <cit.> for p=2 and Barbot–Mérigot <cit.> for p≥ 3. As a consequence of Theorem <ref>, we obtain characterizations of ^p,q-1-convex cocompactness where the assumption on the ideal boundary ⊂∂^p,q-1 in Definition <ref> is replaced by various regularity conditions on the nonideal boundary ⊂^p,q-1. p,q∈^* and let Γ be an infinite discrete subgroup of (p,q). Then the following are equivalent: *Γ is ^p,q-1-convex cocompact: it acts properly discontinuously and cocompactly on a closed convex subsetof ^p,q-1 with nonempty interior whose ideal boundarydoes not contain any nontrivial projective line segment;*Γ acts properly discontinuously and cocompactly on a nonempty closed convex subset _𝖻𝗂𝗌𝖺𝗍 of ^p,q-1 whose boundary _𝖻𝗂𝗌𝖺𝗍 in ^p,q-1 does not contain any infinite geodesic line of ^p,q-1;*Γ acts properly discontinuously and cocompactly on a nonempty closed convex subset _𝗌𝗍𝗋𝗂𝖼𝗍 of ^p,q-1 whose boundary _𝗌𝗍𝗋𝗂𝖼𝗍 in ^p,q-1 is strictly convex;*Γ acts properly discontinuously and cocompactly on a nonempty closed convex set _𝗌𝗆𝗈𝗈𝗍𝗁 of ^p,q-1 whose boundary _𝗌𝗆𝗈𝗈𝗍𝗁 in ^p,q-1 is strictly convex and of class C^1.When these conditions hold, , _𝖻𝗂𝗌𝖺𝗍, _𝗌𝗍𝗋𝗂𝖼𝗍, and _𝗌𝗆𝗈𝗈𝗍𝗁 may be taken equal.§.§ Organization of the paper Section <ref> contains reminders about properly convex domains in projective space, the Cartan decomposition, and Anosov representations. In Section <ref> we establish some general facts about discrete group actions on convex subsets of (V). Section <ref> contains some basic examples and develops the basic theory of naively convex cocompact and convex cocompact subgroups of (V), which will be used throughout the paper.Sections <ref> to <ref> are devoted to the proofs of the main Theorems <ref> and <ref>, which contain Theorem <ref>. More precisely, in Section <ref> we prove the equivalence (<ref>) ⇔ (<ref>) of Theorem <ref>, study a notion of duality, and establish property <ref> of Theorem <ref>. In Section <ref> we prove the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> of Theorem <ref> by studying segments in the limit set.The connection with the Anosov property is made in Sections <ref> (<ref> ⇒ <ref>) and <ref> (<ref> ⇔ <ref> ⇒ <ref>). In Section <ref> we give a smoothing construction to address the remaining implications of Theorems <ref> and <ref>.In Section <ref> we establish properties <ref>–<ref> of Theorem <ref>. In Section <ref> we prove Theorems <ref> and <ref> on ^p,q-1-convex cocompactness. Section <ref> is devoted to more sophisticated examples, including Proposition <ref>. Finally, in Appendix <ref> we collect a few open questions on convex cocompact groups in (V); in Appendix <ref> we give a sharp statement about Hausdorff limits of balls in Hilbert geometry. §.§ Acknowledgements We are grateful to Yves Benoist, Sam Ballas, Gye-Seon Lee, Arielle Leitner, and Ludovic Marquis for helpful and inspiring discussions, and to Thierry Barbot for his encouragement. We thank Daryl Cooper for a useful reference. 
The first-named author thanks Steve Kerckhoff for discussions related to Example <ref> from several years ago, which proved valuable for understanding the general theory. We warmly thank Pierre-Louis Blayac for useful questions and comments during the revisions of this paper, which contributed in particular to clarifying the discussion on conical limit points. We are grateful to the referee for many valuable comments and suggestions which helped improve the paper. § REMINDERS§.§ Properly convex domains in projective spaceRecall that a subset ofprojective space is called convex if it is contained in and convex in some affine chart, and properly convex if its closure is convex.Let Ω be a properly convex open subset of (V), with boundary ∂Ω. Recall the Hilbert metric d_Ω on Ω:d_Ω(x,y) := 1/2log axybfor all distinct x,y∈Ω, whereis the cross-ratio on ^1(), normalized so that 01t∞=t, and where a,b are the intersection points of ∂Ω with the projective line through x and y, with a,x,y,b in this order. The metric space (Ω,d_Ω) is proper (closed balls are compact) and complete, and the groupAut(Ω) := {g∈(V)  |  g·Ω=Ω}acts on Ω by isometries for d_Ω. As a consequence, any discrete subgroup of Aut(Ω) acts properly discontinuously on Ω.It follows from the definition that if Ω_1 ⊂Ω_2 are nonempty properly convex open subsets of (V), then the corresponding Hilbert metrics satisfy d_Ω_1(x,y) ≥ d_Ω_2(x,y) for all x,y∈Ω_1. Let V^* be the dual vector space of V. By definition, the dual convex set of Ω isΩ^* := ({φ∈ V^*  | φ(v)<0∀ v∈Ω}),where Ω is the closure in V∖{0} of an open convex cone of V lifting Ω. The set Ω^* is a properly convex open subset of (V^*), which is preserved by the dual action of Aut(Ω) on (V^*).Straight lines (contained in projective lines) are always geodesics for the Hilbert metric d_Ω. When Ω is not strictly convex, there may be other geodesics as well. However, a biinfinite geodesic of (Ω,d_Ω) always has well-defined, distinct endpoints in ∂Ω, see <cit.>. §.§ Cartan decompositionThe group Ĝ=(V) admits the Cartan decomposition Ĝ=K̂exp(𝔞̂^+)K̂ where K̂=(n) and𝔞̂^+ := {diag(t_1,…,t_n)  |  t_1≥…≥ t_n}.This means that any ĝ∈Ĝ may be written ĝ = k̂_1exp(â)k̂_2 for some k̂_1,k̂_2∈K̂ and a unique â=diag(t_1,…,t_n)∈𝔞̂^+; for any 1≤ i≤ n, the real number t_i is the logarithm μ_i(ĝ) of the i-th largest singular value of ĝ. This induces a Cartan decomposition G=Kexp(𝔞^+)K with K=(n) and 𝔞^+=𝔞̂^+/, and for any 1≤ i<j≤ n a mapμ_i - μ_j : G=(V) ⟶^+.If ‖·‖_ V denotes the operator norm associated with the standard Euclidean norm on V=^n invariant under K̂=(n), then for any g∈ G with lift ĝ∈Ĝ we have(μ_1 - μ_n)(g) = log(‖ĝ‖_ V ‖ĝ^-1‖_ V).§.§ Proximality in projective spaceWe shall use the following classical terminology.An element g∈(V) is proximal in (V) ((V^*)) if it admits a unique attracting fixed point in (V) ((V^*)). Equivalently, any lift ĝ∈(V) of g has a unique complex eigenvalue of maximal (minimal) modulus, with multiplicity 1. This eigenvalue is necessarily real. For any ĝ∈(V), we denote by λ_1(ĝ)≥λ_2(ĝ)≥…≥λ_n(ĝ) the logarithms of the moduli of the complex eigenvalues of ĝ. For any 1≤ i<j≤ n, this induces a functionλ_i - λ_j : (V) ⟶^+.Thus, an element g∈(V) is proximal in (V) ((V^*)) if and only if(λ_1-⁠λ_2)(g)>⁠ 0 ((λ_n-1-λ_n)(g)>0). We shall use the following terminology.Let Γ be a discrete subgroup of (V). The proximal limit set of Γ in (V) is the closure Λ_Γ of the set of attracting fixed points of elements of Γ which are proximal in (V). 
When Γ is a discrete subgroup of (V) acting irreducibly on (V) and containing at least one proximal element, the proximal limit set Λ_Γ was first studied in <cit.>. In that setting, the action of Γ on Λ_Γ is minimal (any orbit is dense), and Λ_Γ is contained in any nonempty, closed, Γ-invariant subset of (V), by <cit.>.§.§ Anosov representationsLet P_1 (P_n-1) be the stabilizer in G=(V) of a line (hyperplane) of V; it is a maximal proper parabolic subgroup of G, and G/P_1 (G/P_n-1) identifies with (V) (with the dual projective space (V^*)). As in the introduction, we shall think of (V^*) as the space of projective hyperplanes in (V). The following is not the original definition from <cit.>, but an equivalent characterization taken from <cit.>.Let Γ be a word hyperbolic group. A representation ρ : Γ→ G=(V) is P_1-Anosov if there exist two continuous, ρ-equivariant boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*) such that *ξ and ξ^* are compatible, ξ(η)∈ξ^*(η) for all η∈∂_∞Γ;*ξ and ξ^* are transverse, ξ(η)∉ξ^*(η') for all η≠η' in ∂_∞Γ;*ξ and ξ^* are dynamics-preserving and there exist c,C>0 such that for any γ∈Γ,(λ_1-λ_2)(ρ(γ)) ≥ c ℓ_Γ(γ) - C,where ℓ_Γ : Γ→ is the translation length function of Γ in its Cayley graph (for some fixed choice of finite generating subset).In condition <ref> we use the notation λ_1-λ_2 from (<ref>). By dynamics-preserving we mean that for any γ∈Γ of infinite order, the element ρ(γ)∈ G is proximal and ξ (ξ^*) sends the attracting fixed point of γ in ∂_∞Γ to the attracting fixed point of ρ(γ) in (V) ((V^*)). In particular, the set ξ(∂_∞Γ) (ξ^*(∂_∞Γ)) is the proximal limit set (Definition <ref>) of ρ(Γ) in (V) ((V^*)). By <cit.>, if ρ is irreducible then condition <ref> is automatically satisfied as soon as <ref> and <ref> are, but this is not true in general (see <cit.>).If Γ is not elementary (not cyclic up to finite index), then the action of Γ on ∂_∞Γ is minimal, every orbit is dense; therefore the action of Γ on ξ(∂_∞Γ) and ξ^*(∂_∞Γ) is also minimal.We shall use a related characterization of Anosov representations, which we take from <cit.>; it also follows from <cit.>.Let Γ be a word hyperbolic group. A representation ρ : Γ→ G=(V) is P_1-Anosov if there exist two continuous, ρ-equivariant boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*)satisfying conditions <ref>–<ref> of Definition <ref>, as well as *ξ and ξ^* are dynamics-preserving and(μ_1-μ_2)(ρ(γ)) |γ|_Γ→ +∞⟶ +∞,where |·|_Γ : Γ→ is the word length function of Γ (for some fixed choice of finite generating subset).In condition <ref> we use the notation μ_1-μ_2 from (<ref>).By construction, the image of the boundary map ξ : ∂_∞Γ→(V) (ξ^* : ∂_∞Γ→(V^*)) of a P_1-Anosov representation ρ : Γ→(V) is the closure of the set of attracting fixed points of proximal elements of ρ(Γ) in (V) ((V^*). Here is a useful alternative description. We denote by (e_1,…,e_n) the standard basis of V = ^n, orthonormal for the inner product preserved by K̂ = (n).[<cit.>]Let Γ be a word hyperbolic group and ρ : Γ→(V) a P_1-Anosov representation with boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*). Let (γ_m)_m∈ be a sequence of elements of Γ converging to some η∈∂_∞Γ. For any m, choose k_m∈ K such that ρ(γ_m)∈ k_mexp(𝔞^+)K (see Section <ref>). Then, writing [e_n^*]:=(span(e_1,…,e_n-1)),{[ξ(η) = lim_m→ +∞ k_m· [e_1],;ξ^*(η) = lim_m→ +∞ k_m· [e_n^*]. 
].In particular, the image of ξ is the set of accumulation points in (V) of the set{ k_ρ(γ)· [e_1]| γ∈Γ} where γ∈ k_ρ(γ)exp(𝔞^+)K; the image of ξ^* is the set of accumulation points in (V^*) of { k_ρ(γ)· [e_n^*]| γ∈⁠Γ}.Here is an easy consequence of Fact <ref>.Let Γ be a word hyperbolic group, Γ' a subgroup of Γ, and ρ : Γ→(V) a representation. * If Γ' has finite index in Γ, then ρ is P_1-Anosov if and only if its restriction to Γ' is P_1-Anosov.* If Γ' is quasi-isometrically embedded in Γ and if ρ is P_1-Anosov, then the restriction of ρ to Γ' is P_1-Anosov. § BASIC FACTS: ACTIONS ON CONVEX SUBSETS OF (V)In this section we collect a few useful facts on actions of discrete subgroups of (V) on properly convex open subsets of (V). §.§ The full orbital limit set in the strictly convex case The following elementary observation was mentioned in Section <ref>.Let Γ be a discrete subgroup of (V) preserving a nonempty properly convex open subset Ω of (V). If Ω is strictly convex, then all Γ-orbits of Ω have the same accumulation points in ∂Ω.It is enough to prove that for any points x,y∈Ω, any accumulation point of Γ· x is also an accumulation point of Γ· y. Consider (γ_m) ∈Γ^ such that (γ_m· x) converges to some x_∞∈∂Ω.After possibly passing to a subsequence, (γ_m · y) converges to some y_∞∈∂Ω. By properness of the action of Γ on Ω, the limit [x_∞, y_∞] of the sequence of compact intervals (γ_m· [x,y]) is contained in ∂Ω. Strict convexity then implies that x_∞ = y_∞, and so x_∞ is also an accumulation point of Γ· y.§.§ Divergence for actions on properly convex cones We shall often use the following observation.Let Γ be a discrete subgroup of (V) preserving a nonempty properly convex open subset Ω of (V). Then there is a unique lift Γ̂ of Γ to ^±(V) that preserves a properly convex cone Ω of V lifting Ω.Here we say that a convex open cone of V is properly convex if its projection to (V) is properly convex in the sense of Section <ref>.The following observation will be useful in Sections <ref> and <ref>.Let Γ̂ be a discrete subgroup of ^±(V) preserving a properly convex open cone Ω in V. For any sequence (γ_m)_m∈ of pairwise distinct elements of Γ̂ and any nonzero vector v ∈Ω, the sequence (γ_m· v)_m∈ goes to infinity in V as m→ +∞.This divergence is uniform as v varies in a compact set 𝒦⊂Ω.Fix a compact subset 𝒦 of Ω. Let φ∈ V^* be a linear form which takes positive values on the closure of Ω. The set Ω∩⁠{φ=⁠ 1} is bounded, with compact boundary ℬ in V. By compactness of 𝒦 and ℬ, we can find 0<ε<1 such that for any v∈𝒦 and w∈ℬ the line through v/φ(v) and w intersects ℬ in a point w'≠ w such that v/φ(v) = tw + (1-t)w' for some t≥ε. For any m∈, we then have φ(γ_m· v)/φ(v) ≥ε φ(γ_m· w), and since this holds for any w∈ℬ we obtainφ(γ_m· v) ≥κ max_ℬ(φ∘γ_m)where κ := ε min_𝒦(φ) > 0. Thus it is sufficient to see that the maximum of φ∘γ_m over ℬ tends to infinity with m. By convexity, it is in fact sufficient to see that the maximum of φ∘γ_m over Ω∩{φ<1} tends to infinity with m. This follows from the fact that the set Ω∩{φ<1} is open, that the operator norm of γ_m∈Γ̂⊂End(V) goes to +∞, and that φ |_Ω is bounded above and below by positive multiples of any given norm of V. In the case that Γ̂ preserves a quadratic form on V, the fact that (γ_m· v)_m∈ goes to infinity as m→ +∞ implies that any accumulation point of ([γ_m · v])_m∈ in (V) is isotropic. 
We thus get the following corollary, which will be used in Section <ref>.For p,q∈^*, let Γ be an infinite discrete subgroup of (p,q) preserving a properly convex open subset Ω of (^p+q). Then the full orbital limit set _Ω(Γ) is contained in ∂^p,q-1.§.§ Comparison between the Hilbert and Euclidean metrics The following will be used later in this section, and in the proofs of Lemmas <ref> and <ref>.Let Ω⊂(V) be open, properly convex, contained in a Euclidean affine chart (^n-1,d_Euc) of (V). If R>0 is the diameter of Ω in (^n-1,d_Euc), then the natural inclusion defines an R/2-Lipschitz map(Ω,d_Ω) ⟶ (^n-1,d_Euc). We may assume n=2 up to restricting to an affine line intersecting Ω, and R=2 up to rescaling, so that Ω= ^1 = (-1,1) ⊂. Since 1/2log-10tanh(t)1=t, the arclength parametrization of Ω is then given by the map tanh:→ (-1,1), which is 1-Lipschitz.Let Ω be a properly convex open subset of (V). Let (x_m)_m∈ and (y_m)_m∈ be two sequences of points of Ω, and z∈∂Ω. If x_m→ z and d_Ω(x_m,y_m)→ 0, then y_m→ z.§.§ Closed ideal boundary The following elementary observation will be used in Sections <ref>, <ref>, and <ref>.Let Γ be an infinite discrete subgroup of (V) preserving a properly convex open subset Ω of (V) and a subsetof Ω. If Γ\ is closed in Γ\Ω, thenis closed in Ω and = ∩∂Ω; this is the case in particular if the action of Γ on  is cocompact. In Sections <ref> and <ref> we shall consider convex subsetsof (V) that are not assumed to be subsets of a properly convex open set Ω. An arbitrary convex subsetof (V) (not necessarily open nor closed) is locally compact for the induced topology if and only if its ideal boundary = ∖ is closed in (V). In the non-locally compact setting, there are several possible definitions for proper discontinuity and cocompactness, of varying strengths, see <cit.> or <cit.>. We will use the following definitions: the action of a discrete group Γ on a topological spaceis properly discontinuous if for any compact subset K of , the set of elements γ∈Γ such that K ∩γ· K ≠∅ is finite. The action is cocompact if there exists a compact subset 𝒟 of  such that = ⋃_γ∈Γγ·𝒟.Then the following holds.Let Γ be a discrete subgroup of (V) anda Γ-invariant convex subset of (V). Suppose the action of Γ on  is properly discontinuous and cocompact. Thenis closed in (V).Let (z_m)_m∈ be a sequence of points ofconverging to some z∈(). Suppose for contradiction that z ∉, so that z ∈. For each m, let (y_m,k)_k∈ be a sequence of points of  converging to z_m as k → +∞. Let 𝒟⊂ be a compact subset such that = ⋃_γ∈Γγ·𝒟. For any m,k∈, there exists γ_m,k∈Γ such that γ_m,k· y_m,k∈𝒟. Note that for each m, the collection {γ_m,k}_k=1^∞ is infinite. We now choose a sequence (y_m,k_m)_m∈ inductively as follows. Fix an auxiliary metric d(·, ·) on (V). Let k_1 be such that d(y_1,k_1, z_1) < 1. For each m > 1, let k_m be such that d(y_m,k_m, z_m) < 1/m and γ_m, k_m is distinct from all γ_1,k_1, …, γ_m-1, k_m-1.Then y_m,k_m converges to z as m→ +∞ and the sequence (γ_m,k_m)_m=1^∞ is injective. The compact subset {z}∪{y_m,k_m}_m∈ ofthen has the property that its translate by any of the infinitely many elements {γ_m,k_m}_m=1^∞ intersects the compact set 𝒟. This contradicts properness.§.§ Nonempty interior The following observation shows that in the setting of Definition <ref> we may always assumeto have nonempty interior.Let Γ be an infinite discrete subgroup of (V) preserving a properly convex open subset Ω of (V) and acting cocompactly on some nonempty closed convex subsetof Ω. 
For R>0, let _R be the closed uniform R-neighborhood ofin (Ω,d_Ω). Then _R is a closed convex subset of Ω with nonempty interior on which Γ acts cocompactly.The set _R is properly convex by <cit.>. The group Γ acts properly discontinuously on _R since it acts properly discontinuously on Ω, and cocompactly on _R since it acts cocompactly on : the set _R is the union of the Γ-translates of the closed uniform R-neighborhood of a compact fundamental domain ofin (Ω,d_Ω).§.§ Maximal invariant convex setsThe following was first observed by Benoist <cit.> for discrete subgroups of (V) acting irreducibly on (V). Here we do not make any irreducibility assumption.Let Γ be a discrete subgroup of (V) preserving a nonempty properly convex open subset Ω of (V) and containing a proximal element. Let Λ_Γ (Λ_Γ^*) be the proximal limit set of Γ in (V) ((V^*)) (Definition <ref>). Then *Λ_Γ (Λ_Γ^*) is contained in the boundary of Ω (its dual Ω^*); *more specifically, Ω and Λ_Γ lift to cones Ω and Λ_Γ of V∖{0} with Ω properly convex containing Λ_Γ in its boundary, and Ω^* and Λ_Γ^* lift to cones Ω^* and Λ_Γ^* of V^*∖{0} with Ω^* properly convex containing Λ_Γ^* in its boundary, such that φ(v)≥ 0 for all v∈Λ_Γ and φ∈Λ_Γ^*; *for Λ_Γ^* as in (<ref>), the setΩ_max = ({ v∈ V  | φ(v)>0∀φ∈Λ_Γ^*})is the unique connected component of(V) ∖⋃_z^*∈Λ_Γ^* z^* containing Ω; it is Γ-invariant, convex, and open in (V); any Γ-invariant properly convex open subset Ω' of (V) containing Ω is contained in Ω_max. (<ref>) Let γ∈Γ be proximal in (V), with attracting fixed point z_γ^+ and complementary γ-invariant hyperplane H_γ^-. Since Ω is open, there exists x∈Ω∖ H_γ^-. We then have γ^m· x→ z_γ^+, and so z_γ^+∈∂Ω since the action of Γ on Ω is properly discontinuous. Thus Λ_Γ⊂∂Ω. Similarly, Λ_Γ^*⊂∂Ω^*.(<ref>) The set Ω lifts to a properly convex cone Ω of V∖{0}, unique up to global sign. This determines a cone Λ_Γ of V∖{0} lifting Λ_Γ⊂∂Ω and contained in the boundary of Ω. By definition, Ω^* is the projection to (V^*) of the dual coneΩ^* := {φ∈ V^*∖{0} | φ(v)>0∀ v∈Ω∖{0}}.This cone determines a cone Λ^*_Γ of V^*∖{0} lifting Λ_Γ^*⊂∂Ω^* and contained in the boundary of Ω^*. By construction, φ(v)≥ 0 for all v∈Λ_Γ and φ∈Λ_Γ^*.(<ref>) The set Ω_max := ({ v∈ V| φ(v)>0 ∀φ∈Λ_Γ^*}) is a connected component of (V) ∖⋃_z^*∈Λ_Γ^* z^*. It is convex, open in (V) by compactness of Λ_Γ^*, and it contains Ω. The action of Γ, which permutes the connected components of (V) ∖⋃_z^*∈Λ_Γ^* z^*, preserves Ω_max because it preserves Ω. Moreover, any Γ-invariant properly convex open subset Ω' of (V) containing Ω is contained in Ω_max: indeed, Ω' cannot meet z^* for z^*∈Λ_Γ^* since Λ_Γ^*⊂∂Ω'^* by (<ref>). 
In the context of Proposition <ref>, when Γ preserves a nonempty properly convex open subset Ω of (V) but does not act irreducibly on (V), the following may happen: * Γ may not contain any proximal element in (V): this is the case if V = V'⊕ V' for some vector space V' and Γ⊂(V) is the image of a diagonal embedding of a discrete group Γ̂'⊂^±(V') preserving a properly convex open set in (V'); * assuming that Γ contains a proximal element in (V) (hence Λ_Γ,Λ_Γ^*≠⁠∅), the set Ω_max of Proposition <ref>.(<ref>) may fail to be properly convex: this is the case if Γ is a convex cocompact subgroup of (2,1)_0, embedded into (^4) where it preserves the properly convex open set ^3 ⊂(^4) of Example <ref>; * even if Ω_max is properly convex, it may not be the unique maximal Γ-invariant properly convex open set in (V): indeed, there may be multiple components of (V) ∖⋃_z^*∈Λ_Γ^* z^* that are properly convex and Γ-invariant, as in Example <ref> below.However, if Γ acts irreducibly on (V), then the following holds.[<cit.>]Let Γ be a discrete subgroup of (V) acting irreducibly on (V) and preserving a nonempty properly convex open subset Ω of (V). Then *Γ always contains a proximal element and the set Ω_max of Proposition <ref>.(<ref>) is always properly convex (see Figure <ref>); it is a maximal Γ-invariant properly convex open subset of (V) containing Ω; *if moreover Γ acts strongly irreducibly on (V) (all finite-index subgroups of Γ act irreducibly), then Ω_max is the unique maximal Γ-invariant properly convex open set in (V); it contains all other invariant properly convex open subsets; *in general, there is a smallest nonempty Γ-invariant convex open subset Ω_min of Ω_max, namely the interior of the convex hull of Λ_Γ in Ω_max.In general, the following strengthening of Proposition <ref>.(<ref>) holds.Let Γ be a discrete subgroup of (V) preserving a nonempty properly convex open subset Ω of (V). Then the proximal limit set Λ_Γ is contained in the set of accumulation points of any Γ-orbit of Ω; in particular, Λ_Γ is contained in the full orbital limit set _Ω(Γ) (Definition <ref>).For any proximal element γ∈Γ, the repelling hyperplane of γ is disjoint from (in fact, tangent to) Ω, by Proposition <ref>.(<ref>).Therefore, for any point x∈Ω, the sequence (γ^m · x)_m∈ℕ converges to the attracting fixed point of γ. A diagonal extraction argument shows that Γ· x contains the whole proximal limit set Λ_Γ. §.§ The case of a connected proximal limit set We make the following observation.Let Γ be an infinite discrete subgroup of (V) preserving a nonempty properly convex open subset Ω of (V). Suppose the proximal limit set Λ_Γ^* of Γ in (V^*) (Definition <ref>) is connected. Then the set (V) ∖⋃_z^*∈Λ_Γ^* z^* is a nonempty Γ-invariant convex open subset of (V), not necessarily properly convex, but containing all Γ-invariant properly convex open subsets of (V); it is equal to the set Ω_max of Proposition <ref>. This lemma will follow from a basic fact about well-definedness of convex hulls in projective space.*Let L be a closed, connected subset of (V), and let H≠ H' be two hyperplanes in (V) disjoint from L. Then the convex hull of L taken in the affine chart (V) ∖ H agrees with the convex hull of L taken in the affine chart (V) ∖ H'. *Let L^* be a closed, connected subset of (V^*). Then, thinking of each z^* ∈(V^*) as a hyperplane in (V), the open set (V) ∖⋃_z^*∈ L^* z^* has at most one connected component, which is convex. (<ref>) The set (V) ∖ (H ∪ H') has two connected components, each of which is convex. 
Since L is connected, it must be contained in exactly one of these, which we denote by 𝒪. The convex hull of Λ in (V) ∖ H, or in (V) ∖ H', agrees with the convex hull of Λ in 𝒪.(<ref>) Let Ω be a connected component of (V) ∖⋃_z^*∈ L^* z^*. The dual convex set Ω^* contains L^* in its closure. In fact, if x ∈Ω, then Ω^* is the convex hull of L^*, taken in the affine chart (V^*) ∖ x, where we think of x ∈(V) = ((V^*)^*) as a hyperplane in (V^*).Since L^* is connected, it now follows from (<ref>) that the open set (V) ∖⋃_z^*∈ L^* z^* can have at most one connected component.By Lemma <ref>, since Λ_Γ^* is connected, the set (V) ∖⋃_z^*∈Λ_Γ^* z^* has either one or zero connected components. It contains Ω, hence is nonempty, so it must have exactly one component, which is convex.§ CONVEX COCOMPACT AND NAIVELY CONVEX COCOMPACT SUBGROUPSIn this section we investigate the difference between naively convex cocompact groups (Definition <ref>) and convex cocompact groups in (V) (Definition <ref>). We also study a notion of conical limit points and how it relates to faces of the domain Ω, and study minimality properties of the convex core.§.§ ExamplesThe following basic examples are designed to make the notions of convex cocompactness and naive convex cocompactness in (V) more concrete, and to point out some subtleties.Let V=^n with standard basis (e_1,…,e_n), where n≥ 2. Let Γ≃^n/≃^n-1 be the discrete subgroup of (V) of diagonal matrices whose entries are powers of some fixed t>⁠ 1; it is not word hyperbolic if n≥ 3. The hyperplanesH_k = (span(e_1,…,e_k-1,e_k+1,…,e_n))for 1≤ k≤ n cut (V) into 2^n-1 properly convex open connected components Ω, which are not strictly convex if n≥ 3 (see Figure <ref>). The group Γ acts properly discontinuously and cocompactly on each of them, hence is convex cocompact in (V) (see Example <ref>). In Example <ref> the set of accumulation points of one Γ-orbit of Ω depends on the choice of orbit (see Figure <ref>), but the full orbital limit set _Ω(Γ)=∂Ω depends only on Ω. The proximal limit set Λ_Γ (Definition <ref>) is the set of extreme points ofΩ and is the projectivized standard basis, independently of the choice of Ω.Here is an irreducible variant of Example <ref>, containing it as a finite-index subgroup. It shows that even when Γ acts irreducibly on (V), it may preserve and divide several disjoint properly convex open subsets of (V). (This does not happen under strong irreducibility, see Fact <ref>.(<ref>).)Let F be a finite group acting transitively on a set I of cardinality n≥ 2.Let V=^I≃^n, and Γ≃^I/ be as in Example <ref>. Then Γ:=F⋉Γ acts irreducibly on (V), preserving the positive orthant Δ_+:=(_>0^I). For any index-two subgroup F' of F whose restricted action has two orbits I',I” partitioning I, the same group Γ also preserves and acts cocompactly on the orthantΔ_F':=(_>0^I'×_<0^I”). There may be many such F'⊂ F, any index-two subgroup for F=I=(/2)^ν with ν∈^*. Here is an example showing that a discrete subgroup which is convex cocompact in (V) need not act convex cocompactly on every invariant properly convex open subset of (V). Suppose Γ is a convex cocompact subgroup (in the classical sense) of (p-1,1)⊂(p,1) ⊂(^p+1). Then Γ acts convex cocompactly (Definition <ref>), and even strongly convex cocompactly (Definition <ref>), on the properly convex open set ^p, hence Γ is strongly convex cocompact in (^p+1). Note that the set Λ_Γ is contained in the equatorial sphere ∂^p-1 of ∂^p⊂(^p+1). 
Let Ω be one of the two Γ-invariant hyperbolic half-spaces of ^p bounded by ^p-1. Then _Ω(Γ) ⊂_^p(Γ) ∩Ω = ∅. Hence Γ does not act convex cocompactly on Ω. The following basic examples, where Γ is a cyclic group acting on the projective plane (^3), may be useful to keep in mind.Let V=^3, with standard basis (e_1,e_2,e_3). Let Γ be a cyclic group generated by an element γ∈(V). *Suppose γ=([ a 0 0; 0 b 0; 0 0 c ]) where a > b > c > 0. Then γ has attracting fixed point [e_1] and repelling fixed point [e_3]. There is a Γ-invariant properly convex open neighborhood Ω of an open segment ([e_1], [e_3]) connecting [e_1] to [e_3]. The full orbital limit set _Ω(Γ) is just {[e_1],[e_3]} and its convex hull _Ω(Γ) = ([e_1], [e_3]) has compact quotient by Γ (a circle). Thus Γ is convex cocompact in (V). *Suppose γ=([ a 0 0; 0 b 0; 0 0 b ]) where a > b > 0. Any Γ-invariant properly convex open set Ω is a triangle with tip [e_1] and base an open segment I of the line (span(e_2,e_3)).For any x∈Ω, the Γ-orbit of x has two accumulation points, namely [e_1] and the intersection of I with the projective line through [e_1] and x.Therefore, the full orbital limit set _Ω(Γ) consists {[e_1]}∪ I, hence the convex hull _Ω(Γ) of _Ω(Γ) is the whole of Ω, and Γ does not act cocompactly on it. Thus Γ is not convex cocompact in (V). However, Γ is naively convex cocompact in (V) (Definition <ref>): it acts cocompactly on the convex hullin Ω of {[e_1]} and of any closed segment I'⊂ I. The quotient Γ\ is a closed convex projective annulus (or a circle if I' is reduced to a singleton). *Suppose γ=([ 2 0 0; 0 1 t; 0 0 1 ]) where t>0. Then there exist nonempty Γ-invariant properly convex open subsets of (V), for instance Ω_s = ({ (v_1,v_2,1)|v_1>s 2^v_2/t}) for any s>0. However, for any such set Ω, we have _Ω(Γ)={[e_1],[e_2]}, and the convex hull of _Ω(Γ) in Ω is contained in ∂Ω (see <cit.>). Hence _Ω(Γ) is empty and Γ is not convex cocompact in (V). It is an easy exercise to check that Γ is not even naively convex cocompact in (V). *Suppose γ=([a00;0bcosθ -bsinθ;0bsinθbcosθ ]) where a > b > 0 and 0 < θ≤π. Then Γ does not preserve any nonempty properly convex open subset of (V) (see <cit.>).* Convex cocompactness in (V) is not a closed condition in general. Indeed, Example <ref>.(<ref>), which is convex cocompact, can limit to Example <ref>.(<ref>), which is not. *Naive convex cocompactness (Definition <ref>) is not an open condition (even if we require the cocompact convex subset ⊂Ω to have nonempty interior, which we can always do by Lemma <ref>). Indeed, Example <ref>.(<ref>), which satisfies the condition, is a limit of both <ref>.(<ref>) and <ref>.(<ref>), which do not.Here is a slightly more complicated example of a discrete subgroup of (V) which is naively convex cocompact but not convex cocompact in (V). Let Γ_1 be a convex cocompact subgroup of (2,1) ⊂(^3), let Γ_2≃ be a discrete subgroup of (^1)≃^* acting on ^1 by scaling, and let Γ=Γ_1×Γ_2⊂(V), where V:=^3⊕^1. Any Γ-invariant properly convex open subset Ω of (^3 ⊕^1) is a cone with base some Γ_1-invariant properly convex open subset Ω_1 of (^3) ⊂(V) and tip z := (^1) ⊂(V). The full orbital limit set _Ω(Γ) contains both {z} and the full base Ω_1, hence _Ω(Γ) is equal to Ω. Therefore Γ acts convex cocompactly on Ω if and only if Γ_1 divides Ω_1, if and only if Γ_1 is cocompact (not just convex cocompact) in (2,1). 
On the other hand, Γ is always naively convex cocompact in (V): it acts cocompactly on the closed convex subcone ⊂Ω with tip z and base the convex hull in ^2 of the limit set of Γ_1. In further work <cit.>, we shall describe examples of discrete subgroups of (V) which are naively convex cocompact but not convex cocompact in (V), and which act irreducibly on (V). This includes free discrete subgroups of (^4) containing Example <ref>.(<ref>) as a free factor. §.§ Finite-index subgroupsWe observe that the notions of convex cocompactness and naive convex cocompactness in (V) behave well with respect to finite-index subgroups.Let Γ be a discrete subgroup of (V) preserving a properly convex open subset Ω of (V), and let Γ' be a finite-index subgroup of Γ. Then * _Ω(Γ)=_Ω(Γ'); in particular, Γ' is convex cocompact in (V) if and only if Γ is;* if Γ' acts cocompactly on some closed convex subset ' of Ω, then Γ acts cocompactly on some closed convex subsetof Ω; in particular, Γ' is naively convex cocompact in (V) if and only if Γ is. Write Γ as the disjoint union of cosets Γ'γ_1,…,Γ'γ_m where γ_i∈Γ. Since any orbit Γ· x, for x∈Ω, is a union of m orbits Γ'·(γ_i · x), we have _Ω(Γ) = _Ω(Γ'), hence _Ω(Γ)=_Ω(Γ').Suppose Γ' acts cocompactly on some closed convex subset ' of Ω, and let 𝒟' ⊂' be a compact fundamental domain for this action. There exists a closed uniform neighborhood '_𝗎𝗇𝗂𝖿 of ' in (Ω,d_Ω) such that ⋃_i=1^m γ_i·𝒟' ⊂'_𝗎𝗇𝗂𝖿. The group Γ' acts cocompactly on '_𝗎𝗇𝗂𝖿 (Lemma <ref>). We have Γ·' = ⋃_i=1^m Γ'γ_i·𝒟' ⊂'_𝗎𝗇𝗂𝖿 since '_𝗎𝗇𝗂𝖿 is Γ'-invariant. Let ⊂'_𝗎𝗇𝗂𝖿 be the convex hull of Γ·' in Ω. Thenis Γ-invariant and the action of Γ' (hence of Γ) on  is cocompact.§.§ Conical limit points We use the following terminology.Let Ω be a nonempty properly convex open subset of (V) and let z∈∂Ω. * A sequence (y_m)_m∈∈Ω^ converges conically to z if y_m→ z and the supremum over m∈ of the distances d_Ω(y_m, [y,z)) is finite for some (hence any) y∈Ω.Suppose in addition that Ω is preserved by an infinite discrete subgroup Γ of (V). * The point z is a conical limit point of the Γ-orbit of some point y∈Ω if there exists a sequence (γ_m)∈Γ^ such that (γ_m· y)_m∈ converges conically to z. In this case we say that z is a conical limit point of Γ in ∂Ω. * The conical limit set of Γ in ∂Ω is the set _Ω(Γ) of conical limit points of Γ in ∂Ω.Here [y,z) denotes the projective ray of Ω starting from y with endpoint z. By definition, the conical limit set _Ω(Γ) is contained in the full orbital limit set _Ω(Γ).The following observation will have several important consequences.Let Γ be an infinite discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). For z∈∂Ω and a sequence (γ_m)∈Γ^ of pairwise distinct elements, if there exists y∈Ω such that d_Ω(γ_m· y, [y,z)) is bounded, then there exists y'∈Ω in the closure of ⋃_γ∈Γγ· [y,z) in Ω such that a subsequence of (γ_m· y')_m∈ converges conically to z; in particular, z∈_Ω(Γ).Suppose there exist a sequence (γ_m)∈Γ^ of pairwise distinct elements, a point y∈Ω, and a sequence (y_m)_m∈ of points of [y,z) such that d_Ω(γ_m· y,y_m) is bounded. Then d_Ω(y,γ_m^-1· y_m) is bounded, and so, up to passing to a subsequence, we may assume that γ_m^-1· y_m→ y' for some y'∈Ω. We have y_m→ z and d_Ω(y_m,γ_m· y')→ 0, hence γ_m· y'→ z by Corollary <ref>. Moreover, d_Ω(γ_m· y', [y',z)) is bounded. Indeed, d_Ω(γ_m· y',γ_m· y) = d_Ω(y',y) and d_Ω(γ_m· y, [y,z)) is bounded, hence d_Ω(γ_m· y', [y,z)) is bounded by the triangle inequality. 
We conclude using the fact that the rays [y,z) and [y',z) remain at bounded distance for d_Ω. Let Γ be an infinite discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). Suppose Γ acts cocompactly on some closed convex subsetof Ω. Then *the ideal boundaryis contained in _Ω(Γ); more precisely, any point ofis a conical limit point of the Γ-orbit of some point of ;*_Ω(Γ) = _Ω(Γ);*ifcontains _Ω(Γ), then = _Ω(Γ) = _Ω(Γ); in that case, the set _Ω(Γ) is closed in (V), the set _Ω(Γ) is closed in Ω, and the action of Γ on Ω is convex cocompact (Definition <ref>).As a special case, Corollary <ref>.(<ref>) with =Ω shows that if Γ divides (acts cocompactly on) Ω, then _Ω(Γ) = ∂Ω and _Ω(Γ) = Ω (this also follows from <cit.>) and the action of Γ on Ω is convex cocompact in the sense of Definition <ref>. (<ref>) Sinceis convex and the action of Γ on  is cocompact, for z∈ and y∈, all points in the ray [y,z) lie at uniformly bounded distance from Γ· y. We conclude using Lemma <ref>.(<ref>) Let z∈_Ω(Γ): it is a limit of points in Γ· y for some y∈Ω. Let _𝗎𝗇𝗂𝖿 be a closed uniform neighborhood of  in (Ω,d_Ω) containing y. The group Γ still acts cocompactly on _𝗎𝗇𝗂𝖿, and z∈_𝗎𝗇𝗂𝖿 (Lemma <ref>), hence z∈_Ω(Γ) by (<ref>).(<ref>) Ifcontains _Ω(Γ), then ⊃_Ω(Γ)⊃_Ω(Γ), and so = _Ω(Γ) = _Ω(Γ) by (<ref>). In particular, _Ω(Γ) is closed in (V) by Remark <ref>, and so its convex hull _Ω(Γ) is closed in Ω, and compact modulo Γ becauseis.§.§ Open faces of ∂Ω We shall use the following terminology.For any properly convex open subset Ω of (V) and any z∈∂Ω, the open face F_∂Ω(z) of ∂Ω at z is the union of {z} and of all open segments of ∂Ω containing z. In other words, F_∂Ω(z) is the largest convex subset of ∂Ω containing z which is relatively open, in the sense that it is open in the projective subspace (W) that is spans. In particular, we can consider the Hilbert metric d_F_∂Ω(z) on F_∂Ω(z) seen as a properly convex open subset of (W) (if F_∂Ω(z) = {z}, take d_F_∂Ω(z):=0). The set ∂Ω is the disjoint union of its open faces.Here is a consequence of Lemma <ref>.Let Γ be a discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). Then the conical limit set _Ω(Γ) is a union of open faces of ∂Ω.Suppose z∈_Ω(Γ): there exist y∈Ω and (γ_m)∈Γ^ such that (γ_m· y)_m∈ converges conically to z. For any z'∈ F_∂Ω(z), the rays [y,z) and [y,z') remain at bounded distance for d_Ω, hence d_Ω(γ_m· y, [y,z')) is bounded. By Lemma <ref>, there exists y'∈Ω such that some subsequence of (γ_m· y')_m∈ converges conically to z', and so z'∈_Ω(Γ). Corollaries <ref>.(<ref>) and <ref> immediately yield the following.Let Γ be a discrete subgroup of (V) and Ω a Γ-invariant properly convex open subset of (V). If Γ acts cocompactly on some nonempty closed convex subsetof Ω, then the full orbital limit set _Ω(Γ) is a union of open faces of ∂Ω. The following lemma will be used in the proofs of Lemma <ref> and Proposition <ref>.(<ref>) below. We endow any open face F of ∂Ω with its Hilbert metric d_F.Let Ω be a nonempty properly convex open subset of (V) and let R>0. *Let (x_m)_m∈ and (x'_m)_m∈ be sequences of points of Ω converging respectively to points z and z' of ∂Ω. If d_Ω(x_m,x'_m)≤ R for all m∈, then z and z' belong to the same open face F of ∂Ω and d_F(z,z') ≤ R. *Letbe a nonempty closed convex subset of Ω and let _R be the closed uniform R-neighborhood ofin (Ω,d_Ω). 
Then for any open face F of ∂Ω, the set _R ∩ F is equal to the closed uniform R-neighborhood of ∩ F in (F,d_F).(<ref>) We may assume z≠ z'.For any m∈, let a_m, b_m ∈∂Ω be such that a_m, x_m, x'_m, b_m are aligned in this order. Up to passing to a subsequence, we may assume that (a_m)_m∈, (b_m)_m∈ converge respectively to a,b ∈∂Ω, with a, z, z', b aligned in this order.By continuity of the cross-ratio, lim_m a_mx_mx'_mb_m = azz'b. Using d_Ω(x_m,x'_m)≤ R and the definition (<ref>) of the Hilbert metric, we deduce a≠ z and z'≠ b and lim_m d_Ω(x_m, x'_m) = d_(a,b)(z,z'), where d_(a,b) is the Hilbert metric on the interval (a,b). The interval (a,b) (hence also z and z') is contained in some open face F of ∂Ω. We have d_F(z,z') ≤ d_(a,b)(z,z') (Remark <ref>), with equality if and only if a and b both lie in the boundary of F. Thus d_F(z,z') ≤lim_m d_Ω(x_m, x'_m) ≤ R.(<ref>) By (<ref>), the set _R ∩ F is contained in the closed uniform R-neighborhood of ∩ F in (F,d_F). Let us prove the reverse inclusion. If F is a singleton, there is nothing to prove. If not, consider z ∈∩ F andz' ∈ F with 0<d_F(z,z') ≤ R. In order to check that z' ∈_R, it is enough to choose a point x ∈ and check that the ray [x,z') is contained in _R. Let Ω' (F') be the intersection of Ω (F) with the two-dimensional projective subspace of (V) spanned by x, z, z'. Observe that F' = (a,b) is an interval and that d_F' is equal to d_F on F' (see Figure <ref>). For any y' ∈ [x,z'), let y ∈ [x,z) ⊂ be such that the line between y, y' does not intersect F' (choose y to be the intersection with (x,z) of the line through y' parallel to the direction of F' in some affine chart of (W) containing Ω').If T ⊂Ω' denotes the open triangle spanned by x, a, b, then d_Ω(y,y') ≤ d_T(y,y') = d_F'(z,z') ≤ R, hence y' ∈_R.The way that Hilbert distances behave as points approach an open face F of ∂Ω can be quite subtle. Let (x_m)_m∈ be a sequence of points in Ω converging to some z ∈ F, and consider the sequence of closed R-balls B_d_Ω(x_m,R). By Lemma <ref>.(<ref>), their limit in the Hausdorff topology (if it exists) is some closed subset of F contained in the closed R-ball B_d_F(z,R). However, it is sometimes the case that the containment is strict: see Appendix <ref>. In the case that x_m → z conically, it can be shown that the limit contains some ball around z, but the radius may be smaller than R: we give precise bounds in Lemma <ref>. In general, the limit may fail to contain a neighborhood of z in F (see Example <ref>).§.§ Minimality of the convex core for convex cocompact actionsRecall that _Ω(Γ) denotes the convex core of Ω for Γ, the convex hull of _Ω(Γ) in Ω.Let Γ be a discrete subgroup of (V) acting convex cocompactly (Definition <ref>) on some nonempty properly convex open subset Ω of (V). Then any nonempty Γ-invariant closed convex subsetof Ω contains _Ω(Γ). In particular, the nonempty Γ-invariant closed convex subsets of Ω on which the action of Γ is cocompact are exactly those nested between _Ω(Γ) and some uniform neighborhood of _Ω(Γ) in (Ω,d_Ω).Consider a nonempty closed convex Γ-invariant subsetof Ω.Let us prove thatcontains _Ω(Γ). By definition of _Ω(Γ), it is sufficient to prove thatcontains _Ω(Γ). For this, it is sufficient to prove that _Ω(Γ) ∩ F ⊂∩ F for any open face F of ∂Ω. Since the action of Γ on _Ω(Γ) is cocompact, there exists R > 0 such that _Ω(Γ) is contained in the closed uniform R-neighborhood _R ofin (Ω,d_Ω). We have _Ω(Γ) ⊂_Ω(Γ) ⊂_R. Let F be an open face of ∂Ω. 
Then _Ω(Γ) ∩ F ⊂_R ∩ F.By Corollary <ref>, the set _Ω(Γ) ∩ F is either empty (in which case there is nothing to prove) or equal to F. Suppose _Ω(Γ) ∩ F = F and hence _R ∩ F = F. By Lemma <ref>.(<ref>), the set _R ∩ F is the closed uniform R-neighborhood ofin (F,d_F). Since the Hilbert metric d_F is proper on F, it follows that ∩ F = F. This shows that ⊃_Ω(Γ) and so ⊃_Ω(Γ).Moreover, the action of Γ on  is cocompact if and only ifis contained in a uniform neighborhood of _Ω(Γ) in (Ω,d_Ω).Let Γ be a discrete subgroup of (V) dividing (acting cocompactly on) some nonempty properly convex open subset Ω of (V). Then any Γ-invariant properly convex open subset of (V) intersecting Ω nontrivially is equal to Ω.Let Ω' be a Γ-invariant properly convex open subset of (V) intersecting Ω nontrivially. By Corollary <ref>.(<ref>), we have ∂Ω = _Ω(Γ) and _Ω(Γ) = Ω. By Lemma <ref>, we have Ω∩Ω' = Ω, and so Ω⊂Ω'. By Remark <ref> applied to (Ω',Ω) instead of (Ω,), we have ∂Ω = ∂Ω' ∩Ω, hence Ω = Ω'.§.§ On which convex sets Ω is the action convex cocompact? Here is a consequence of Corollary <ref>.(<ref>).Let Γ be a discrete subgroup of (V) preserving two nonempty properly convex open subsets Ω⊂Ω' of (V). *Suppose the action of Γ on Ω is convex cocompact. Then the action of Γ on Ω' is convex cocompact, _Ω(Γ) = _Ω'(Γ), and _Ω(Γ) = _Ω'(Γ). *Suppose the action of Γ on Ω' is convex cocompact. Then the action of Γ on Ω is convex cocompact if and only if _Ω(Γ) = _Ω'(Γ), if and only if Ω contains _Ω'(Γ). (<ref>) We first check that the inclusion _Ω(Γ) ⊂_Ω'(Γ) is an equality. For this it is sufficient to check that for any open face F' of ∂Ω' meeting _Ω'(Γ), we have F' ∩⁠_Ω(Γ) = F'. The set F' ∩_Ω(Γ) is nonempty by Lemma <ref>.(<ref>). Since Γ acts cocompactly on _Ω(Γ), there exists ε>0 such that the uniform ε-neighborhood _ε of _Ω(Γ) in (Ω',d_Ω') is contained in Ω. The uniform ε-neighborhood of F' ∩_Ω(Γ) in (F',d_F') is equal to _ε by Lemma <ref>.(<ref>), hence to F' ∩_Ω(Γ) itself by Corollary <ref>.(<ref>), and so F' ∩_Ω(Γ) = F'. This shows that _Ω(Γ) = _Ω'(Γ). We deduce that _Ω'(Γ) is the closure of _Ω(Γ) in Ω'. Since the action of Γ on _Ω(Γ) is cocompact by assumption, the set _Ω(Γ) must already be closed in Ω' (Remark <ref>), hence _Ω'(Γ) = _Ω(Γ) and the action of Γ on Ω' is convex cocompact.(<ref>) If the action of Γ on Ω is convex cocompact, then _Ω(Γ) = _Ω'(Γ) by (<ref>), and so Ω contains _Ω'(Γ). Conversely, if Ω contains _Ω'(Γ), then _Ω'(Γ) is a closed convex subset of Ω containing _Ω(Γ) on which Γ acts cocompactly; by Corollary <ref>.(<ref>), the action of Γ on Ω is convex cocompact. In the irreducible case, we obtain the following description of the convex sets on which the action is convex cocompact. Using Fact <ref>.(<ref>), it shows that if Γ acts strongly irreducibly on (V) and is convex cocompact in (V), then the set _Ω(Γ) is the same for all properly convex open subsets Ω of (V) on which Γ acts convex cocompactly.Let Γ be an infinite discrete subgroup of (V) acting convex cocompactly on a nonempty properly convex open subset Ω of (V). Suppose that Γ contains a proximal element, so that the maximal convex open set Ω_max⊃Ω of Proposition <ref> is well defined, and suppose that Ω_max is properly convex (this is always the case if Γ acts irreducibly on (V), see Fact <ref>). 
Then *the properly convex open subsets Ω' of Ω_max on which Γ acts convex cocompactly are exactly those containing _Ω_max(Γ); they satisfy _Ω'(Γ) = _Ω_max(Γ) and _Ω'(Γ) = _Ω_max(Γ); *if Γ acts irreducibly on (V), then_Ω_max(Γ) = ∂Ω_min∩∂Ω_maxand_Ω_max(Γ) = Ω_min∩Ω_max,where Ω_min⊂Ω is the minimal nonempty Γ-invariant properly convex set given by Fact <ref>. Thus, for any properly convex open subset Ω' of Ω_max on which Γ acts convex cocompactly, the convex core _Ω'(Γ) is the convex hull in Ω' of the proximal limit set Λ_Γ. (<ref>) This is an immediate consequence of Proposition <ref>.(<ref>) Suppose Γ acts irreducibly on (V). By Remark <ref>, the set := _Ω_max(Γ) is closed in Ω_max. The interior () is nonempty since Γ acts irreducibly on (V), and () contains Ω_min. In fact () = Ω_min: indeed, if Ω_min were strictly smaller than (), then the closure of Ω_min inwould be a nonempty strict closed subset ofon which Γ acts properly discontinuously and cocompactly, contradicting Lemma <ref>. Thus = Ω_min∩Ω_max, which is the convex hull of the proximal limit set Λ_Γ in Ω_max. Moreover, by Corollary <ref>.(<ref>) we have _Ω_max(Γ) == Ω_min∩∂Ω_max. For an arbitrary discrete subgroup Γ of (V) acting irreducibly on (V) and preserving a nonempty properly convex open subset Ω of (V), the convex hull _Ω(Γ) of the full orbital limit set _Ω(Γ) may be larger than the convex hull Ω_min∩Ω_max of the proximal limit set Λ_Γ in Ω. This happens for instance if Γ is naively convex cocompact in (V) (Definition <ref>) but not convex cocompact in (V). Examples of such behavior will be given in the forthcoming paper <cit.>. §.§ Convex cocompactness and conicality By Corollary <ref>.(<ref>), if Γ acts convex cocompactly on Ω, then the full orbital limit set _Ω(Γ) consists entirely of conical limit points.We now investigate the converse; this is not needed anywhere in the paper. We also refer the reader to recent work of Weisman <cit.>, in which conical convergence is studied further in relation to an expansion condition at faces of the full orbital limit set.When Ω is strictly convex with boundary of class C^1, the property that _Ω(Γ) consist entirely of conical limit points implies that the action of Γ on Ω is convex cocompact by <cit.>. In general, the following holds, as was pointed out to us by Pierre-Louis Blayac. Let Γ be an infinite discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). *Ifis a nonempty Γ-invariant closed convex subset of Ω whose ideal boundaryconsists entirely of conical limit points of Γ in ∂Ω, then the action of Γ on  is cocompact.*In particular, if _Ω(Γ) is closed in Ω and _Ω(Γ) consists entirely of conical limit points of Γ in ∂Ω, then the action of Γ on Ω is convex cocompact. (<ref>) Fix a point x_0∈ and, by analogy with Dirichlet domains, let𝒟 := { x∈ |  d_Ω(x,x_0) ≤ d_Ω(x,γ· x_0) ∀γ∈Γ} .Note that Γ·𝒟 =. Therefore, it is sufficient to prove that 𝒟 is compact. Suppose by contradiction that this is not the case: there exists a sequence (x_m)∈𝒟^ such that d_Ω(x_m,x_0)→ +∞. Up to passing to a subsequence, we may assume that (x_m)_m∈ converges to some point z∈. By assumption z is a conical limit point of Γ in ∂Ω: there exist R>0, and a sequence (γ_m)∈Γ^ of pairwise distinct elements such that for any m∈ we can find a point y_m on the ray [x_0,z) with d_Ω(y_m,γ_m· x_0)≤ R. Then y_m→ z, and so we can find m such that d_Ω(x_0,y_m)≥ R+2. Let us fix such an m. 
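As an illustrative aside, the conical convergence at play in this proof can be observed numerically. In the sketch below (our own toy choice of group element and basepoint; not part of the proof), Ω is the Klein model of the hyperbolic plane, where the Hilbert metric is computed from the cross-ratio of chord endpoints, and γ is a hyperbolic translation: the orbit γ^m·x_0 tends to a boundary point z while staying at bounded d_Ω-distance from the ray [o,z), which is exactly the conicality used here.

```python
import numpy as np

def hilbert_disk(p, q):
    # Hilbert metric of the open unit disk (Klein model of the hyperbolic
    # plane), via the cross-ratio of the endpoints of the chord through p, q.
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.allclose(p, q):
        return 0.0
    d = q - p
    A, B, C = d @ d, 2.0 * (p @ d), p @ p - 1.0
    disc = np.sqrt(B * B - 4.0 * A * C)
    t0, t1 = (-B - disc) / (2.0 * A), (-B + disc) / (2.0 * A)  # t0 < 0 < 1 < t1
    return 0.5 * np.log(((1.0 - t0) * t1) / ((-t0) * (t1 - 1.0)))

t = 0.5
gamma = np.array([[np.cosh(t), 0.0, np.sinh(t)],   # hyperbolic translation
                  [0.0,        1.0, 0.0       ],   # along the x-axis; its
                  [np.sinh(t), 0.0, np.cosh(t)]])  # attracting point is z = [1:0:1]

x0 = np.array([0.0, 0.3, 1.0])                     # a basepoint off the axis
for m in range(1, 13):
    v = np.linalg.matrix_power(gamma, m) @ x0
    p = v[:2] / v[2]                               # affine chart of P(R^3)
    r = np.array([np.tanh(m * t), 0.0])            # comparison point on the ray [o, z)
    print(m, round(hilbert_disk([0.0, 0.0], p), 3), round(hilbert_disk(p, r), 3))
    assert hilbert_disk(p, r) < 0.32               # bounded: the approach is conical
```

We now return to the proof, with m fixed as above.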
Since x_k→ z as k→ +∞, we can find a sequence (x'_k)_k∈ such that x'_k∈ [x_0,x_k) for all k and x'_k→ y_m. In particular, for large enough k we have d_Ω(x'_k,y_m)<1. Using the triangle inequality, we obtaind_Ω(x_k,γ_m· x_0)≤d_Ω(x_k,x'_k) + d_Ω(x'_k,y_m) + d_Ω(y_m,γ_m· x_0) = d_Ω(x_k,x_0) - d_Ω(x_0,x'_k) + d_Ω(x'_k,y_m) + d_Ω(y_m,γ_m· x_0)≤d_Ω(x_k,x_0) - d_Ω(x_0,y_m) + 2 d_Ω(x'_k,y_m) + d_Ω(y_m,γ_m· x_0) < d_Ω(x_k,x_0) - (R+2) + 2 + R=d_Ω(x_k,x_0).This contradicts the fact that x_k∈𝒟.(<ref>) Apply (<ref>) with =_Ω(Γ).§ CONVEX SETS WITH BISATURATED BOUNDARY AND DUALITYIn this section we interpret convex cocompactness in terms of convex sets with bisaturated boundary (Definition <ref>), and use this to prove that convex cocompactness is stable under duality (property <ref> of Theorem <ref>).More precisely, in Sections <ref> and <ref> we establish the implications (<ref>) ⇒ (<ref>) and (<ref>) ⇒ (<ref>) of Theorem <ref>; we prove (Corollaries <ref> and <ref>) that when these conditions hold, sets Ω as in condition (<ref>) and _𝖻𝗂𝗌𝖺𝗍 as in condition (<ref>) may be chosen so that the full orbital limit set _Ω(Γ) of Γ in Ω coincides with the ideal boundary _𝖻𝗂𝗌𝖺𝗍 of _𝖻𝗂𝗌𝖺𝗍. In Section <ref> we study a notion of duality for properly convex sets which are not necessarily open. In Sections <ref> and <ref>, we prove Proposition <ref>, which is a more precise version of Theorem <ref>.<ref>. Finally, in Section <ref> we describe on which convex sets with bisaturated boundary the action of a given group is properly discontinuous and cocompact, when such sets exist. §.§ Bisaturated boundary for neighborhoods of the convex coreLet us prove the implication (<ref>) ⇒ (<ref>) of Theorem <ref>. We start with the following observation.Let Γ be a discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). Let _0⊂_1 be two nonempty closed properly convex subsets of Ω on which Γ acts cocompactly. Suppose that _1=_0 and that _1 contains a neighborhood of _0 in Ω.Then _1 has bisaturated boundary.By cocompactness, there exists ε>0 such that any point of _1 is at d_Ω-distance ≥ε from _0. Let H be a supporting hyperplane of _1 and suppose for contradiction that H contains both a point x ∈_1 and a point z ∈_1. Since _1 is closed in (V) by Remark <ref>, we may assume without loss of generality that the interval [x,z) is contained in _1. Consider a sequence y_m ∈ [x,z) converging to z. Since Γ acts cocompactly on _1, there is a sequence (γ_m) ∈Γ^ such that γ_m· y_m remains in some compact subset of _1. Up to taking a subsequence, we may assume that γ_m· x→ x_∞ and γ_m· y_m→ y_∞ and γ_m· z→ z_∞ for some x_∞,y_∞,z_∞∈_1. We have x_∞∈_1 since the action of Γ on _1 is properly discontinuous, y_∞∈_1 since γ_m· y_m remains in some compact subset of _1, and z_∞∈_1 since _1 is closed. The point y_∞∈_1 belongs to the convex hull of {x_∞, z_∞}⊂_0, hence y_∞∈_0. This contradicts the fact that y_∞∈_1 is at distance ≥ε from _0. The implication (<ref>) ⇒ (<ref>) of Theorem <ref> is contained in the following consequence of Corollary <ref> and Lemma <ref>.Let Γ be an infinite discrete subgroup of (V) acting convex cocompactly (Definition <ref>) on some nonempty properly convex open subset Ω of (V). Letbe a closed convex subset of Ω on which Γ acts cocompactly and which contains a neighborhood of _Ω(Γ) (a closed uniform neighborhood of _Ω(Γ) in (Ω,d_Ω), see Lemma <ref>). Thenis a properly convex subset of (V) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly. 
Moreover, = _Ω(Γ) = _Ω(Γ).Since the convex setis contained in Ω, it is properly convex and Γ acts properly discontinuously and cocompactly on it. By Corollary <ref>.(<ref>), we have =_Ω(Γ)=_Ω(Γ). By Lemma <ref> with _0=_Ω(Γ) and _1=, the sethas bisaturated boundary.§.§ Convex cocompact actions on the interior of convex sets with bisaturated boundaryIn this section we prove the implication (<ref>) ⇒ (<ref>) of Theorem <ref>. We first make the following general observations.Letbe a properly convex subset of (V) with bisaturated boundary. Assumeis not empty.Then * has nonempty interior ();*the convex hull ofin  is contained in () wheneveris closed in (V). (<ref>)If the interior of  were empty, thenwould be contained in a hyperplane. Sincehas bisaturated boundary,would be equal either to its ideal boundary (hence empty) or to its nonideal boundary (hence closed).(<ref>) Supposeis closed in (V), hence compact. Sincehas bisaturated boundary, every supporting hyperplane of  at a point x ∈ misses , hence by compactness ofthere is a hyperplane strictly separating x from(in an affine chart containing ). It follows that the convex hull ofin  does not meet , hence is contained in (). The implication (<ref>) ⇒ (<ref>) of Theorem <ref> is contained in the following consequence of Lemma <ref>, Corollary <ref>, and Lemma <ref>.Let Γ be an infinite discrete subgroup of (V) acting properly discontinuously and cocompactly on a nonempty properly convex subsetof (V) with bisaturated boundary. Then Ω:=() is nonempty and Γ acts convex cocompactly (Definition <ref>) on Ω. Moreover, _Ω(Γ) =, = Ω∖_Ω(Γ).Since Γ is infinite,is not closed in (V), and so Ω=() is nonempty by Lemma <ref>.(<ref>). The setis closed in (V) by Lemma <ref>, and so the convex hull _0 ofin  is closed in  and contained in Ω by Lemma <ref>.(<ref>). The action of Γ on _0 is still cocompact. Since Γ acts properly discontinuously on , the set _Ω(Γ) is contained in , and _Ω(Γ) is contained in _0. By Corollary <ref>.(<ref>), the group Γ acts convex cocompactly on Ω and _Ω(Γ)=_0=.§.§ The dual of a properly convex setGiven an open properly convex set Ω⊂(V), there is a notion of dual convex set Ω^* ⊂(V^*) which is very useful in the study of divisible convex sets: see Section <ref>. We generalize this notion here to properly convex sets with possibly nonempty nonideal boundary.Letbe a properly convex subset of (V) with nonempty interior, but not necessarily open nor closed. The dual ^*⊂(V^*) of  is the set of elements of (V^*) which, viewed as projective hyperplanes in (V), do not meet ()∪, those hyperplanes meetin a (possibly empty) subset of . The set ^* is convex in (V^*). Indeed, ^* = ()^* implies that ^* is convex, and if H,H”,H'∈^* are three distinct points aligned in this order with H,H'∈⁠^*, and if we view them as projective hyperplanes in (V), then H”∩⊂ (H∩ H') ∩⊂, hence H”∈^*.By construction, * (^*) is the set of projective hyperplanes in (V) that miss , () and (^*) are dual in the usual sense for open convex sets, as in (<ref>);* ^* is the set of projective hyperplanes in (V) whose intersection with  is a nonempty subset of(such a hyperplane missesifhas bisaturated boundary);* ^* is the set of supporting projective hyperplanes ofat points of(such a hyperplane missesifhas bisaturated boundary).Letbe a properly convex subset of (V), not necessarily open nor closed, but with bisaturated boundary and with nonempty interior. 
Then
* the dual 𝒞^* has bisaturated boundary;
* the bidual (𝒞^*)^* coincides with 𝒞 (after identifying (V^*)^* with V);
* the dual 𝒞^* has a PET (properly embedded triangle, Definition <ref>) if and only if 𝒞 does.

(<ref>) By Remark <ref>, since 𝒞 has bisaturated boundary, ∂_i𝒞^* (resp. ∂_n𝒞^*) is the set of projective hyperplanes in ℙ(V) whose intersection with 𝒞̄ is a nonempty subset of ∂_i𝒞 (resp. of ∂_n𝒞). In particular, a point of ∂_i𝒞^* and a point of ∂_n𝒞^*, seen as projective hyperplanes of ℙ(V), can only meet outside of 𝒞̄. This means exactly that a supporting hyperplane of 𝒞^* in ℙ(V^*) cannot meet both ∂_i𝒞^* and ∂_n𝒞^*.

(<ref>) By definition, (𝒞^*)^* is the set of hyperplanes of ℙ(V^*) missing Int(𝒞^*) ∪ ∂_n𝒞^*. Viewing hyperplanes of ℙ(V^*) as points of ℙ(V), by Remark <ref> the set (𝒞^*)^* consists of those points of ℙ(V) not belonging to any hyperplane that misses 𝒞̄, nor to any supporting hyperplane at a point of ∂_n𝒞. Since 𝒞 has bisaturated boundary, this set is 𝒞̄ ∖ ∂_i𝒞, namely 𝒞.

(<ref>) By (<ref>) and (<ref>), it is enough to prove one implication. Suppose 𝒞 has a PET contained in a two-dimensional projective plane P, i.e. 𝒞 ∩ P = T is an open triangle. Let H_1, H_2, H_3 be projective hyperplanes of ℙ(V) supporting 𝒞 and containing the edges E_1, E_2, E_3 ⊂ ∂_i𝒞 of T. For 1 ≤ k ≤ 3, the supporting hyperplane H_k intersects 𝒞̄, hence lies in the ideal boundary ∂_i𝒞^* of the dual convex set. Since 𝒞^* has bisaturated boundary by (<ref>), the whole edge [H_k,H_{k'}] ⊂ ∂𝒞^* is contained in ∂_i𝒞^* for 1 ≤ k < k' ≤ 3. Hence H_1, H_2, H_3 span a 2-plane Q ⊂ ℙ(V^*) whose intersection with 𝒞^* is a PET of 𝒞^*.

§.§ Proper and cocompact actions on the dual

The following is the key ingredient in Theorem <ref>.<ref>.

Let Γ be a discrete subgroup of PGL(V) and 𝒞 a Γ-invariant convex subset of ℙ(V) with bisaturated boundary. Suppose 𝒞 has nonempty interior, so that the dual 𝒞^* is well defined. Then the action of Γ on 𝒞 is properly discontinuous and cocompact if and only if the action of Γ on 𝒞^* is.

Recall from Corollary <ref> that if Γ acts properly discontinuously and cocompactly on 𝒞, then 𝒞 automatically has nonempty interior.

In order to prove Proposition <ref>, we assume n = dim(V) ≥ 2 and first make some definitions. Consider a properly convex open subset Ω of ℙ(V). For any distinct points x, y ∈ ℙ(V) ∖ ∂Ω, at least one of which is in Ω, the line L through x and y intersects ∂Ω in two points a, b, and we set

δ_Ω(x,y) := max{ [a,x,y,b], [b,x,y,a] },

where [·,·,·,·] is the cross-ratio, normalized so that [0,1,t,∞] = t. If x, y ∈ Ω, then δ_Ω(x,y) = exp(2 d_Ω(x,y)) > 1, where d_Ω is the Hilbert metric on Ω (see Section <ref>). However, if x ∈ Ω and y ∈ ℙ(V) ∖ Ω̄, then we have −1 ≤ δ_Ω(x,y) < 0. For any point x ∈ Ω and any projective hyperplane H ∈ Ω^* (disjoint from Ω), we set

δ_Ω(x,H) := max_{y∈H} δ_Ω(x,y).

Then δ_Ω(x,H) ∈ [−1,0), and it is close to 0 when H is "close" to ∂Ω as seen from x.

Let Ω be a nonempty properly convex open subset of ℙ(V) = ℙ(ℝ^n). For any H ∈ Ω^*, there exists x ∈ Ω such that δ_Ω(x,H) ≤ −1/(n−1).

This lemma is classical: see <cit.>, which gives a proof using Helly's theorem. The original result goes back to Radon <cit.>. We include a proof for convenience.

Fix H ∈ Ω^* and consider an affine chart ℝ^{n−1} of ℙ(V) for which H is at infinity, endowed with a Euclidean norm ‖·‖. We take for x the center of mass of Ω in this affine chart with respect to the Lebesgue measure. It is enough to show that if a, b ∈ ∂Ω satisfy x ∈ [a,b], then ‖x−a‖/‖b−a‖ ≤ (n−1)/n. Up to translation, we may assume a = 0 ∈ ℝ^{n−1}. Let φ be a linear form on ℝ^{n−1} such that φ(b) = 1 = sup_Ω φ. Let h := φ(x), so that ‖x−a‖/‖b−a‖ = h, and let Ω' := ℝ_{>0}·(Ω ∩ φ^{−1}(h)) ∩ φ^{−1}((−∞,1)) (see Figure <ref>). The average value 𝔼_Ω(φ) of φ on Ω, for the Lebesgue measure, is 𝔼_Ω(φ) = φ(x) = h, since x is the center of mass of Ω. Moreover, 𝔼_{Ω'}(φ) ≥ 𝔼_Ω(φ) since by convexity

Ω' ∩ φ^{−1}((−∞,h]) ⊂ Ω ∩ φ^{−1}((−∞,h])  and  Ω' ∩ φ^{−1}([h,1)) ⊃ Ω ∩ φ^{−1}([h,1)).

But 𝔼_{Ω'}(φ) = (n−1)/n, since Ω' is a truncated open cone in ℝ^{n−1}. Thus h ≤ (n−1)/n.
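The proof just given reduces the lemma to the elementary fact that the center of mass of a bounded convex domain divides every chord through it in ratio at most (n−1)/n. As an illustrative aside (all function names and tolerances below are our own), this is easy to test numerically: here the domain is a planar triangle, so that n = dim V = 3 and the bound 2/3 is attained along the medians.

```python
import numpy as np

def centroid(P):
    # Area-weighted center of mass of a convex polygon (vertices listed CCW).
    x, y = P[:, 0], P[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    w = x * yn - xn * y
    A = w.sum() / 2.0
    return np.array([((x + xn) * w).sum(), ((y + yn) * w).sum()]) / (6.0 * A)

def hit(P, p, u):
    # Distance from p to the boundary of the polygon P in direction u.
    best = np.inf
    for i in range(len(P)):
        q, r = P[i], P[(i + 1) % len(P)]
        M = np.column_stack([u, q - r])
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        t, s = np.linalg.solve(M, q - p)
        if t > 1e-12 and -1e-9 <= s <= 1 + 1e-9:
            best = min(best, t)
    return best

T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # Omega: an open triangle
c = centroid(T)                                     # the point x of the proof
worst = 0.0
for th in np.linspace(0.0, np.pi, 600, endpoint=False):
    u = np.array([np.cos(th), np.sin(th)])
    f, b = hit(T, c, u), hit(T, c, -u)              # the chord [a, b] through c
    worst = max(worst, max(f, b) / (f + b))         # = ||x - a|| / ||b - a||
print(worst)                                        # -> 2/3, never exceeded
assert worst <= 2.0 / 3.0 + 1e-9
```

We now turn to the proof of Proposition <ref>.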
By Lemma <ref>, the set 𝒞^* has bisaturated boundary and (𝒞^*)^* = 𝒞. Thus it is enough to prove that if the action on 𝒞 is properly discontinuous and cocompact, then so is the action on 𝒞^*.

Let us begin with properness. Recall from Lemma <ref> that the set ∂_i𝒞 is closed in ℙ(V). Let 𝒞_0 be the convex hull of ∂_i𝒞 in 𝒞̄. Note that any supporting hyperplane of 𝒞_0 contains a point of ∂_i𝒞: otherwise it could be separated from ∂_i𝒞 by a hyperplane, contradicting the definition of 𝒞_0. By Lemma <ref>.(<ref>), we have 𝒞_0 ⊂ Ω := Int(𝒞). Let 𝒞_1 be the closed uniform 1-neighborhood of 𝒞_0 in (Ω,d_Ω). It is properly convex <cit.> (see Lemma <ref>), with nonempty interior, and 𝒞_1 ∩ ∂_n𝒞 = ∅. Taking the dual, we obtain that (𝒞_1)^* is a Γ-invariant properly convex open set containing 𝒞^*. In particular, the action of Γ on 𝒞^* ⊂ (𝒞_1)^* is properly discontinuous (see Section <ref>).

Let us show that the action of Γ on 𝒞^* is cocompact. Let Ω = Int(𝒞) and Ω^* = Int(𝒞^*). Let 𝒟 ⊂ 𝒞 be a compact fundamental domain for the action of Γ on 𝒞. Consider the following subset of Ω^*, where n = dim(V):

𝒰^* = ⋃_{x∈𝒟∩Ω} { H ∈ Ω^* | δ_Ω(x,H) ≤ −1/(n−1) }.

It follows from Lemma <ref> that Γ·𝒰^* = Ω^*.

We claim that 𝒰̄^* ⊂ 𝒞^*. To see this, suppose a sequence of elements H_m ∈ 𝒰^* converges to some H ∈ 𝒰̄^*; let us show that H ∈ 𝒞^*. If H ∈ Ω^* there is nothing to prove, so we may assume that H ∈ ∂Ω^* is a supporting hyperplane of Ω at some point y ∈ ∂Ω. For every m, let x_m ∈ 𝒟∩Ω satisfy δ_Ω(x_m,H_m) ≤ −1/(n−1). Up to passing to a subsequence, we may assume x_m → x ∈ 𝒟 ⊂ 𝒞. Let y_m ∈ H_m be such that y_m → y. For every m, let a_m, b_m ∈ ∂Ω be such that a_m, x_m, b_m, y_m are aligned in this order. Then

−1/(n−1) ≥ δ_Ω(x_m,y_m) ≥ [b_m, x_m, y_m, a_m] ≥ −‖b_m − y_m‖ / ‖b_m − x_m‖,

where ‖·‖ is a fixed Euclidean norm on an affine chart containing Ω̄. Since ‖b_m − y_m‖ → 0, we deduce ‖b_m − x_m‖ → 0, and so x = y ∈ 𝒞 ∩ H ⊂ ∂_n𝒞. Since 𝒞 has bisaturated boundary, we must have H ∈ 𝒞^*, by definition of 𝒞^*. Therefore 𝒰̄^* ⊂ 𝒞^*.

Since 𝒰̄^* is compact and the action on 𝒞^* is properly discontinuous, the fact that Γ·𝒰^* = Ω^* yields Γ·𝒰̄^* = 𝒞^*.

§.§ Proof of Theorem <ref>.<ref>

We establish the following more precise result, which implies Theorem <ref>.<ref>.

Let Γ be an infinite discrete subgroup of PGL(V) which is convex cocompact in ℙ(V). Then there exists a nonempty properly convex open subset Ω of ℙ(V) such that Γ acts convex cocompactly on both Ω and Ω^*. Such sets Ω are exactly the interiors of the nonempty properly convex subsets of ℙ(V) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly.

Let 𝒞 be a nonempty properly convex subset of ℙ(V) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly: such a 𝒞 exists by Corollary <ref>. By Lemma <ref>.(<ref>), the dual 𝒞^* ⊂ ℙ(V^*) has bisaturated boundary, and Γ acts properly discontinuously and cocompactly on 𝒞^* by Proposition <ref>. By construction (see Remark <ref>), the dual Ω^* of Ω := Int(𝒞) satisfies Ω^* = Int(𝒞^*). By Corollary <ref>, the set Ω (resp. Ω^*) is a nonempty properly convex open subset of ℙ(V) (resp. of ℙ(V^*)) on which Γ acts convex cocompactly, and 𝒞 = Ω̄ ∖ Λ^orb_Ω(Γ) (resp. 𝒞^* = Ω̄^* ∖ Λ^orb_{Ω^*}(Γ)).

Conversely, let Ω be a nonempty properly convex open subset of ℙ(V) such that the action of Γ on both Ω and Ω^* is convex cocompact. Let us check that 𝒞 := Ω̄ ∖ Λ^orb_Ω(Γ) is a properly convex set with bisaturated boundary on which Γ acts properly discontinuously and cocompactly.
Let '^* be a closed uniform neighborhood of _Ω^*(Γ) in (Ω^*,d_Ω^*). By Corollary <ref>, the set '^* is a properly convex subset of (V^*) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly. By Lemma <ref>.(<ref>) and Proposition <ref>, the dual ' := ('^*)^* is a properly convex subset of (V) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly. By Corollary <ref> and Proposition <ref>, the group Γ acts convex cocompactly on Ω' := (') and ' = _Ω'(Γ) = _Ω(Γ) =. In particular,is contained and closed in ', hence Γ also acts properly discontinuously and cocompactly on . Since ^* ∩'^* = ∅, Remark <ref> implies that ∩' = ∅, and sois contained Ω' = ('). Moreover, Ω is a neighborhood in Ω' of _Ω'(Γ) = _Ω(Γ). Sincecontains Ω, Corollary <ref> implies thathas bisaturated boundary.§.§ On which convex sets with bisaturated boundary is the action properly discontinuous and cocompact?Here is a consequence of Corollaries <ref>, <ref>, and <ref>.Let Γ be an infinite discrete subgroup of (V) acting convex cocompactly on a nonempty properly convex open subset Ω of (V). Suppose that Γ contains a proximal element, so that the maximal convex open set Ω_max⊃Ω of Proposition <ref> is well defined, and suppose that Ω_max is properly convex (this is always the case if Γ acts irreducibly on (V), see Fact <ref>). Then the nonempty properly convex subsets of Ω_max with bisaturated boundary on which Γ acts properly discontinuously and cocompactly are exactly the closed convex subsets of Ω_max that are nested between two open uniform neighborhoods of _Ω_max(Γ) in (Ω_max,d_Ω_max).We first observe that, by Corollary <ref>.(<ref>), the action of Γ on Ω_max is convex cocompact.Ifis a closed convex subset of Ω_max that is nested between two open uniform neighborhoods of _Ω_max(Γ) in (Ω_max,d_Ω_max), then Corollary <ref> yields thatis a properly convex set with bisaturated boundary on which Γ acts properly discontinuously and cocompactly.Conversely, letbe a nonempty properly convex subset of Ω_max with bisaturated boundary on which Γ acts properly discontinuously and cocompactly.First, we show thatis contained in Ω_max. For this we observe that by Lemma <ref>.(<ref>) and Proposition <ref>, the dual ^* is a properly convex subset of (V^*) with bisaturated boundary on which Γ acts properly discontinuously and cocompactly. In particular, ^* has nonempty interior by Lemma <ref>.(<ref>), and so Λ^*_Γ⊂_(^*)(Γ) ⊂^* by Lemma <ref>. In other words, by Remark <ref>, any element of Λ^*_Γ, seen as a hyperplane in (V), is a supporting hyperplane to Ω_max at a point of , hence missessincehas bisaturated boundary. From this and from the definition of Ω_max (see Proposition <ref>), we deduce that ∂Ω_max∩ = ∅, and sois contained in Ω_max.By Corollary <ref>, the properly convex open set Ω':=() is nonempty and Γ acts convex cocompactly on it. By Corollary <ref>.(<ref>), the set Ω' contains _Ω_max, and socontains a neighborhood of _Ω_max in Ω_max.By considering a compact fundamental domain for the action of Γ on _Ω_max, we see that Ω' (hence ) actually contains an open uniform neighborhood of _Ω_max(Γ) in (Ω_max,d_Ω_max), and thatis contained in an open uniform neighborhood of _Ω_max(Γ) in (Ω_max,d_Ω_max).§ SEGMENTS IN THE FULL ORBITAL LIMIT SETIn this section we establish the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> in Theorem <ref>. 
For this it will be helpful to introduce two intermediate conditions, weaker than <ref> but stronger than <ref>:
* Γ is convex cocompact in ℙ(V) and, for some nonempty properly convex open set Ω on which Γ acts convex cocompactly, Λ^orb_Ω(Γ) does not contain a nontrivial projective line segment;
* Γ is convex cocompact in ℙ(V) and, for any nonempty properly convex open set Ω on which Γ acts convex cocompactly, 𝒞^cor_Ω(Γ) does not contain a PET.

The implications <ref> ⇒ <ref>, <ref> ⇒ <ref>, <ref> ⇒ <ref>, and <ref> ⇒ <ref> are trivial. The implication <ref> ⇒ <ref> holds by Lemma <ref> below. In Section <ref> we simultaneously prove <ref> ⇒ <ref> and <ref> ⇒ <ref>. In Section <ref> we prove <ref> ⇒ <ref>. In Section <ref> we prove <ref> ⇔ <ref>. For the reader's convenience, we summarize the scheme of the proof (rendering, as a list, the boxed diagram of the original layout):
* trivial implications: (for any cc Ω, Λ^orb_Ω ⊅ segment) ⟹ (for some cc Ω, Λ^orb_Ω ⊅ segment) ⟹ (for some cc Ω, 𝒞^cor_Ω ⊅ PET), and (for any cc Ω, Λ^orb_Ω ⊅ segment) ⟹ (for any cc Ω, 𝒞^cor_Ω ⊅ PET) ⟹ (for some cc Ω, 𝒞^cor_Ω ⊅ PET);
* by Lemma <ref> below: (for any cc Ω, 𝒞^cor_Ω ⊅ PET) ⟹ (for any cc Ω, Λ^orb_Ω ⊅ segment), and (for some cc Ω, 𝒞^cor_Ω ⊅ PET) ⟹ (for some cc Ω, Λ^orb_Ω ⊅ segment);
* proved in Sections <ref> and <ref>: (for some cc Ω, Λ^orb_Ω ⊅ segment) ⟹ (Γ convex cocompact and Γ word hyperbolic), and (for some cc Ω, Λ^orb_Ω ⊅ segment) ⟺ (there exists a cocompact 𝒞 with nonempty interior such that ∂_i𝒞 ⊅ segment);
* proved in Section <ref>: (Γ convex cocompact and Γ word hyperbolic) ⟹ (for any cc Ω, 𝒞^cor_Ω ⊅ PET).

§.§ PETs obstruct hyperbolicity

We start with an elementary remark.

Let Γ be an infinite discrete subgroup of PGL(V) preserving a properly convex open subset Ω of ℙ(V) and acting cocompactly on a closed convex subset 𝒞 of Ω. If 𝒞 contains a PET (Definition <ref>), then Γ is not word hyperbolic.

The classical Švarc–Milnor lemma states that a finitely generated group is quasi-isometric to any proper, geodesic metric space on which it acts properly discontinuously and cocompactly by isometries. Hence Γ is quasi-isometric to the metric space (𝒞, d_Ω). A PET in Ω is totally geodesic for the Hilbert metric d_Ω and is quasi-isometric to the Euclidean plane. Thus, if 𝒞 contains a PET, then the metric space (𝒞, d_Ω) is not Gromov hyperbolic, and therefore Γ is not word hyperbolic.
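As an illustrative aside, the obstruction in the remark above can be observed numerically: since a PET is totally geodesic, it suffices to look at the Hilbert geometry of an open triangle itself, which is quasi-isometric to the Euclidean plane. The sketch below (our own toy configuration; not a proof) builds geodesic triangles with vertices pushed toward the three vertices of the PET and checks that the distance from the midpoint of one side to the union of the other two sides grows without bound — so triangles are not uniformly thin and Gromov hyperbolicity fails.

```python
import numpy as np

TRI = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # the model PET

def ray_hit(p, u):
    # distance from p to the boundary of TRI in direction u
    best = np.inf
    for i in range(3):
        q, r = TRI[i], TRI[(i + 1) % 3]
        M = np.column_stack([u, q - r])
        if abs(np.linalg.det(M)) < 1e-13:
            continue
        t, s = np.linalg.solve(M, q - p)
        if t > 1e-12 and -1e-9 <= s <= 1 + 1e-9:
            best = min(best, t)
    return best

def d_T(p, q):
    # Hilbert metric of the triangle via the cross-ratio of chord endpoints
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.allclose(p, q):
        return 0.0
    u = (q - p) / np.linalg.norm(q - p)
    tb, ta = ray_hit(p, u), ray_hit(p, -u)     # boundary points beyond q / behind p
    e = np.linalg.norm(q - p)
    return 0.5 * np.log(((ta + e) * tb) / (ta * (tb - e)))

def seg(p, q, n=400):
    return [p + s * (q - p) for s in np.linspace(0.01, 0.99, n)]

c, vals = np.array([1 / 3, 1 / 3]), []
for k in [1.0, 2.0, 3.0, 4.0]:
    # three points pushed toward the three vertices of the PET
    A = c + (1 - np.exp(-k)) * (np.array([0.92, 0.04]) - c)
    B = c + (1 - np.exp(-k)) * (np.array([0.04, 0.92]) - c)
    C = c + (1 - np.exp(-k)) * (np.array([0.04, 0.04]) - c)
    m = 0.5 * (A + B)                          # a point on the side [A, B]
    thin = min(d_T(m, x) for x in seg(A, C) + seg(C, B))
    vals.append(thin)
    print(k, round(thin, 3))                   # grows with k: triangles are fat
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))
```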
§.§ Segments in the full orbital limit set yield PETs in the convex core

The implications <ref> ⇒ <ref> and <ref> ⇒ <ref> in Theorem <ref> will be a consequence of the following lemma, which is similar to <cit.> but without the divisibility assumption and without the restriction to dimension 3; our proof is different, and close to <cit.>.

Let Γ be an infinite discrete subgroup of PGL(V). Let Ω be a nonempty Γ-invariant properly convex open subset of ℙ(V). Suppose Γ acts cocompactly on some nonempty closed convex subset 𝒞 of Ω. If ∂_i𝒞 contains a nontrivial segment which is inextendable in ∂Ω, then 𝒞 contains a PET.

The ideal boundary ∂_i𝒞 is closed in ℙ(V) by Remark <ref>. Suppose ∂_i𝒞 contains a nontrivial segment [a,b] which is inextendable in ∂Ω. Let c ∈ 𝒞, and consider a sequence of points x_m ∈ 𝒞 lying inside the open triangle with vertices a, b, c and converging to a point x ∈ (a,b).

We claim that the d_Ω-distance from x_m to either of the projective intervals (a,c] and (b,c] tends to infinity with m. Indeed, consider a sequence (y_m)_m of points of (a,c] converging to some y ∈ [a,c], and let us check that d_Ω(x_m,y_m) → +∞ (the proof for (b,c] is the same). If y ∈ (a,c], then y ∈ 𝒞, and so d_Ω(x_m,y_m) → +∞ by properness of the Hilbert metric. Otherwise y = a. In that case, for each m, consider x'_m, z'_m ∈ ∂Ω such that x'_m, x_m, y_m, z'_m are aligned in this order. Up to taking a subsequence, we may assume x'_m → x' and z'_m → z' for some x', z' ∈ ∂Ω, with x', x, a, z' aligned in this order. By inextendability of [a,b] in ∂Ω, we must have z' = a, hence d_Ω(x_m,y_m) → +∞ in this case as well, proving the claim.

Since the action of Γ on 𝒞 is cocompact, there is a sequence (γ_m) ∈ Γ^ℕ such that γ_m·x_m remains in a fixed compact subset of 𝒞. Up to passing to a subsequence, we may assume that (γ_m·x_m)_m converges to some x_∞ ∈ 𝒞, and that (γ_m·a)_m, (γ_m·b)_m, and (γ_m·c)_m converge respectively to some a_∞, b_∞, c_∞ ∈ 𝒞̄, with [a_∞,b_∞] ⊂ ∂_i𝒞 since ∂_i𝒞 is closed (Remark <ref>). The triangle with vertices a_∞, b_∞, c_∞ is nondegenerate, since it contains x_∞ ∈ 𝒞. Further, x_∞ is infinitely far (for the Hilbert metric on Ω) from the edges [a_∞,c_∞] and [b_∞,c_∞], and so these edges are fully contained in ∂Ω. Thus the triangle with vertices a_∞, b_∞, c_∞ is a PET of 𝒞.

We prove the contrapositive. Suppose Γ ⊂ PGL(V) acts convex cocompactly on the properly convex open set Ω ⊂ ℙ(V) and Λ^orb_Ω(Γ) contains a nontrivial segment. By Corollary <ref>.(<ref>), the set Λ^orb_Ω(Γ) is equal to ∂_i𝒞^cor_Ω(Γ) and closed in ℙ(V), and it contains the open face of ∂Ω at any interior point of that segment; in particular, Λ^orb_Ω(Γ) contains a nontrivial segment which is inextendable in ∂Ω. By Lemma <ref> applied to 𝒞 = 𝒞^cor_Ω(Γ), the set 𝒞^cor_Ω(Γ) contains a PET.

§.§ Word hyperbolicity of the group in the absence of segments

In this section we prove the implication <ref> ⇒ <ref> in Theorem <ref>. We proceed exactly as in <cit.>, with arguments inspired from <cit.>. Recall that any geodesic ray of (𝒞,d_Ω) has a well-defined endpoint in ∂_i𝒞 (see <cit.> or <cit.>). It is sufficient to apply the following general result to 𝒞 = 𝒞^cor_Ω(Γ).

Let Γ be a discrete subgroup of PGL(V). Let Ω be a nonempty Γ-invariant properly convex open subset of ℙ(V), with Hilbert metric d_Ω. Suppose Γ acts cocompactly on some nonempty closed convex subset 𝒞 of Ω such that ∂_i𝒞 does not contain any nontrivial projective line segment. Then
* there exists R > 0 such that any geodesic ray of (𝒞,d_Ω) lies at Hausdorff distance ≤ R from the projective interval with the same endpoints;
* the metric space (𝒞,d_Ω) is Gromov hyperbolic, with Gromov boundary Γ-equivariantly homeomorphic to ∂_i𝒞;
* the group Γ is word hyperbolic, and any orbit map Γ → (𝒞,d_Ω) is a quasi-isometric embedding which extends to a Γ-equivariant homeomorphism ξ : ∂_∞Γ → ∂_i𝒞;
* if 𝒞 contains 𝒞^cor_Ω(Γ) (hence Γ acts convex cocompactly on Ω, see Corollary <ref>.(<ref>)), then any orbit map Γ → (Ω,d_Ω) is a quasi-isometric embedding and extends to a Γ-equivariant homeomorphism ξ : ∂_∞Γ → Λ^orb_Ω(Γ) = ∂_i𝒞 which is independent of the orbit.

(<ref>) Suppose by contradiction that for any m ∈ ℕ there is a geodesic ray 𝒢_m of (𝒞,d_Ω) with endpoints a_m ∈ 𝒞 and b_m ∈ ∂_i𝒞, and a point y_m on that geodesic which lies at distance ≥ m from the projective interval [a_m,b_m). By cocompactness of the action of Γ on 𝒞, for any m ∈ ℕ there exists γ_m ∈ Γ such that γ_m·y_m belongs to a fixed compact subset of 𝒞. Up to taking a subsequence, (γ_m·y_m)_m converges to some y_∞ ∈ 𝒞, and (γ_m·a_m)_m and (γ_m·b_m)_m converge respectively to some a_∞ ∈ 𝒞̄ and b_∞ ∈ ∂_i𝒞. Since the distance from y_m to [a_m,b_m) goes to infinity, we have [a_∞,b_∞] ⊂ ∂_i𝒞, hence a_∞ = b_∞ since ∂_i𝒞 does not contain a segment. Therefore, up to extracting, the geodesics 𝒢_m converge to a biinfinite geodesic of (Ω,d_Ω) with both endpoints equal. But such a geodesic does not exist (see <cit.> or <cit.>): contradiction.

(<ref>) Suppose by contradiction that triangles of (𝒞,d_Ω) are not uniformly thin.
By (<ref>), triangles of (𝒞,d_Ω) whose sides are projective line segments are not uniformly thin: namely, there exist a_m, b_m, c_m ∈ 𝒞 and y_m ∈ [a_m,b_m] such that

d_Ω(y_m, [a_m,c_m] ∪ [c_m,b_m]) → +∞ as m → +∞.

By cocompactness, for any m there exists γ_m ∈ Γ such that γ_m·y_m belongs to a fixed compact subset of 𝒞. Up to taking a subsequence, (γ_m·y_m)_m converges to some y_∞ ∈ 𝒞, and (γ_m·a_m)_m, (γ_m·b_m)_m, and (γ_m·c_m)_m converge respectively to some a_∞, b_∞, c_∞ ∈ 𝒞̄. By (<ref>) we have [a_∞,c_∞] ∪ [c_∞,b_∞] ⊂ ∂_i𝒞, hence a_∞ = b_∞ = c_∞ since ∂_i𝒞 does not contain any nontrivial projective line segment. This contradicts the fact that y_∞ ∈ (a_∞,b_∞). Therefore (𝒞,d_Ω) is Gromov hyperbolic.

Fix a basepoint y ∈ 𝒞. The Gromov boundary of (𝒞,d_Ω) is the set of equivalence classes of infinite geodesic rays in 𝒞 starting at y, for the equivalence relation "to remain at bounded distance for d_Ω". Consider the Γ-equivariant continuous map from ∂_i𝒞 to this Gromov boundary sending z ∈ ∂_i𝒞 to the class of the geodesic ray from y to z. This map is clearly surjective, since any infinite geodesic ray in 𝒞 terminates at the ideal boundary ∂_i𝒞. Moreover, it is injective: the nonexistence of line segments in ∂_i𝒞 means that no two points of ∂_i𝒞 lie in a common face of ∂Ω, hence the Hilbert distance between rays going out to two different points of ∂_i𝒞 goes to infinity (see Lemma <ref>.(<ref>)).

(<ref>) The group Γ acts properly discontinuously and cocompactly, by isometries, on the proper, geodesic metric space (𝒞,d_Ω), which is Gromov hyperbolic with Gromov boundary ∂_i𝒞 by (<ref>) above. We apply the Švarc–Milnor lemma.

(<ref>) Suppose 𝒞 contains 𝒞^cor_Ω(Γ). Consider a point y ∈ Ω. It lies in a closed uniform neighborhood 𝒞_y of 𝒞 in (Ω,d_Ω), which is a properly convex subset of Ω on which the action of Γ is also cocompact (Lemma <ref>). By Corollary <ref>.(<ref>), we have ∂_i𝒞_y = ∂_i𝒞 = Λ^orb_Ω(Γ), and this set does not contain any nontrivial segment by assumption. By (<ref>), the orbit map Γ → (𝒞_y,d_Ω) associated with y is a quasi-isometry which extends to a Γ-equivariant homeomorphism ∂_∞Γ → ∂_i𝒞_y = ∂_i𝒞. This extension is independent of y, since ∂_i𝒞_y = ∂_i𝒞 does not contain any nontrivial segment (argue as in the proof of Lemma <ref>).

§.§ Absence of segments in ∂_i𝒞 and in Λ^orb_Ω(Γ)

In this section we prove the equivalence <ref> ⇔ <ref>. Suppose that Γ acts convex cocompactly on some nonempty properly convex open subset Ω of ℙ(V) and that Λ^orb_Ω(Γ) does not contain any nontrivial projective line segment. The group Γ acts cocompactly on the closed uniform 1-neighborhood 𝒞 of 𝒞^cor_Ω(Γ) in (Ω,d_Ω), which is properly convex with nonempty interior (Lemma <ref>). By Corollary <ref>.(<ref>), we have ∂_i𝒞 = Λ^orb_Ω(Γ).

The proof of <ref> ⇒ <ref> relies on the following lemma, which is similar to <cit.>.

Let Γ be an infinite discrete subgroup of PGL(V) preserving a properly convex open subset Ω of ℙ(V) and acting cocompactly on some closed convex subset 𝒞 of Ω with nonempty interior. If ∂_i𝒞 does not contain any nontrivial projective segment, then Λ^orb_Ω(Γ) ⊂ ∂_i𝒞 (hence 𝒞^cor_Ω(Γ) ⊂ 𝒞).

Suppose that ∂_i𝒞 does not contain any nontrivial projective segment. Let us show that any point z_∞ = lim_m γ_m·z ∈ Λ^orb_Ω(Γ), where (γ_m) ∈ Γ^ℕ and z ∈ Ω, belongs to ∂_i𝒞. For this, consider two distinct points x, y ∈ Int(𝒞) and a point b ∈ ∂Ω with x, y, z, b aligned in this order. Up to passing to a subsequence, we may assume that for any w ∈ {x,y,b}, the sequence (γ_m·w)_{m∈ℕ} converges to some w_∞ ∈ ∂Ω, with x_∞, y_∞, z_∞, b_∞ aligned in this order (not necessarily distinct). We have [x_∞,y_∞] ⊂ ∂_i𝒞, hence x_∞ = y_∞ since ∂_i𝒞 does not contain any nontrivial segment.
Since [γ_m·x, γ_m·y, γ_m·z, γ_m·b] = [x,y,z,b] ∈ (1,+∞) for all m, and all the segments [γ_m·x, γ_m·b] and [x_∞, b_∞] live in an affine chart containing Ω, we must have x_∞ = y_∞ = z_∞. In particular, z_∞ ∈ ∂_i𝒞.

If Γ preserves a properly convex open subset Ω of ℙ(V) and acts cocompactly on some closed convex subset 𝒞 of Ω with nonempty interior such that ∂_i𝒞 does not contain any nontrivial projective line segment, then 𝒞 contains 𝒞^cor_Ω(Γ) by Lemma <ref>, and so Corollary <ref>.(<ref>) implies that Γ acts convex cocompactly on Ω and that Λ^orb_Ω(Γ) ⊂ ∂_i𝒞 does not contain any nontrivial projective line segment.

§ CONVEX COCOMPACTNESS AND NO SEGMENT IMPLIES P_1-ANOSOV

In this section we continue with the proof of Theorem <ref>. We have already established the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> in Section <ref>. On the other hand, the implication <ref> ⇒ <ref> is trivial.

We now prove the implication <ref> ⇒ <ref> in Theorem <ref>. By the above, this yields the implication <ref> ⇒ <ref> in Theorem <ref>, which is also the implication (<ref>) ⇒ (<ref>) in Theorem <ref>. We build on Lemma <ref>.(<ref>).

§.§ Compatible, transverse, dynamics-preserving boundary maps

Let Γ be an infinite discrete subgroup of PGL(V). Suppose Γ is word hyperbolic and convex cocompact in ℙ(V). By Proposition <ref>, there is a nonempty Γ-invariant properly convex open subset Ω of ℙ(V) such that the actions of Γ on Ω and on its dual Ω^* are both convex cocompact. Our goal is to show that the natural inclusion Γ ↪ PGL(V) is P_1-Anosov.

By the implication <ref> ⇒ <ref> in Theorem <ref> (which we have proved in Section <ref>) and Theorem <ref>.<ref> (which we have proved in Section <ref>), the full orbital limit sets Λ^orb_Ω(Γ) ⊂ ℙ(V) and Λ^orb_{Ω^*}(Γ) ⊂ ℙ(V^*) do not contain any nontrivial projective line segment. Let 𝒞_Ω ⊂ ℙ(V) (resp. 𝒞_{Ω^*} ⊂ ℙ(V^*)) be the convex hull of Λ^orb_Ω(Γ) in Ω (resp. of Λ^orb_{Ω^*}(Γ) in Ω^*). By Lemma <ref>.(<ref>), any orbit map Γ → (𝒞_Ω,d_Ω) (resp. Γ → (𝒞_{Ω^*},d_{Ω^*})) is a quasi-isometry which extends to a Γ-equivariant homeomorphism

ξ : ∂_∞Γ ⟶ Λ^orb_Ω(Γ) ⊂ ℙ(V)  (resp. ξ^* : ∂_∞Γ ⟶ Λ^orb_{Ω^*}(Γ) ⊂ ℙ(V^*)).

We see ℙ(V^*) as the space of projective hyperplanes of ℙ(V).

The boundary maps ξ and ξ^* are compatible, in the sense that for any η ∈ ∂_∞Γ we have ξ(η) ∈ ξ^*(η); more precisely, ξ^*(η) is a supporting hyperplane of Ω at ξ(η).

Let (γ_m)_{m∈ℕ} be a quasi-geodesic ray in Γ with limit η ∈ ∂_∞Γ. For any x ∈ Ω and any H ∈ Ω^*, we have γ_m·x → ξ(η) and γ_m·H → ξ^*(η). Lift x to a vector v ∈ V and lift H to a linear form φ ∈ V^*. Lift the sequence (γ_m) to a sequence γ̂_m ∈ SL^±(V). By Lemma <ref>, the sequences (γ̂_m·v)_{m∈ℕ} and (γ̂_m·φ)_{m∈ℕ} go to infinity in V and V^* respectively. Since (γ̂_m·φ)(γ̂_m·v) = φ(v) is independent of m, we obtain ξ(η) ∈ ξ^*(η) by passing to the limit. Since ξ^*(η) belongs to ∂Ω^*, it is a supporting hyperplane of Ω.

The maps ξ and ξ^* are transverse, in the sense that for any η ≠ η' in ∂_∞Γ we have ξ(η) ∉ ξ^*(η').

Consider η, η' ∈ ∂_∞Γ such that ξ(η) ∈ ξ^*(η'). Let us check that η = η'. By Corollary <ref>.(<ref>), we have Λ^orb_Ω(Γ) = ∂_i𝒞^cor_Ω(Γ), and so the projective line segment [ξ(η),ξ(η')], contained in the supporting hyperplane ξ^*(η'), is contained in Λ^orb_Ω(Γ). Since Λ^orb_Ω(Γ) contains no nontrivial projective line segment by condition <ref> of Theorem <ref>, we deduce η = η'.

For any γ ∈ Γ of infinite order, we denote by η_γ^+ (resp. η_γ^-) the attracting (resp. repelling) fixed point of γ in ∂_∞Γ.

The maps ξ and ξ^* are dynamics-preserving.

We only prove it for ξ; the argument is similar for ξ^*. We fix a norm ‖·‖_V on V. Let γ ∈ Γ be an element of infinite order; we lift it to an element γ̂ ∈ SL^±(V) that preserves a properly convex cone of V ∖ {0} lifting Ω.
Let L^+ be the line of V corresponding to ξ(η_γ^+) and H^- the hyperplane of V corresponding to ξ^*(η_γ^-). By transversality, we have V = L^+ ⊕ H^-, and this decomposition is preserved by γ̂. Let [v] ∈Ω and write v = ℓ^+ + h^- with ℓ^+ ∈ L^+ and h^- ∈ H^-. Since Ω is open, we may choose v so that ℓ^+ ≠ 0 and h^- satisfies ‖γ̂^m· h^- ‖_ V≥δ t^m ‖ h^- ‖_ V > 0 for all m∈, where δ > 0 and where t > 0 is the spectral radius of the restriction of γ̂ to H^-. On the other hand, γ̂^m ·ℓ^+ = s^m ℓ^+ where s is the eigenvalue of γ̂ on L^+. By Lemma <ref>.(<ref>), we have γ̂^m· [v]→ξ(η_γ^+) as m→ +∞, hence‖γ̂^m· h^-‖_ V/‖γ̂^m·ℓ^+‖_ Vm→ +∞⟶ 0.Necessarily s>t, and so ξ(η_γ^+) is an attracting fixed point for the action of γ on (V). As an immediate consequence of Lemmas <ref>.(<ref>) and <ref>, we obtain the following.For any infinite-order element γ∈Γ, the element ρ(γ)∈(V) is proximal in (V), and the full orbital limit set _Ω(Γ) is equal to the proximal limit set Λ_Γ (see Definition <ref>).§.§ The natural inclusion Γ↪(V) is P_1-Anosov We have (μ_1-μ_2)(γ)→ +∞ as γ→∞ in Γ.Consider a sequence (γ_m)∈Γ^ going to infinity in Γ. Up to extracting we can assume that there exists η∈∂_∞Γ such that γ_m→η. Then γ_m· x→ξ(η) for all x∈Ω by Lemma <ref>.(<ref>). For any m we can write γ_m = k_m a_m k'_m ∈ K exp(𝔞^+) K where a_m=diag(a_i,m)_1≤ i≤ n with a_i,m≥ a_i+1,m (see Section <ref>). Up to extracting, we may assume that (k_m)_m∈ converges to some k∈ K and (k'_m)_m∈ to some k'∈ K. Let (e_1,…,e_n) be the standard basis of V = ^n, orthonormal for the inner product preserved by K̂ = (n). Since Ω is open, we can find points x,y of Ω lifting respectively to v,w∈ V withk'· v = ∑_i=1^n s_i e_iandk'· w = ∑_i=1^n t_i e_isuch that s_1=t_1 but s_2≠ t_2. For any m, let us writek'_m· v = ∑_i=1^n s_i,m e_iandk'_m· w = ∑_i=1^n t_i,m e_iwhere s_i,m→ s_i and t_i,m→ t_i. Thena_m k'_m· x = [s_1,m e_1 + ∑_i=2^n a_i,m/a_1,ms_i,m e_i],a_m k'_m· y = [t_1,m e_1 + ∑_i=2^n a_i,m/a_1,mt_i,m e_i].The sequences (γ_m· x)_m∈ and (γ_m· y)_m∈ both converge to ξ(η). Hence the sequences (a_mk'_m·⁠ x)_m∈ and (a_mk'_m· y)_m∈ both converge to the same point k^-1·ξ(η)∈(V). Since s_1=t_1 and s_2≠ t_2, we must have a_2,m/a_1,m→ 0, (μ_1-μ_2)(γ_m)→ +∞. By Fact <ref>, the natural inclusion Γ↪(V) is P_1-Anosov. This completes the proof of the implication <ref> ⇒ <ref> of Theorem <ref>. § P_1-ANOSOV IMPLIES CONVEX COCOMPACTNESS In this section we prove the implications <ref> ⇔ <ref> ⇒ <ref> in Theorem <ref>. Let Γ be an infinite discrete subgroup of (V). Suppose that Γ is word hyperbolic, that the inclusion Γ↪(V) is P_1-Anosov, and that Γ preserves some nonempty properly convex open subset 𝒪 of (V). Then the proximal limit set Λ_Γ^* of Γ in (V^*) is nonempty and, by Proposition <ref>, it lifts to a cone Λ_Γ^* of V^*∖{0} such that the convex open setΩ_max := ({ v∈ V  | φ(v)>0∀φ∈Λ_Γ^*})contains 𝒪 and is Γ-invariant. The set Ω_max is a connected component of (V) ∖⋃_H∈Λ_Γ^* H which contains 𝒪. The implications <ref> ⇒ <ref> and<ref> ⇒ <ref> are contained in the next proposition.Let Γ be an infinite discrete subgroup of (V) which is word hyperbolic, such that the natural inclusion Γ↪(V) is P_1-Anosov, with boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*). Suppose that the proximal limit set Λ_Γ^*=ξ^*(∂_∞Γ) lifts to a cone Λ_Γ^* of V^*∖{0} such that the convex open setΩ_max := ({ v∈ V  | φ(v)>0∀φ∈Λ_Γ^*})is nonempty and Γ-invariant. 
Then Γ acts convex cocompactly (Definition <ref>) on some nonempty properly convex open set Ω⊂Ω_max (which can be taken to be Ω_max if Ω_max is properly convex). Moreover, the full orbital limit set _Ω(Γ) for any such Ω is equal to the proximal limit set Λ_Γ=ξ(∂_∞Γ). The rest of the section is devoted to the proof of Proposition <ref>. §.§ Convergence for Anosov representations We first make the following general observation.Let Γ be an infinite discrete subgroup of (V) which is word hyperbolic, such that the natural inclusion Γ↪(V) is P_1-Anosov, with boundary maps ξ : ∂_∞Γ→(V) and ξ^* : ∂_∞Γ→(V^*). Let (γ_m)_m∈ be a sequence of elements of Γ converging to some η∈∂_∞Γ, such that (γ_m^-1)_m∈ converges to some η'∈∂_∞Γ. Then *for any x∈(V) with x∉ξ^*(η') we have γ_m· x→ξ(η);*for any x^*∈(V^*) with ξ(η')∉ x^* we have γ_m· x^*→ξ^*(η).Moreover, convergence is uniform for x, x^* ranging over compact sets.The two statements are dual to each other, so we only need to prove (<ref>). For any m∈, we write γ_m = k_m a_m k'_m ∈ Kexp(𝔞^+)K (see Section <ref>). Up to extracting, (k_m) converges to some k∈ K, and (k'_m) converges to some k'∈ K. Note thatγ_m^-1 = (k'_m^-1 w_0) (w_0 a_m^-1 w_0) (w_0 k_m^-1) ∈ KA^+K,where w_0∈(V)≃(^n) is the image of the permutation matrix exchanging e_i and e_n+1-i for all 1≤ i ≤ n. Therefore, by Fact <ref>, we haveξ^*(η') = k'^-1w_0·(span(e_1,…,e_n-1)).In particular, for any x∈(V) with x∉ξ^*(η') we havek'· x ∉ w_0·(span(e_1,…,e_n-1)) = (span(e_2,…,e_n)).On the other hand, writing a_m = diag(a_m,i)_1≤ i≤ n, we have a_m,1/a_m,2=e^(μ_1-μ_2)(γ_m)→ +∞ by Fact <ref>. Therefore a_mk'_m· x→ [e_1], and so γ_m· x=k_ma_mk'_m· x→ k· [e_1]. By Fact <ref>, we have k· [e_1]=ξ(η). This proves (<ref>).Uniformity follows from the fact that k'· x=[e_1 + ∑_i=2^n t_i e_i] with bounded t_i∈, when x ranges over a compact set disjoint from the hyperplane ξ^*(η'). In the setting of Lemma <ref>, for any nonempty Γ-invariant properly convex open subset Ω of (V), the full orbital limit set _Ω(Γ) is equal to the proximal limit set Λ_Γ=ξ(∂_∞Γ).By Proposition <ref>, we have Λ_Γ^*=ξ^*(∂_∞Γ)⊂∂Ω^*, hence x∉ξ^*(η') for all x∈Ω and η'∈∂_∞Γ. We then apply Lemma <ref>.(<ref>) to get both Λ_Γ⊂_Ω(Γ) and _Ω(Γ) ⊂Λ_Γ.§.§ Proof of Proposition <ref>Suppose Γ⊂(V), Λ_Γ, Λ_Γ^*, and Ω_max are as in the statement of Proposition <ref>. Let _0 be the convex hull of Λ_Γ in Ω_max. This is clearly defined if Ω_max is properly convex, and otherwise it is defined as follows.The preimage of Ω_max in V∖{0} has two connected components Ω_max and -Ω_max. The intersection of their closures in V is a linear subspace W of V (which is reduced to {0} if Ω_max is properly convex, in which case (W) = ∅). For any complementary subspace W' of W in V, we can write Ω_max as a product of the form W ×Ω' for some properly convex open cone Ω' ⊂ W'. Since Ω' is properly convex, we can choose a supporting hyperplane of Ω_max in V whose intersection with ∂Ω_max is reduced to W. Then Ω_max is contained, and convex, in the corresponding affine chart of (V), and escapes to infinity precisely in the directions of (W).In (V), every projective hyperplane missing Ω_max, and in particular every x^*∈Λ_Γ^*, contains (W). 
Since Λ_Γ and Λ_Γ^* are transverse, this implies that Λ_Γ is disjoint from (W), hence (by compactness) it is bounded in the chart.By definition, _0 is the convex hull of Λ_Γ in the chart: it is independent of the choice of chart, as it can be lifted to a properly convex cone contained in the convex cone Ω_max.We have _0 = Λ_Γ.By construction, _0 = _0∩∂Ω_max⊃Λ_Γ. Suppose a point z ∈_0 lies in ∂Ω_max. Then z lies in a hyperplane ξ^*(η) for some η∈∂_∞Γ. Any minimal subset S of Λ_Γ for which z lies in the convex hull of S must also be contained in ξ^*(η). By the transversality of ξ and ξ^*, we must have S = {ξ(η)}, hence z = ξ(η) ∈Λ_Γ. There exists a Γ-invariant properly convex open subset Ω⊂Ω_max containing _0.If Ω_max is properly convex, then we take Ω = Ω_max. Suppose that Ω_max fails to be properly convex (see Example <ref>). Choose projective hyperplanes H_1, …, H_n bounding an open simplex Δ containing _0. Define 𝒱:=Ω_max∩Δ andΩ := ⋂_γ∈Γγ·𝒱.Then Ω contains _0 and is properly convex. We must show Ω is open. Suppose not and let z ∈Ω. Then there is a sequence (γ_m) ∈Γ^ and i ∈{1, …, n} such that γ_m· H_i converges to a hyperplane H_∞ containing z. By Lemma <ref>.(<ref>), the hyperplane H_∞ lies in Λ_Γ^*, contradicting the fact that the hyperplanes of Λ_Γ^* are disjoint from Ω_max. For any Γ-invariant properly convex open subset Ω of (V) containing _0, we have _Ω(Γ) = _0.By Corollary <ref>, we have _Ω(Γ) = Λ_Γ, hence _Ω(Γ) is the convex hull of Λ_Γ in Ω, _Ω(Γ)=_0∩Ω. But _0 is contained in Ω by construction (see Lemma <ref>), hence _Ω(Γ) = _0. In order to conclude the proof of Proposition <ref>, it only remains to check the following.The action of Γ on _0 is cocompact.We endow (V) with the spherical metric d_(·, ·) induced by some Euclidean norm on V. By <cit.> (see also <cit.>), the action of Γ on (V) at any point z∈Λ_Γ is expanding: there exist an element γ∈Γ, a neighborhood 𝒰 of z in (V), and a constant c>1 such that γ is c-expanding on 𝒰 for d_. We now use a version of the argument of <cit.>, inspired by Sullivan's dynamical characterization <cit.> of convex cocompactness in the real hyperbolic space. (The argument in <cit.> is a little more technical because it deals with bundles, whereas we work directly in (V).)Suppose by contradiction that the action of Γ on _0 is not cocompact, and let (ε_m)_m∈ be a sequence of positive reals going to 0. For any m, the set K_m := { x∈_0|d_(x,Λ_Γ) ≥⁠ε_m} is compact (Lemma <ref>), hence there exists a Γ-orbit contained in _0∖ K_m.By proper discontinuity of the action on _0, the supremum of d_(·,Λ_Γ) on this orbit is achieved at some point x_m∈_0, and by construction 0 < d_𝕊(x_m,Λ_Γ) ≤ε_m. Then, for all γ∈Γ,d_(γ· x_m,Λ_Γ) ≤ d_(x_m,Λ_Γ).Up to extracting, we may assume that (x_m)_m∈ converges to some z∈Λ_Γ. Consider an element γ∈Γ, a neighborhood 𝒰 of z in (V), and a constant c>1 such that γ is c-expanding on 𝒰. For any m∈, there exists z_m∈Λ_Γ such that d_(γ· x_m,Λ_Γ) = d_(γ· x_m,γ· z_m). For large enough m we have x_m,z_m∈𝒰, and sod_(γ· x_m,Λ_Γ) ≥ cd_( x_m, z_m) ≥ cd_(x_m,Λ_Γ) ≥ cd_(γ· x_m,Λ_Γ)>0.This is impossible since c>1. This shows that the word hyperbolic group Γ acts convex cocompactly on any Γ-invariant properly convex open subset Ω of (V) containing _0, hence condition <ref> of Theorem <ref> is satisfied. 
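As an illustrative aside, the attracting dynamics used repeatedly above — in the proof of dynamics-preservation and in the expansion argument just given — can be checked in coordinates. In the sketch below (the matrix and base vector are our own toy choices), a proximal element preserving the triangle ℙ((ℝ_{>0})³) attracts an interior point to its top eigenline at the exponential rate λ_2/λ_1 = e^{−(μ_1−μ_2)}; for a diagonal matrix with positive entries, the singular values coincide with the eigenvalues.

```python
import numpy as np

g = np.diag([4.0, 1.0, 0.25])       # proximal: simple top eigenvalue 4
v = np.array([0.3, 1.7, 0.9])       # lifts a point of the invariant triangle
prev = None
for m in range(1, 10):
    w = np.linalg.matrix_power(g, m) @ v
    # distance of [w] to the attracting fixed point [e_1], measured in the
    # affine chart {v_1 = 1}; it decays like (lambda_2 / lambda_1)^m = 4^{-m}
    dist = float(np.hypot(w[1] / w[0], w[2] / w[0]))
    if prev is not None:
        print(m, round(dist, 8), round(dist / prev, 4))   # ratio -> 1/4
    prev = dist
```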
§.§ The case of groups with connected boundary Here is an immediate consequence of Propositions <ref> and <ref> and Lemma <ref>.Let Γ be an infinite discrete subgroup of (V) which is word hyperbolic with connected boundary ∂_∞Γ, which preserves a properly convex open subset of (V), and such that the natural inclusion Γ↪(V) is P_1-Anosov. Then the setΩ_max := (V) ∖⋃_z^*∈Λ_Γ^* z^*is a nonempty Γ-invariant convex open subset of (V), containing all other such sets. If Ω_max is properly convex (if the action of Γ on (V) is irreducible), then Γ acts convex cocompactly on Ω_max.§ SMOOTHING OUT THE NONIDEAL BOUNDARYWe now make the connection with strong projective convex cocompactness and prove the remaining implications of Theorems <ref> and <ref>.Concerning Theorem <ref>, the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> have been proved in Section <ref>, the implication <ref> ⇒ <ref> in Section <ref>, and the implications <ref> ⇔ <ref> ⇒ <ref>in Section <ref>. The implication <ref> ⇒ <ref> is trivial. We shall prove <ref> ⇒ <ref> in Section <ref>, which will complete the proof of Theorem <ref>. It will also complete the proof of Theorem <ref>, since the implication (<ref>) ⇒ (<ref>) of Theorem <ref> is the implication <ref> ⇒ <ref> of Theorem <ref>.Concerning Theorem <ref>, the equivalence (<ref>) ⇔ (<ref>) has been proved in Section <ref>. The implication (<ref>) ⇒ (<ref>) is immediate. We shall prove the implication (<ref>) ⇒ (<ref>) in Section <ref>. The implication (<ref>) ⇒ (<ref>) is an immediate consequence of Lemma <ref> and of the following lemma, and completes the proof of Theorem <ref>.We refer to Definition <ref> for the various notions of boundary regularity used throughout this section.Let _𝗌𝗍𝗋𝗂𝖼𝗍 be a nonempty convex subset of (V) with strictly convex nonideal boundary _𝗌𝗍𝗋𝗂𝖼𝗍 and whose ideal boundary _𝗌𝗍𝗋𝗂𝖼𝗍 is closed in (V). Then _𝗌𝗍𝗋𝗂𝖼𝗍 has bisaturated boundary.Let H be a supporting hyperplane to _𝗌𝗍𝗋𝗂𝖼𝗍. Then H ∩(_𝗌𝗍𝗋𝗂𝖼𝗍) = H ∩_𝗌𝗍𝗋𝗂𝖼𝗍 is a closed, convex subset of the frontier (_𝗌𝗍𝗋𝗂𝖼𝗍) = _𝗌𝗍𝗋𝗂𝖼𝗍∪_𝗌𝗍𝗋𝗂𝖼𝗍. Since _𝗌𝗍𝗋𝗂𝖼𝗍 is closed, so is H ∩_𝗌𝗍𝗋𝗂𝖼𝗍, hence H ∩_𝗌𝗍𝗋𝗂𝖼𝗍 is open in H ∩(_𝗌𝗍𝗋𝗂𝖼𝗍). However, _𝗌𝗍𝗋𝗂𝖼𝗍 is strictly convex. Therefore, either H ∩_𝗌𝗍𝗋𝗂𝖼𝗍 is empty, or H ∩_𝗌𝗍𝗋𝗂𝖼𝗍 = H ∩(_𝗌𝗍𝗋𝗂𝖼𝗍) is a single point. In particular, H ∩(_𝗌𝗍𝗋𝗂𝖼𝗍) is entirely contained in _𝗌𝗍𝗋𝗂𝖼𝗍 or entirely contained in _𝗌𝗍𝗋𝗂𝖼𝗍. This shows that _𝗌𝗍𝗋𝗂𝖼𝗍 has bisaturated boundary.§.§ Smoothing out the nonideal boundary Here is the main result of Section <ref>. Let Γ be an infinite discrete subgroup of (V) and Ω a nonempty Γ-invariant properly convex open subset of (V). Suppose Γ acts convex cocompactly on Ω. Fix a uniform neighborhood _𝗎𝗇𝗂𝖿 of _Ω(Γ) in (Ω,d_Ω). Then the convex core _Ω(Γ) admits a Γ-invariant, properly convex, closed neighborhood ⊂_𝗎𝗇𝗂𝖿 in Ω which has C^1, strictly convex nonideal boundary. Constructing a neighborhood  as in the lemma clearly involves arbitrary choices; here is one of many possible constructions, taken from <cit.>. Cooper–Long–Tillmann <cit.> give a different construction yielding, in the case that Γ is torsion-free, a convex setas in the lemma whose nonideal boundary has the slightly stronger property that it is locally the graph of a smooth function with positive definite Hessian. In this proof, we fix a finite-index subgroup Γ_0 of Γ which is torsion-free; such a subgroup exists by the Selberg lemma <cit.>.We proceed in three steps. 
Firstly, we construct a Γ-invariant closed neighborhood _ of _Ω(Γ) in Ω which is contained in _𝗎𝗇𝗂𝖿 and whose nonideal boundary is C^1 but not necessarily strictly convex. Secondly, we construct a small deformation _⊂_𝗎𝗇𝗂𝖿 of _ which has C^1 and strictly convex nonideal boundary, but which is only Γ_0-invariant, not necessarily Γ-invariant. Finally, we use an averaging procedure over translates γ·_ of _, for γΓ_0 ranging over the Γ_0-cosets of Γ, to construct a Γ-invariant closed neighborhood ⊂_𝗎𝗇𝗂𝖿 of _Ω(Γ) in Ω which has C^1 and strictly convex nonideal boundary.∙ Construction of _: Consider a compact fundamental domain 𝒟 for the action of Γ on _Ω(Γ). The convex hull of 𝒟 in Ω is still contained in _Ω(Γ). Let 𝒟'⊂_𝗎𝗇𝗂𝖿 be a closed neighborhood of this convex hull in Ω which has frontier of class C^1, and let _⊂_𝗎𝗇𝗂𝖿 be the closure in Ω of the convex hull of Γ·𝒟'. By Corollary <ref>.(<ref>) we have _=_Ω(Γ), and by Lemma <ref> the set _ has bisaturated boundary.Let us check that _ has C^1 nonideal boundary. We first observe that any supporting hyperplane Π_x of _ at a point x∈_ stays away from _ since _ has bisaturated boundary. On the other hand, since the action of Γ on _ is properly discontinuous, for any neighborhood 𝒩 of _ in (V) and any infinite sequence of distinct elements γ_j∈Γ, the translates γ_j ·𝒟' are eventually all contained in 𝒩. Therefore, in a neighborhood of x, the hypersurface (_) coincides with the convex hull of a finite union of translates ⋃_i=1^m γ_i·𝒟', and so it is locally C^1: indeed that convex hull is dual to ⋂_i=1^m (γ_i·𝒟')^*, which has strictly convex frontier because (𝒟')^* does (a convex set has C^1 frontier if and only if its dual has strictly convex frontier).∙ Construction of _: For any x∈_, let F_x be the open face of (_) at x, namely the intersection of _ with the unique supporting hyperplane Π_x at x. Since _ has bisaturated boundary, F_x is a compact convex subset of _.We claim that F_x is disjoint from γ· F_x=F_γ· x for all γ∈Γ_0∖{Id}. Indeed, if there existed y∈ F_x∩ F_γ· x, then by uniqueness the supporting hyperplanes would satisfy Π_x=Π_y=Π_γ· x, hence F_x = F_y = F_γ· x=γ· F_x. This would imply F_x = γ^m· F_x for all m∈, hence γ^m· x∈ F_x. Using the fact that the action of Γ_0 on _ is properly discontinuous and taking a limit, we see that F_x would contain a point of _, which we have seen is not true. Therefore F_x is disjoint from γ· F_x for all γ∈Γ_0∖{Id}.For any x∈_, the subset of (V^*) consisting of those projective hyperplanes near the supporting hyperplane Π_x that separate F_x from _ is open and nonempty, hence (n-1)-dimensional where (V)=n. Choose n-1 such hyperplanes Π_x^1,…, Π_x^n-1 in generic position, with Π_x^i cutting off a compact region 𝒬_x^i⊃ F_x from _. One may imagine each Π_x^i is obtained by pushing Π_x normally into _ and then tilting slightly in one of n-1 independent directions. The intersection ⋂_i=1^n-1Π_x^i ⊂(V) is reduced to a singleton. By taking each hyperplane Π_x^i very close to Π_x, we may assume that the union 𝒬_x:=⋃_i=1^n-1𝒬_x^i is disjoint from all its γ-translates for γ∈Γ_0 ∖{Id}. In addition, we ensure that F_x has a neighborhood 𝒬'_x contained in ⋂_i=1^n-1𝒬_x^i.Since the action of Γ_0 on _ is cocompact, there exist finitely many points x_1,…,x_m∈_ such that _⊂Γ_0· (𝒬'_x_1∪…∪𝒬'_x_m). We now explain, for any x∈_, how to deform _ into a new, smaller properly convex Γ_0-invariant closed neighborhood of _Ω(Γ) in Ω with C^1 nonideal boundary, in a way that destroys all segments in 𝒬'_x. 
Repeating for x=x_1, …, x_m, this will produce a properly convex Γ_0-invariant closed neighborhood _ of _Ω(Γ) in Ω with C^1 and strictly convex nonideal boundary.Choose an affine chart containing Ω, and an auxiliary Euclidean metric g on this chart. We may assume that for every 1≤ i ≤ n-1, the g-orthogonal projection π_x^i onto Π_x^i defines a homeomorphism 𝒬_x^i ∩_→_∩Π_x^i. Define a map φ_x^i: 𝒬_x^i →𝒬_x^i by the property that φ_x^i preserves each fiber (π_x^i)^-1(y) ∩𝒬_x^i (a segment), taking the point at distance t from y to the point at distance tanh(t). We claim that φ_x^i(𝒬_x^i) ∪ (_∖𝒬_x^i) is convex with boundary of class C^1 and that, further, any segment in φ_x^i(_∩𝒬_x^i) is the image of a segment of _∩𝒬_x^i which is parallel to Π_x^i. To see this, decompose the affine chart orthogonally as Π_x^i ⊕, so that 𝒬_x^i ∩_ is the graph of a concave function f_x^i: _∩Π_x^i → of class C^1, vanishing on the boundary _∩Π_x^i.Replacing 𝒬_x^i with φ_x^i(𝒬_x^i) amounts to replacing f_x^i with tanh∘ f_x^i. The claim follows from the fact that tanh is strictly concave, monotonic, smooth, and tangent to the identity at 0 (any other function with these properties may be used in place of tanh).Extending φ_x^i by the identity on 𝒬_x∖𝒬_x^i and repeating with varying i, we find that the composition φ_x:=φ_x^1∘…∘φ_x^n-1, defined on 𝒬_x, takes 𝒬'_x∩_ to a strictly convex hypersurface. Indeed, a segment of 𝒬'_x∩_ cannot be parallel to all of Π_x^1, …, Π_x^n-1 since these hyperplanes were chosen to intersect in a singleton. We can extend φ_x in a Γ_0-equivariant fashion to Γ_0·𝒬_x, and extend it further by the identity on the rest of _: the set φ_x(_) is still a Γ_0-invariant closed neighborhood of _Ω(Γ) in Ω, contained in _𝗎𝗇𝗂𝖿, with C^1 nonideal boundary.Repeating with finitely many points x_1, …, x_m as above, we obtain a Γ_0-invariant properly convex closed neighborhood _⊂_𝗎𝗇𝗂𝖿 of _Ω(Γ) in Ω with C^1 and strictly convex boundary.∙ Construction of : Consider the finitely many Γ_0-cosets γ_1Γ_0, …, γ_kΓ_0 of Γ and the corresponding translates _^i:=γ_i·_; we denote by Ω^i the interior of _^i. Let ' be a Γ-invariant properly convex closed neighborhood of _Ω(Γ) in Ω which has C^1 (but not necessarily strictly convex) nonideal boundary and is contained in all _^i, 1≤ i≤ k. (Such a neighborhood ' can be constructed for instance by the same method as _ above.) Since _^i has strictly convex nonideal boundary, uniform neighborhoods of ' in (Ω^i,d_Ω^i) have strictly convex nonideal boundary <cit.>. Therefore, by cocompactness, if h : [0,1]→ [0,1] is a convex function with sufficiently fast growth (h(t)=t^α for large enough α>0), then the Γ_0-invariant function H_i := h∘ d_Ω^i(·, ') is convex on the convex region H_i^-1([0,1]), and in fact smooth and strictly convex near every point outside '. The function H:=∑_i=1^k H_i is Γ-invariant and its sublevel set :=H^-1([0,1]) is a Γ-invariant closed neighborhood of _Ω(Γ) in Ω which has C^1, strictly convex nonideal boundary. Moreover, ⊂_⊂_⊂_𝗎𝗇𝗂𝖿 by construction.§.§ Proof of the remaining implications of Theorems <ref> and <ref> Suppose Γ is convex cocompact in (V), it preserves a properly convex open set Ω⊂(V) and acts cocompactly on the convex core _Ω(Γ). Let _𝗎𝗇𝗂𝖿 be the closed uniform 1-neighborhood of _Ω(Γ) in (Ω,d_Ω). By Lemma <ref>, the set _𝗎𝗇𝗂𝖿 is properly convex and the action of Γ on _𝗎𝗇𝗂𝖿 is properly discontinuous and cocompact. 
By Lemma <ref>, the set _Ω(Γ) admits a Γ-invariant, properly convex, closed neighborhood _𝗌𝗆𝗈𝗈𝗍𝗁⊂_𝗎𝗇𝗂𝖿 in Ω which has C^1, strictly convex nonideal boundary. The action of Γ on _𝗌𝗆𝗈𝗈𝗍𝗁 is still properly discontinuous and cocompact, and _𝗌𝗆𝗈𝗈𝗍𝗁 = _Ω(Γ) by Corollary <ref>.(<ref>). This proves the implication (<ref>) ⇒ (<ref>) in Theorem <ref>.Supposethat Γ acts convex cocompactly on a properly convex open set Ω and that the full orbital limit set _Ω(Γ) contains no nontrivial segment. As in the proof of (<ref>) ⇒ (<ref>) in Theorem <ref> just above, Γ acts properly discontinuously and cocompactly on some nonempty closed properly convex subset _𝗌𝗆𝗈𝗈𝗍𝗁 of Ω which has strictly convex and C^1 nonideal boundary and whose interior Ω_𝗌𝗆𝗈𝗈𝗍𝗁 := (_𝗌𝗆𝗈𝗈𝗍𝗁) contains _Ω(Γ). The set _𝗌𝗆𝗈𝗈𝗍𝗁 has bisaturated boundary (Lemma <ref>), hence the action of Γ on Ω_𝗌𝗆𝗈𝗈𝗍𝗁 is convex cocompact by Corollary <ref>.By Corollary <ref>.(<ref>), the ideal boundary _𝗌𝗆𝗈𝗈𝗍𝗁 is equal to _Ω(Γ). Since _Ω(Γ) contains no nontrivial segment by assumption, and since it is a union of faces of ∂Ω (Corollaries <ref>.(<ref>) and <ref>), we deduce that any z ∈_𝗌𝗆𝗈𝗈𝗍𝗁 is an extreme point of ∂Ω. Thus the full boundary of Ω_𝗌𝗆𝗈𝗈𝗍𝗁 is strictly convex.Consider the dual convex set _𝗌𝗆𝗈𝗈𝗍𝗁^* ⊂(V^*) (Definition <ref>); it is properly convex. By Lemma <ref>, the set _𝗌𝗆𝗈𝗈𝗍𝗁^* has bisaturated boundary and does not contain any PET (since _𝗌𝗆𝗈𝗈𝗍𝗁 itself does not contain any PET). By Proposition <ref>, the action of Γ on _𝗌𝗆𝗈𝗈𝗍𝗁^* is properly discontinuous and cocompact. It follows from Lemma <ref> that _𝗌𝗆𝗈𝗈𝗍𝗁^* contains no nontrivial segment, hence each point of _𝗌𝗆𝗈𝗈𝗍𝗁^* is an extreme point.Hence there is exactly one hyperplane supporting _𝗌𝗆𝗈𝗈𝗍𝗁 at any given point of _𝗌𝗆𝗈𝗈𝗍𝗁. This is also true at any given point of _𝗌𝗆𝗈𝗈𝗍𝗁 by assumption. Thus the boundary of Ω_𝗌𝗆𝗈𝗈𝗍𝗁 is of class C^1.§ PROPERTIES OF CONVEX COCOMPACT GROUPSIn this section we prove Theorem <ref>. Property <ref> has already been established in Section <ref>; we now establish the other properties. §.§ <ref>: Quasi-isometric embedding We first prove the following very general result, using the notation μ_1-μ_n from (<ref>). The fact that a statement of this flavor should exist was suggested to us by Yves Benoist and Pierre-Louis Blayac.For any properly convex open subset Ω of (V) = (^n) and any x∈Ω, there exists κ_Ω,x≥ 0 such that for any g∈Aut(Ω),| d_Ω(x,g· x) - 1/2 (μ_1-μ_n)(g) | ≤κ_Ω,x.Moreover, κ_Ω,x can be taken to be uniform as (Ω,x) varies in a compact subset of the set of pointed properly convex open subsets of (V), endowed with the pointed Hausdorff topology. Note that we can take κ_Ω,x = 0 in the following examples, where (e_1,…,e_n) is the standard basis of V=^n: * Ω = ^n-1 = { [v] ∈(^n)|v_1^2 + … + v_n-1^2 - v_n^2 < 0} (see Example <ref>) and x = [e_n];* Ω = (^+-span(e_1,…,e_n)) and x = [e_1 + … + e_n]. Here is an easy consequence of the inequality (μ_1-μ_n)(g)≥ 2(d_Ω(x,g· x) -κ_Ω,x) of Proposition <ref>.Let Γ be a discrete subgroup of G:=(V). *If Γ is naively convex cocompact in (V) (Definition <ref>), then it is finitely generated and the natural inclusion Γ↪ G is a quasi-isometric embedding.*In particular, for any subgroup Γ' of Γ, if Γ' is naively convex cocompact in (V), then it is finitely generated and the natural inclusion Γ'↪Γ is a quasi-isometric embedding.Here the finitely generated groups Γ and Γ' are endowed with the word metric with respect to some fixed finite generating subset. The group G is endowed with any G-invariant Riemannian metric d_G. 
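Before turning to the proofs, the first example of the remark above — where κ_{Ω,x} = 0 — can be verified numerically. In the sketch below (illustrative only; the parametrization of PO(2,1) by boosts and rotations is our own), the Hilbert displacement of the center of the Klein disk equals exactly half the singular-value gap.

```python
import numpy as np

def boost(t):                       # a hyperbolic translation in PO(2, 1)
    return np.array([[np.cosh(t), 0.0, np.sinh(t)],
                     [0.0,        1.0, 0.0       ],
                     [np.sinh(t), 0.0, np.cosh(t)]])

def rot(a):                         # a rotation fixing the center of the disk
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

rng = np.random.default_rng(1)
for _ in range(5):
    g = rot(rng.uniform(0, 2 * np.pi)) @ boost(rng.uniform(0.1, 3.0)) @ rot(rng.uniform(0, 2 * np.pi))
    s = np.linalg.svd(g, compute_uv=False)
    half_gap = 0.5 * np.log(s[0] / s[2])           # (1/2)(mu_1 - mu_n)(g)
    w = g @ np.array([0.0, 0.0, 1.0])              # image of the center x = [e_3]
    d = np.arctanh(np.linalg.norm(w[:2]) / w[2])   # Hilbert distance d_Omega(x, g.x)
    print(round(float(d), 10), round(float(half_gap), 10))
    assert abs(d - half_gap) < 1e-9                # kappa_{Omega,x} = 0 here
```

We now turn to the proofs.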
Let K=(n) be the maximal compact subgroup of G=(V) from Section <ref>. Let p:=Id K∈ G/K. There is a G-invariant Finsler metric d on G/K such that d(p,g· p) = (μ_1-μ_n)(g) for all g∈ G.(<ref>) Suppose Γ is naively convex cocompact in (V): it preserves a properly convex open subset Ω of (V) and acts cocompactly on some nonempty Γ-invariant closed convex subsetof Ω. By the Švarc–Milnor lemma, Γ is finitely generated and any orbital map Γ→ (,d_Ω) is a quasi-isometry. Proposition <ref> then implies that any orbital map Γ→ (G/K,d) is a quasi-isometric embedding. Since K is compact, the natural inclusion Γ↪ G is a quasi-isometric embedding.(<ref>) Let length_Γ : Γ→ and length_Γ' : Γ'→ be the word length functions of Γ and Γ' for our fixed finite generating subsets. By the triangle inequality, there exists κ>0 such that d_G(γ,Id) ≤κ length_Γ(γ) for all γ∈Γ. By (<ref>), if Γ' is naively convex cocompact in (V), then there exist κ_1,κ_2>0 such that d_G(γ',Id) ≥κ_1 length_Γ'(γ') - κ_2 for all γ'∈Γ, hence length_Γ(γ') ≥κ_1κ^-1 length_Γ'(γ') - κ_2κ^-1 for all γ'∈Γ' and Γ'↪Γ is a quasi-isometric embedding.We endow ℙ(V) with the spherical metric d_𝕊, for which any projective line is geodesic, of length π. By the spherical sine law, two lines ℓ, ℓ' ⊂ℙ(V) intersecting in a point b at an angle α∈ [0,π/2] get no further than α apart, and for all y∈ℓ,sin d_𝕊(y,ℓ')/sinα = sin d_𝕊(y,b)/sinπ/2 = sin d_𝕊(y,b).Any tangent vector v∈ T(V) can be written as v = d/dt|_t=0[u+tw] where u,w∈^n are orthogonal for the standard Euclidean norm ‖·‖_Euc on ^n preserved by K̂ = (n), and u≠ 0; the norm ‖ v‖_ of v for the spherical metric is then ‖ w‖_Euc/‖ u‖_Euc.We claim that any element g∈(V), seen as a homeomorphism of (V) = (^n), is e^(μ_1-μ_n)(g)-bi-Lipschitz for d_𝕊. Indeed, since (μ_1-μ_n)(g^-1) = (μ_1-μ_n)(g), it is enough to prove the Lipschitz direction.Since d_𝕊 is a geodesic metric, it suffices to check that ‖ Dg (v) ‖_≤ e^(μ_1-μ_n)(g)‖ v‖_ for any tangent vector v∈ T(V). We can lift g to an element ĝ of (V), with singular values e^μ_1(ĝ)≥…≥ e^μ_n(ĝ) satisfying μ_1(ĝ)- μ_n(ĝ) = (μ_1-μ_n)(g).Writing v = d/dt|_t=0[u+tw] where u,w∈^n are orthogonal for ‖·‖_Euc and u≠ 0, we have Dg(v) = d/dt|_t=0[ĝ· u+t w'] where w' is the orthogonal projection of ĝ· w to the orthogonal of ĝ· u. The bounds ‖ĝ· u ‖_Euc≥ e^μ_n(ĝ)‖ u‖_Euc and ‖ w'‖_Euc≤‖ĝ· w ‖_Euc≤ e^μ_1(ĝ)‖ w‖_Euc yield‖ Dg(v)‖_= ‖ w'‖_Euc/‖ĝ· u‖_Euc≤ e^μ_1(ĝ)-μ_n(ĝ)‖ w ‖_Euc/‖ u ‖_Euc = e^(μ_1-μ_n)(g)‖ v ‖_,proving the claim.For any x∈Ω and any ε>0, we denote by 𝔹_𝕊(x,ε) (𝔹_Ω(x,ε)) the open ball centered at x∈Ω with radius ε>0 in ((V),d_𝕊) (in (Ω,d_Ω)). Since Ω is properly convex, we can choose 0< ε < π/4, depending only on (Ω, x), such that *𝔹_𝕊(x,ε) ⊂𝔹_Ω(x,1);*𝔹_𝕊(x,2ε)⊂Ω;*every segment of Ω has d_𝕊-length at most π-2ε. Fix g ∈Aut(Ω). We claim that there exists x' ∈𝔹_𝕊(x,ε) such thate^-(μ_1-μ_n)(g)sinε≤sin d_𝕊(g· x', ∂Ω) ≤ e^-(μ_1-μ_n)(g) (sinε)^-3.Indeed, as above, lift g to ĝ∈(V) with singular values e^μ_1(ĝ)≥…≥ e^μ_n(ĝ) satisfying μ_1(ĝ)- μ_n(ĝ) = (μ_1-μ_n)(g). We can write ĝ = k a k' where k,k' ∈K̂ = (n) and a = diag(e^μ_1(ĝ), …, e^μ_n(ĝ)) ∈exp(𝔞̂^+) (see Section <ref>). Let (e_1, …, e_n) be the standard basis of V = ^n, orthonormal for ‖·‖_Euc. Set v_i := k'^-1· e_i and consider the projective hyperplane H := (span{v_2,…, v_n}). There exists x'∈𝔹_𝕊(x, ε) such that d_𝕊(x', H)≥⁠ε. 
Note that any segment realizing d_𝕊(g· x', ∂Ω) gets mapped by g^-1 to a segment of length ≥ε, by <ref>.Since g^-1 is an e^(μ_1-μ_n)(g)-Lipschitz homeomorphism of ((V), d_𝕊), concavity of sin yields the lower bound in (<ref>). We now check the upper bound. For this, consider δ<ε such that sinδ = (sinε)^2. By (<ref>), since d_(x', H)≥ε,the line through [v_n] and x' forms an angle ≥ε with H. Again by (<ref>), this line therefore contains a segment of length ≥π-2ε that stays outside the δ-neighborhood 𝒩 of H.By <ref>, that segment must exit Ω at some z ∈∂Ω∖𝒩. We have d_𝕊(g· x', ∂Ω) ≤ d_𝕊(g· x', g· z), hence it is enough to bound d_𝕊(g· x', g· z) from above. Since z∈(V)∖𝒩, a unit lift z of z satisfies |⟨z, v_1 ⟩ | ≥sinδ = (sinε)^2, |⟨ k' ·z, e_1 ⟩ | ≥ (sinε)^2, so that ‖ĝ·z‖_Euc = ‖ ak' ·z‖_Euc≥ e^μ_1(ĝ) (sinε)^2.Since x', [v_n],z ∈(V) are collinear, x' has a unique lift of the form x' = v_n+t z∈ V, and |t|≥sinε since d_𝕊(x', [v_n]) ≥ d_𝕊(x', H)≥ε. Then ‖ĝ·z - ĝ· (1/tx') ‖_Euc = e^μ_n(ĝ)/|t| ≤ e^μ_n(ĝ)/sinε. Using (<ref>), we deducesin d_𝕊(g· x', ∂Ω) ≤sin d_𝕊(g· x', g· z) ≤ e^μ_n(ĝ)-μ_1(ĝ) (sinε)^-3,which completes the proof of (<ref>).The set 𝒦_ε := ⋃_u∈∂Ω( [x,u) ∖𝔹_𝕊(u, ε) ) is compact in Ω.Since Aut(Ω) acts properly on Ω, the setF_ε := {h∈Aut(Ω) | 𝒦_ε∩ h ·𝔹_𝕊(x,ε)≠∅}is compact in (V), hence κ':=max_h∈ F_ε | d_Ω(x,h· x) - 1/2 (μ_1-μ_n)(h)| is finite. Suppose that our given g∈Aut(Ω) lies outside F_ε.Let ℓ be the projective line through x and g· x', crossing ∂Ω at points a and b with a,x,g· x',b aligned in this order (see Figure <ref>).Since g ∉ F_ε and x'∈𝔹_𝕊(x,ε), we have ε' := d_𝕊(g· x',b) ≤ε. Let y ↦θ_y ∈/ πℤ be the d_𝕊-arclength parametrization of the projective line ℓ such that θ_a = 0 and θ_y ∈ [0, π - 2 ε] for every y ∈Ω∩ℓ. Then y ↦tanθ_y is a projective identification between ℓ and ^1() = ∪{∞}, and soaxg · x'b= 0tanθ_xtanθ_g· x'tanθ_b = tanθ_g· x'(tanθ_b - tanθ_x)/tanθ_x (tanθ_b - tanθ_g· x') = sinθ_g· x' sin (θ_b-θ_x)/sinθ_xsin (θ_b-θ_g· x') = sinθ_g· x' sin (θ_b-θ_x)/sinθ_xsinε'.Each of θ_x, θ_b-θ_x, and θ_g· x' lies in [ε, π-2ε] by <ref> and <ref>, hence their sines lie in [sinε, 1], and soaxg · x'b∈ (sinε')^-1[(sinε)^2, (sinε)^-1].On the other hand, we claim thatsinε' ∈ e^(μ_n-μ_1)(g)[sinε, (sinε)^-4].Indeed, the lower bound comes from (<ref>).For the upper bound, consider the point z ∈∂Ω introduced above, satisfying (<ref>). There is a line ℓ' tangent to ∂Ω at g· z and intersecting ℓ in some point b'∈(V)∖Ω at some angle α; the lines ℓ and ℓ' get at least ε apart for d_𝕊 by <ref>, hence α≥ε by (<ref>). On the other hand, ℓ∩𝔹_𝕊(g· x', ε') ⊂Ω, using ε'≤ε and <ref>, hence ε' = d_𝕊 (g· x',b) ≤ d_𝕊(g· x',b'). Using (<ref>) again, we obtainsinε' ≤sin d_𝕊 (g· x',b')= sin d_𝕊 (g· x', ℓ')/sinα≤sin d_𝕊 (g· x', g· z)/sinε.Together with (<ref>), this implies (<ref>).Combining (<ref>) and (<ref>), we find that e^2 d_Ω(x,g· x') = axg· x'b lies in e^(μ_1-μ_n)(g) [(sinε)^6, (sinε)^-2]. Since d_Ω(g· x,g· x')≤ 1by <ref>, the triangle inequality yields| d_Ω(x,g· x) - 1/2 (μ_1-μ_n)(g)| ≤ 3 |logsinε|+1 =:κ”. Thus (<ref>) holds with κ_Ω, x=max{κ', κ”}.Finally, let us address (local) uniformity in terms of (Ω, x). The number ε>0 subject to <ref>–<ref>–<ref> can be chosen uniformly. The compact set 𝒦_ε⊂Ω in the above proof then stays a uniform d_𝕊-distance away from ∂Ω.It follows that d_Ω(x, g· x) is uniformly bounded for all g∈ F_ε. In fact, the property g∈ F_ε imposes some uniform lower bound on d_𝕊(g· x', ∂Ω) for all x'∈𝔹_𝕊(x,ε). 
The upper bound of (<ref>) then gives a uniform bound on (μ_1-μ_n)(F_ε), F_ε stays within a uniform compact subset of (V). Therefore κ' is also (locally) uniform, and so is κ_Ω, x.§.§ <ref>: No unipotent elementsFor any element g∈(V) preserving a properly convex open subset Ω of (V), we define the translation length of g in Ω to belength_Ω(g) := inf_x∈Ω d_Ω(x,g· x) ≥ 0 ;we say that g achieves its translation length in Ω if this infimum is a minimum. It is easy to check (see <cit.>) that length_Ω(g) = (λ_1-λ_n)(g)/2 (see (<ref>)).Property <ref> of Theorem <ref> is contained in the following more general statement.Let Γ be a discrete subgroup of (V) which is naively convex cocompact in (V) (Definition <ref>), Γ preserves a properly convex open subset Ω of (V) and acts cocompactly on some nonempty closed convex subsetof Ω. Then any element γ∈Γ achieves its translation length in Ω. In particular (see <cit.>), any infinite-order element of Γ lifts to an element of ^±(V) with an eigenvalue of modulus >1, and so Γ does not contain any unipotent element.By Lemma <ref>, up to replacingby some closed uniform neighborhood in (Ω,d_Ω), we may assume thathas nonempty interior. Let γ∈Γ. We havelength_Ω(γ) ≤inf_x∈ d_Ω(x,γ· x) ≤inf_x∈() d_Ω(x,γ· x) ≤length_()(γ),where the last inequality follows from the definition (<ref>) of the Hilbert metric. In fact all these inequalities are equalities since length_Ω(γ) and length_()(γ) are both equal to (λ_1-λ_n)(γ). Thus we only need to check that the infimum R := inf_x∈ d_Ω(x,γ· x) is a minimum.Consider (x_m)∈^ such that d_Ω(x_m,γ· x_m)→ R. For any m, there exists γ_m∈Γ such that γ_m· x_m belongs to some fixed compact fundamental domain for the action of Γ on . Up to passing to a subsequence, (γ_m· x_m)_m∈ converges to some x∈, and d_Ω(x_m,γ· x_m)≤ R+1 for all m, which means that γ_mγγ_m^-1∈Γ sends γ_m· x_m∈ at distance ≤ R+1 from itself. Since the action of Γ on  is properly discontinuous, we deduce that the discrete sequence (γ_mγγ_m^-1)_m∈ is bounded; up to passing to a subsequence, we may therefore assume that it is constant, equal to γ_∞γγ_∞^-1 for some γ_∞∈Γ. We then haveR = lim_m d_Ω(x_m, γ· x_m)= lim_m d_Ω(γ_m· x_m, (γ_mγγ_m^-1)· (γ_m · x_m)) = d_Ω(x, γ_∞γγ_∞^-1· x)= d_Ω(y, γ· y),where y:=γ_∞^-1· x.§.§ <ref>: Stability Property <ref> of Theorem <ref> follows from the equivalence (<ref>) ⇔ (<ref>) of Theorem <ref> and from the work of Cooper–Long–Tillmann (namely <cit.> with empty collection 𝒱 of generalized cusps). Indeed, the equivalence (<ref>) ⇔ (<ref>) of Theorem <ref> states that Γ is convex cocompact in (V) if and only if it acts properly discontinuously and cocompactly on a properly convex subsetof (V) with strictly convex nonideal boundary, and this condition is stable under small perturbations by <cit.>.Let us give a brief sketch of why this is true. The ideas are taken from <cit.> and go back to Koszul <cit.>, who proved this in the case that = ∅. We assume Γ is torsion-free for simplicity. The quotient M = Γ\ is a compact convex projective manifold with boundary, whose boundary ∂ M = Γ\ is locally strictly convex. Let ρ: Γ→(V) be a small deformation of the inclusion map. By the Ehresmann–Thurston principle (see <cit.>), if ρ is a sufficiently small deformation, then it is still the holonomy representation of a projective structure on M. Further, since ∂ M is compact, one easily arranges that ∂ M remains locally strictly convex in the new projective structure. 
To see that this new projective structure is properly convex uses the following observation: A real projective structure on a manifold M with locally strictly convex boundary is properly convex if and only if the tautological line bundle ξ M → M, naturally an affine manifold, admits a strictly convex section. We have such a strictly convex section before deformation, and since M is compact, this section can be controlled if the deformation ρ is small enough. This gives the desired stability result. §.§ <ref>: Inclusion into a larger spaceThe following proposition implies property <ref> of Theorem <ref>, and also provides a converse to it.Let V'=^n' and let i : ^±(V) ↪^±(V ⊕ V') be the natural inclusion, whose image acts trivially on the second factor. Let Γ̂ be a discrete subgroup of ^±(V). Then Γ̂ is convex cocompact in (V) if and only if i(Γ̂) is convex cocompact in (V⊕ V'). The proof of Proposition <ref> builds on the following consequence of Lemma <ref> and Corollary <ref>.Let V'=^n', and let i : ^±(V) ↪^±(V ⊕ V')and j : V ↪ V⊕ V', and j^* : V^* ↪ (V⊕ V')^* ≃ V^* ⊕ (V')^* be the natural inclusions. Let Γ̂ be a discrete subgroup of ^±(V) preserving a nonempty properly convex open subset Ω of (V). *Let 𝒦 be a compact subset of (V⊕ V') that does not meet any projective hyperplane z^*∈ j^*(Ω^*). Then any accumulation point in (V⊕ V') of the i(Γ̂)-orbit of 𝒦 is contained in j(_Ω(Γ)).*Let 𝒦^* be a compact subset of ((V⊕ V')^*) whose elements correspond to projective hyperplanes disjoint from j(Ω) in (V⊕ V'). Then any accumulation point in ((V⊕ V')^*) of the i(Γ̂)-orbit of 𝒦^* is contained in j^*(_Ω^*(Γ)). The two statements are dual to each other, so we only need to prove (<ref>). Let π : (V⊕ V') ∖(V') →(V) be the map induced by the projection onto the first factor of V⊕ V', so that π∘ j is the identity of (V). Note that V' is contained in every projective hyperplane of (V⊕ V') corresponding to an element of j^*(Ω^*), hence π is defined on 𝒦. Consider a sequence (x_m)_m∈ of points of 𝒦 and a sequence (γ_m)_m∈ of pairwise disjoint elements of Γ̂ such that (i(γ_m)· x_m)_m∈ converges in (V⊕ V'). By construction, for any m, the point x_m does not belong to any hyperplane z^*∈ j^*(Ω^*), hence π(x_m) does not belong to any hyperplane in Ω^*, π(x_m)∈Ω. Since Γ̂ acts properly discontinuously on Ω, the point z := lim_m γ_m·π(x_m) is contained in ∂Ω. Up to passing to a subsequence, we may assume that (π(x_m))_m∈⊂π(𝒦) converges to some y ∈Ω. Then d_Ω(γ_m·π(x_m),γ_m· y) = d_Ω(π(x_m),y) → 0, and so γ_m· y→ z by Corollary <ref>. In particular, z ∈_Ω(Γ). Now lift x_m to a vector v_m + v'_m ∈ V ⊕ V', with (v_m)_m∈⊂ V⊕{0} and (v'_m)_m∈⊂{0}⊕ V' bounded. The image of v_m in (V) is π(x_m). By Lemma <ref>, the sequence (i(γ_m)· v_m)_m∈ tends to infinity in V. On the other hand, i(γ_m)· v'_m = v'_m remains bounded in V'. Therefore,lim_m i(γ_m)· x_m = lim_m i(γ_m)· j∘π(x_m) = lim_m j(γ_m·π(x_m)) ∈ j(_Ω(Γ)).In the setting of Lemma <ref>, if Γ acts convex cocompactly on Ω⊂(V), then the set j(Ω) is contained in a Γ-invariant properly convex open subset 𝒪 of (V⊕ V').We argue similarly to the proof of Lemma <ref>. The set j(Ω) is contained in the Γ-invariant open subset𝒪_max := (V⊕ V') ∖⋃_z^*∈ j^*(Ω^*) z^*of (V⊕ V'), which is convex but not properly convex. This set 𝒪_max is the union of all projective lines of (V⊕ V') intersecting both j(Ω) and ({0}× V'). Choose projective hyperplanes H_1, …, H_N of (V⊕ V') bounding an open simplex Δ containing j(Ω). Let 𝒰 := Δ∩𝒪_max. 
Define𝒪 := ⋂_γ∈Γ̂ i(γ)·𝒰.Then j(Ω)⊂𝒪⊂𝒪_max and 𝒪 is properly convex. We claim that 𝒪 is open. Indeed, suppose by contradiction that there exists a point x ∈𝒪. Then there exist (γ_m) ∈Γ̂^ and i ∈{1, …, N} such that γ_m· H_i converges to a hyperplane H_∞ containing x. This is impossible since, by Lemma <ref>.(<ref>), the hyperplane H_∞∈ j^*(_Ω(Γ)) supports the open set 𝒪_max which contains x.Suppose that Γ̂ acts convex cocompactly on some nonempty properly convex open subset Ω of (V). The set j(Ω) is contained in some Γ-invariant properly convex open subset 𝒪 of (V⊕ V') by Lemma <ref>, and _𝒪(i(Γ̂)) = j(_Ω(Γ)) by Lemma <ref>.(<ref>). Since the action of Γ̂ on _Ω(Γ) is cocompact, so is the action of i(Γ̂) on _𝒪(i(Γ̂)) = j(_Ω(Γ)). Therefore i(Γ̂) acts convex cocompactly on 𝒪, and so i(Γ̂) is convex cocompact in (V ⊕ V').Conversely, suppose that i(Γ̂) acts convex cocompactly on some nonempty properly convex open subset Ω' of (V⊕ V'). For any γ∈Γ̂, applying i(γ) to the direct sum V⊕ V' only changes the V factor. Therefore, for any sequence (γ_m)_m∈ of pairwise distinct elements of Γ̂, and any v ∈ V and v' ∈ V' such that [v + v'] ∈Ω', Lemma <ref> implies that the limit of [i(γ_m)(v + v')], if it exists, must lie in (V). This shows that _Ω'(i(Γ̂)) ⊂(V), hence _Ω'(i(Γ̂)) ⊂(V). It follows that Ω := Ω' ∩(V) is a nonempty, Γ̂-invariant, open, properly convex subset of (V) on which the action of Γ̂ is convex cocompact.§.§ <ref>: SemisimplificationWe start with a general observation.Let L and U be two subgroups of G=(V) such that * L∩ U={Id} and L normalizes U, so that the subgroup of (V) generated by L and U is a semidirect product L⋉ U;* there is a sequence (g_m)_m∈ in the centralizer of L in (V) such that g_m u g_m^-1→Id for all u∈ U.Let π : L⋉ U→ L be the natural projection. For any discrete group Γ and any representation ρ : Γ→ L⋉ U, if π∘ρ : Γ→ L is injective and if its image is convex cocompact in (V), then the same holds for ρ.If π∘ρ is injective, then so is ρ. The group π∘ρ(Γ) is a limit of (V)-conjugates of ρ(Γ) and convex cocompactness is open (property <ref> above) and invariant under conjugation. Therefore, if π∘ρ(Γ) is convex cocompact in (V), then so is ρ(Γ). Let Γ be a discrete group and ρ : Γ→(V) a representation. The Zariski closure H of ρ(Γ) in (V) admits a Levi decomposition H=L⋉ R_u(H) where R_u(H) is the unipotent radical of H and L is a reductive subgroup called a Levi factor. This decomposition is unique up to conjugation by R_u(H). We shall use the following terminology.The semisimplification of ρ is the composition ρ^ss of ρ with the projection onto L. The semisimplification ρ^ss : Γ→(V)=G does not depend, up to conjugation by R_u(H), on the choice of the Levi factor L. There is a sequence (g_m)_m∈ in the centralizer of L in H such that g_m u g_m^-1→Id for all u∈ R_u(H), and so ρ^ss is a limit of H-conjugates of ρ. If ρ(Γ) is discrete in (V), then so is ρ^ss(Γ) (see <cit.>). The group ρ^ss(Γ), like the Levi factor containing it, acts projectively on V in a semisimple way: namely, any invariant linear subspace of V admits an invariant complementary subspace.If the semisimplification ρ^ss : Γ→(V) is injective and its image ρ^ss(Γ) is convex cocompact in (V), then the same holds for ρ by Lemma <ref>. 
This proves <ref>, the last remaining property of Theorem <ref>.In the rest of this section, we study further operations that preserve convex cocompactness, strengthening properties <ref> and <ref> of Theorem <ref>.§.§ Convex cocompactness for cocycle deformationsLet V = V_1 ⊕ V_2, let Γ be a discrete group, and let ρ: Γ→^±(V_1) be a representation. A ρ-cocycle is a map u: Γ→(V_2, V_1) satisfying the cocycle conditionu(γ_1γ_2) = u(γ_1) + ρ(γ_1)∘ u(γ_2)for all γ_1,γ_2∈Γ. Any ρ-cocycle udefines a representation ρ^u: Γ→^±(V) = ^±(V_1⊕⁠ V_2) by the formula:ρ^u(γ)(v_1 ⊕ v_2) := (ρ(γ)· v_1 + u(γ)(v_2)) ⊕ v_2∀ v_1 ∈ V_1, v_2 ∈ V_2or in matrix notation: ρ^u(γ) = [ ρ(γ) u(γ);0 Id_V_2 ].In the case that u = 0 is the zero cocycle, the corresponding representation ρ^0 is simply the composition of ρ with the inclusion V_1 ↪ V. The representation ρ^u is conjugate to ρ^0 if and only if u : Γ→(V_2,V_1) is a ρ-coboundary: there exists Φ∈(V_2,V_1) such that u(γ) = Φ - ρ(γ)∘Φ for all γ∈Γ. If u is not a ρ-coboundary, then the representation ρ^u is not semisimple (not conjugate to its semisimplification from Definition <ref>), even if ρ is. Cocycles which are not coboundaries always exist if Γ is a free group. See also Example <ref> below. Let V = V_1 ⊕ V_2, let Γ be a discrete group, let ρ: Γ→^±(V_1) be a representation, let u: Γ→(V_2, V_1) be a ρ-cocycle, and let ρ^u: Γ→^±(V) be the associated representation. Then the following are equivalent: *ρ is injective and ρ(Γ) is convex cocompact in (V_1),*ρ^u is injective and ρ^u(Γ) is convex cocompact in (V). We first check the implication (<ref>) ⇒ (<ref>). Suppose ρ is injective and ρ(Γ) is convex cocompact in (V_1). Then ρ^u is clearly injective. The composition of ρ with the inclusion of V_1 into V = V_1 ⊕ V_2,γ⟼[ ρ(γ)0;0 Id_V_2 ]∈^±(V)is injective and by property <ref> of Theorem <ref> (proved in Section <ref> above), its image is convex cocompact in (V).It then follows that ρ^u(Γ) is convex cocompact in (V) by applying Lemma <ref> with L the block diagonal subgroup, U the block upper triangular subgroup, (g_m)_m∈ a sequence of block scalar matrices which contract V_1 relative to V_2.We now check the implication (<ref>) ⇒ (<ref>). Suppose ρ^u is injective and ρ^u(Γ) is convex cocompact in (V). By property <ref> of Theorem <ref> (proved in Section <ref> above), ρ^u(Γ) does not contain any unipotent element, hence ρ is also injective. Let Ω⊂(V) be a nonempty properly convex open set on which ρ^u(Γ) acts convex cocompactly. For any γ∈Γ, applying ρ^u(γ) to the direct sum V = V_1 ⊕ V_2 only changes the V_1 factor. Therefore, for any sequence (γ_m)_m∈ of pairwise distinct elements of Γ, and any v_1 ∈ V_1, v_2 ∈ V_2 such that [v_1 ⊕ v_2] ∈Ω, Lemma <ref> implies that the limit of [ρ(γ_m)(v_1 ⊕ v_2)], if it exists, must lie in (V_1). This shows that _Ω(ρ^u(Γ)) ⊂(V_1), hence _Ω(ρ^u(Γ)) ⊂(V_1). It follows that Ω_1 := Ω∩(V_1) is a nonempty, ρ^u(Γ)-invariant, open, properly convex subset of (V_1) on which the restricted action, namely ρ, of Γ is convex cocompact.We briefly examine the special case where V_1 = ^3, where V_2 = ^1, where Γ is isomorphic to the fundamental group of a closed orientable surface S_g of genus g ≥ 2, and where ρ: Γ→(V_1) is discrete, injective, with image in a special orthogonal group (2,1) of signature (2,1). Then ρ represents a point of the Teichmüller space 𝒯(S_g) and the space of cohomology classes of cocycles u : Γ→(V_2,V_1)≃ V identifies with the tangent space to 𝒯(S_g) at Γ (see <cit.>) and has dimension 6g-6. 
(1) By the implication (<ref>) ⇒ (<ref>) of Proposition <ref>, for any ρ-cocycle u, the associated representation ρ^u is injective and ρ^u(Γ) preserves and acts convex cocompactly on a nonempty properly convex open subset Ω of (^3 ⊕^1). The ρ^u(Γ)-invariant convex open set Ω_max⊃Ω given by Proposition <ref>.(<ref>) turns out to be properly convex as soon as u is not a coboundary. It can be described as follows. The group ρ^u(Γ) preserves an affine chart U ≃^3 of (^3 ⊕^1) and a flat Lorentzian metric on U. The action on U is not properly discontinuous. However, Mess <cit.> described two maximal globally hyperbolic domains of discontinuity, called domains of dependence, one oriented to the future and one oriented to the past. The domain Ω_max is the union of the two domains of dependence plus a copy of the hyperbolic plane ^2 in (^3⊕ 0). We have that _Ω_max(ρ^u(Γ)) ⊂∂^2 ⊂(^3 ⊕ 0) and _Ω_max(ρ^u(Γ)) ⊂^2. Note that Ω_max depends on u but _Ω_max(ρ^u(Γ)) does not.(2) By Theorem <ref>.<ref>, the group ρ^u(Γ) also acts convex cocompactly on a nonempty properly convex open subset Ω^* of the dual projective space ((^3 ⊕)^*). In this case the ρ^u(Γ)-invariant convex open set (Ω^*)_max⊃Ω^*, given by Proposition <ref>.(<ref>) applied to Ω^*, is not properly convex: it is the suspension HP^3 of the dual copy (^2)^* ⊂((^3)^*) of the hyperbolic plane with the point ((^1)^*). The convex set HP^3 ⊂((^3 ⊕)^*) is the projective model for half-pipe geometry, a transitional geometry lying between hyperbolic geometry and anti-de Sitter geometry (see <cit.>). Note that while (Ω^*)_max = HP^3 does not depend on u, the convex core _Ω^*(ρ^u(Γ)) does depend on u. The full orbital limit set _Ω^*(ρ^u(Γ)) = Ω^*∩∂HP^3 is not contained in a hyperplane if u is not a coboundary. Indeed, _Ω^*(ρ^u(Γ)) may be thought of as a rescaled limit of the collapsing convex cores for quasi-Fuchsian subgroups of (3,1) (or of (2,2)) which converge to the Fuchsian group ρ(Γ). This situation was described in some detail by Kerckhoff in various lectures during the period 2010-2012 about the work <cit.> (still in preparation); this included a lecture at the 2010 workshop on Geometry, topology, and dynamics of character varieties at the National University of Singapore.§.§ Convex cocompactness on quotient spacesHere is a consequence of Proposition <ref> and Property <ref> of Theorem <ref>.Let Γ be a discrete subgroup of ^±(V) acting trivially on some linear subspace V_0 of V. Then the following are equivalent: *Γ is convex cocompact in (V),*the induced representation Γ→^±(V/V_0) is injective and its image is convex cocompact in (V/V_0). Fix a complement W to V_0 in V so that V = W ⊕ V_0. In a basis ℬ respecting this decomposition, each element γ∈Γ has block matrix form[γ]_V_0 ⊕ W = [A_γ0; B_ γ Id_V_0 ].Note that the induced representation Γ→^±(V/V_0) is equivalent to the representation γ↦ A_γ on W_0.Consider the dual action of Γ on (V^*). There is a dual splitting V^* = W^* ⊕ V_0^*, where W^* and V_0^* are the subspaces of V^* consisting of the linear functionals that vanish on V_0 and W, respectively. The basis ℬ^* dual to the basis ℬ above respects this splitting. In this basis, the action of Γ on V^* has block matrix form[γ ]_W^* ⊕ V_0^* = [^tA_γ^tB_γ;0 Id_V_0^* ].Hence the dual action on V^* is of the form ρ^u where ρ: Γ→^±(V^*) is the dual action and u: Γ→(V_0^*, W^*) is the ρ-cocycle such that u(γ) has matrix ^tB_γ. By Property <ref> of Theorem <ref>, the group Γ is convex cocompact in (V) if and only if it is convex cocompact in (V^*). 
By Proposition <ref>, this is equivalent to the representation γ↦^tA_γ being injective with convex cocompact image in (W^*). By Property <ref> again, this is equivalent to the representation γ↦ A_γ being injective with convex cocompact image in (W). Since the representation γ↦ A_γ on W is equivalent to the induced representation Γ→^±(V/V_0), the proof is complete.§ CONVEX COCOMPACTNESS IN ^P,Q-1Fix p,q∈^*. Recall from Section <ref> that the projective space (^p+q) is the disjoint union of^p,q-1 = {[v]∈(^p,q) | ⟨ v,v⟩_p,q<0},of ^p-1,q = {[v]∈(^p,q) | ⟨ v,v⟩_p,q>0}, and of∂^p,q-1 = ∂^p-1,q = {[v]∈(^p,q) | ⟨ v,v⟩_p,q=0}.For instance, Figure <ref> shows(^4) = ^3,0⊔(∂^3,0 = ∂^2,1) ⊔^2,1and(^4) = ^2,1⊔(∂^2,1 = ∂^1,2) ⊔^1,2. As explained in <cit.> or <cit.>, the space ^p,q-1 has a natural (p,q)-invariant pseudo-Riemannian structure of constant negative curvature, for which the geodesic lines are the intersections of ^p,q-1 with projective lines in (^p+q). Such a line is called spacelike (lightlike, timelike) if it meets ∂_^p,q-1 in two (one, zero) points: see Figure <ref>.An element g∈(p,q) is proximal in (^p+q) (Definition <ref>) if and only if it admits a unique attracting fixed point ξ_g^+ in ∂^p,q-1, in which case we shall say that g is proximal in ∂^p,q-1. In particular, for a discrete subgroup Γ of (p,q)⊂(^p+q), the proximal limit set Λ_Γ of Γ in (^p+q) (Definition <ref>) is contained in ∂^p,q-1, and called the proximal limit set of Γ in ∂^p,q-1. In this section we prove Theorems <ref> and <ref>.For Theorem <ref>, the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> follow from the equivalences <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> ⇔ <ref> by replacing the symmetric bilinear form ⟨·,·⟩_p,q by -⟨·,·⟩_p,q≃⟨·,·⟩_q,p: see <cit.>. The implications (<ref>) ⇒ (<ref>), <ref> ⇒ <ref>, and <ref> ⇒ <ref> hold by definition. The implication <ref> ⇒ <ref> is contained in the forward implication of Theorem <ref>, which has been established in Section <ref>. Here we shall prove the implications (<ref>) ⇒ (<ref>) ⇒ (<ref>) and <ref> ⇒ <ref> ⇒ <ref> ⇒ <ref>. We note that (<ref>) ⇒ (<ref>) is contained in <ref> ⇒ <ref> and <ref> ⇒ <ref>, and that (<ref>) ⇒ (<ref>) implies <ref> ⇒ <ref> and <ref> ⇒ <ref>. §.§ Proof of the implication (<ref>) ⇒ (<ref>) in Theorem <ref>We start with the following general lemma.Let Γ be a discrete subgroup of (p,q) preserving a nonempty properly convex open subset Ω of (^p+q). If z∈_Ω is an extreme point of ∂Ω, then z^⊥ is a supporting hyperplane to Ω at z. In particular (Corollary <ref>), if the action of Γ on Ω is convex cocompact and if z∈_Ω is an extreme point of _Γ(Ω), then z^⊥ is a supporting hyperplane to Ω at z.Since z∈_Ω, there exist x∈Ω and (γ_m)∈Γ^ such that γ_m· x→ z. If z is an extreme point of ∂Ω, then we actually have γ_m· x→ z for all x∈Ω, by Lemma <ref>.(<ref>).Up to replacing the symmetric bilinear form ⟨·,·⟩_p,q by its negative, we may assume that p≥ q, so that (p,q) has real rank q. Choose a basis (e_1,…,e_p+q) of ^p+q in which the quadratic form ⟨·,·⟩_p,q takes the form 2 v_1 v_p+q + ⋯ + 2 v_q v_p+1 + v_q+1^2 + ⋯ + v_p^2. Similarly to Section <ref>, the Cartan decomposition(p,q) = Kexp(𝔞^+) K holds, whereK≃P((p)×(q)) is the stabilizer ofspan(e_1+e_p+q,…,e_q+e_p+1,e_q+1,…,e_p) ≃^p,0 andspan(e_1-e_p+q,…,e_p-e_q+1) ≃^0,q and where𝔞^+ := {[ {diag(t_1,…,t_q,0,…,0,-t_q,…,-t_1)  |  t_1≥…≥ t_q≥ 0} if p>q;; {diag(t_1,…,t_q,0,…,0,-t_q,…,-t_1)  |  t_1≥…≥ t_q-1≥ |t_q|} if p=q. 
].This means that any g∈(p,q) may be written g = kexp(a)k' for some k,k'∈K and a unique a=diag(t_1,…,t_p+q)∈𝔞^+ with t_i = -t_p+q-i for all i; the real number t_i is the logarithm μ_i(g) of the i-th largest singular value of (any representative in (p,q) of) g.For any m we can write γ_m = k_m a_m k'_m ∈Kexp(𝔞^+) K. Up to extracting, we may assume that (k_m)_m∈ converges to some k∈K and (k'_m)_m∈ to some k'∈K. Arguing exactly as in the proof of Lemma <ref>, the fact that γ_m· x→ z for all x∈Ω implies that z = k· [e_1] and that (μ_1-μ_2)(γ_m)→ +∞. Therefore (μ_p+q-1-μ_p+q)(γ_m) = (μ_1-μ_2)(γ_m)→ +∞. As in the proof of Lemma <ref>.(<ref>), we then obtain that γ_m· x^*→ k·(span(e_1,…,e_p+q-1) = z^⊥ for all projective hyperplanes x^* of (^p+q) that do not contain k'^-1· [e_p+q]. In particular, γ_m· x^*→ z^⊥ for all x^* in a dense open subset of Ω^*, hence z^⊥∈_Ω^*(Γ) ⊂∂Ω^*. This implies that z^⊥ is a supporting hyperplane to Ω at z.Suppose Γ acts convex cocompactly on some nonempty properly convex open subset Ω of (^p+q). Let us prove that Γ is strongly convex cocompact in (^p+q).It is sufficient to check that _Ω(Γ) is contained in ^p,q-1 or in (^p+q) ∖^p,q-1≃^q,p-1. Indeed, suppose this is the case. By Corollaries <ref> and <ref>.(<ref>), we have _Ω(Γ) = _Ω(Γ) ⊂∂^p,q-1. Therefore, if _Ω(Γ) contained a PET, with vertices [a], [b], [c], then the edges of the PET would lie in ∂^p,q-1 and we would have ⟨ a,b⟩_p,q = ⟨ b,c⟩_p,q = ⟨ c,a⟩_p,q = 0. Therefore the projective plane (span(a,b,c)) would be entirely contained in ∂^p,q-1: contradiction since the PET is contained in this projective plane and in _Ω(Γ). This shows that if _Ω(Γ) is contained in ^p,q-1 or in (^p+q) ∖^p,q-1≃^q,p-1, then _Ω(Γ) does not contain any PET, and so Γ is strongly convex cocompact in (^p+q) by Theorem <ref>.Let us now show that _Ω(Γ) is contained in ^p,q-1 or in (^p+q) ∖^p,q-1≃^q,p-1.Up to replacing Γ by an index-two subgroup, we may assume that it lifts to a subgroup of (p,q), still denoted by Γ. Then the Γ-invariant properly convex open subset Ω of (^p+q) lifts to a Γ-invariant properly convex open cone Ω of ^p+q, and _Ω(Γ) lifts to a Γ-invariant closed cone Λ^𝗈𝗋𝖻_Ω(Γ) of ^p+q contained in the boundary of Ω.Let z∈_Ω(Γ) be an extreme point of _Γ(Ω). Let Λ be the closure of the Γ-orbit of z in ∂Ω, and let Λ be the preimage of Λ in Λ^𝗈𝗋𝖻_Ω(Γ).Firstly, we claim that the setS := {⟨ỹ,ỹ'⟩_p,q | ỹ∈Λ, ỹ'∈Λ^𝗈𝗋𝖻_Ω(Γ)}is contained in _≥ 0 or in _≤ 0, and cannot be reduced to {0}. Indeed, we know from Lemma <ref> that z^⊥ is a supporting hyperplane to Ω at z. Therefore the set {⟨z̃,ỹ'⟩_p,q |ỹ'∈⁠Λ^𝗈𝗋𝖻_Ω(Γ)} (and more generally the set {⟨z̃,ỹ'⟩_p,q | ỹ'∈Ω}) is contained in _≥ 0 or in _≤ 0. This set is equal to {⟨ỹ,ỹ'⟩_p,q | ỹ'∈Λ^𝗈𝗋𝖻_Ω(Γ)} for any ỹ∈Γ·z̃, hence for any ỹ∈Λ. Therefore it is equal to S. This set is not {0}, for otherwise _Ω(Γ), hence _Ω(Γ), would be contained in z^⊥, contradicting the fact that z^⊥ is a supporting hyperplane to Ω at z.Since S≠{0}, we can find z'∈_Ω(Γ) which is an extreme point of _Γ(Ω) and which lifts to z̃'∈Λ^𝗈𝗋𝖻_Ω(Γ) with ⟨z̃,z̃'⟩_p,q≠ 0. Let Λ' be the closure of the Γ-orbit of z' in ∂Ω, and letbe the convex hull of Λ∪Λ' inside Ω: it is a Γ-invariant closed convex subset of Ω.We claim thatis nonempty. Indeed, otherwise the convex hull of Λ∪Λ' in Ω would be contained in ∂Ω, hence in _Ω(Γ) = _Ω(Γ), hence in ∂^p,q-1 by Corollary <ref>. This would contradict the fact that ⟨z̃,z̃'⟩_p,q≠ 0. Thereforeis nonempty.We claim thatdoes not meet ∂^p,q-1. Indeed, consider x ∈ (∩∂^p,q-1) ∖ (Λ∪Λ'). 
Let Λ' be the preimage of Λ' in Λ^𝗈𝗋𝖻_Ω. Since x ∈∖ (Λ∪Λ'), we can lift it to x̃ = ∑_i=1^m t_i ỹ_i + ∑_i=1^m' t'_i ỹ'_i, where 1≤ m,m'≤ p+q and where t_i,t'_i>0 and ỹ_i∈Λ and ỹ'_i∈Λ' for all i. Since x ∈∂^p,q-1 and Λ,Λ' ⊂∂^p,q-1, we have0 = ⟨x̃,x̃⟩_p,q = ∑_1≤ i≤ m, 1≤ j≤ m' t_i t'_j⟨ỹ_i,ỹ'_j⟩_p,q.Moreover, all the ⟨ỹ_i,ỹ'_j⟩_p,q belong to S, which is contained in _≥ 0 or in _≤ 0. Therefore we must have ⟨ỹ_i,ỹ'_j⟩_p,q = 0 for all i,j. In particular, x ∈ y_1^⊥, where y_1∈Λ is the projection of ỹ_1∈Λ. But we have seen that y_1^⊥ is a supporting hyperplane to Ω at y_1 (by Lemma <ref>). Therefore x cannot be contained in ⊂Ω. This shows thatdoes not meet ∂^p,q-1.We deduce thatis contained in ^p,q-1 or in (^p+q) ∖^p,q-1≃^q,p-1.By minimality of _Ω(Γ) (Lemma <ref>), we have = _Ω(Γ). Thus _Ω(Γ) is contained in ^p,q-1 or in (^p+q) ∖^p,q-1≃^q,p-1, which completes the proof. §.§ Proof of the implication <ref> ⇒ <ref> in Theorem <ref>Suppose that Γ is word hyperbolic, that the natural inclusion Γ↪(p,q) is P_1^p,q-Anosov, and that the proximal limit set Λ_Γ⊂∂^p,q-1 is negative.By definition of negativity, the set Λ_Γ lifts to a cone Λ_Γ of ^p,q∖{ 0} on which all inner products ⟨·,·⟩_p,q of noncollinear points are negative.The setΩ_max := ({ v∈^p,q | ⟨ v,v'⟩_p,q<0∀ v'∈Λ_Γ})is a connected component of (^p,q) ∖⋃_z∈Λ_Γ z^⊥ which is open and convex. It is nonempty because it contains [v_1+v_2] for any noncollinear v_1,v_2∈Λ_Γ. Note that if Λ_Γ is another cone of ^p,q∖{0} lifting Λ_Γ on which all inner products ⟨·,·⟩_p,q of noncollinear points are negative, then either Λ_Γ = Λ_Γ or Λ_Γ = -Λ_Γ (using negativity). Thus Ω_max is well defined independently of the lift Λ_Γ, and so Ω_max is Γ-invariant because Λ_Γ is.By Proposition <ref>, the group Γ acts convex cocompactly on some non­empty, properly convex open subset Ω⊂Ω_max, and _Ω(Γ)=Λ_Γ. In particular, the convex hull _Ω(Γ) of Λ_Γ in Ω has compact quotient by Γ.As in <cit.>, this convex hull _Ω(Γ) is contained in ^p,q-1: indeed, any point of _Ω(Γ) lifts to a vector of the form v = ∑_i=1^k t_i v_i with k≥ 2, where v_1, …, v_k ∈Λ_Γ are distinct and t_1, …, t_k >0; for all i≠ j we have ⟨ v_i,v_i⟩_p,q=0 and ⟨ v_i,v_j⟩_p,q<0 by negativity of Λ_Γ, hence ⟨ v,v⟩_p,q<0.Since _Ω(Γ) has compact quotient by Γ, it is easy to check that for small enough r > 0 the open uniform r-neighborhood 𝒰 of _Ω(Γ) in (Ω,d_Ω) is contained in ^p,q-1 and properly convex (see <cit.>). The full orbital limit set _𝒰(Γ) is a nonempty closed Γ-invariant subset of _Ω(Γ), hence its convex hull _𝒰(Γ) in 𝒰 is a nonempty closed Γ-invariant subset of _Ω(Γ). (In fact _𝒰(Γ)=_Ω(Γ) by Lemma <ref>.) The nonempty set _𝒰(Γ) has compact quotient by Γ. Thus Γ acts convex cocompactly on 𝒰⊂^p,q-1, which completes the proof. §.§ Proof of the implication <ref> ⇒ <ref> in Theorem <ref>Suppose that Γ acts convex cocompactly on some nonempty properly convex open subset Ω of ^p,q-1. By the implication (<ref>) ⇒ (<ref>) of Theorem <ref> (proved in Section <ref> above), Γ is strongly convex cocompact, and so Λ_Γ is transverse by the forward implication of Theorem <ref> (proved in Section <ref> above).The set Λ_Γ is contained in _Ω(Γ). By Lemma <ref> and Corollary <ref>.(<ref>), the convex hull of Λ_Γ in Ω is the convex hull _Ω(Γ) of _Ω(Γ) in Ω, and the ideal boundary _Ω(Γ) is equal to _Ω(Γ). By Corollary <ref>, this set is contained in ∂^p,q-1. 
It follows that Λ_Γ = _Ω(Γ): indeed, otherwise _Ω(Γ) would contain a nontrivial segment between two points of Λ_Γ, this segment would be contained in ∂^p,q-1, and this would contradict that Λ_Γ is transverse. Thus _Ω(Γ) is a closed properly convex subset of ^p,q-1 on which Γ acts properly discontinuously and cocompactly, and whose ideal boundary does not contain any nontrivial projective line segment.It could be the case that _Ω(Γ) has empty interior. However, for any r > 0 the closed uniform r-neighborhood _r of _Ω(Γ) in (Ω,d_Ω) has nonempty interior, and is still properly convex with compact quotient by Γ. By Corollary <ref>.(<ref>), we have _r = _Ω(Γ), hence _r does not contain any nontrivial projective line segment. This shows that Γ is ^p,q-1-convex cocompact, which completes the proof. §.§ Proof of the implication (<ref>) ⇒ (<ref>) in Theorem <ref> The proof relies on the following proposition, which is stated in <cit.> for Γ acting irreducibly on (^p+q). The proof for general Γ is literally the same; we recall it for the reader's convenience.For p,q∈^*, let Γ be a discrete subgroup of (p,q) preserving a nonempty properly convex open subset Ω of (^p,q). Let Λ_Γ⊂∂^p,q-1 be the proximal limit set of Γ (Definition <ref> and Remark <ref>). If Λ_Γ contains at least two points and is transverse, and if the action of Γ on Λ_Γ is minimal (every orbit is dense), then Λ_Γ is negative or positive. Recall from Remark <ref> that if the action of Γ on (V) is irreducible and Λ_Γ≠∅, then the action of Γ on Λ_Γ is always minimal. Suppose that Λ_Γ contains at least two points and is transverse, and that the action of Γ on Λ_Γ is minimal. By Proposition <ref>.(<ref>), the sets Ω and Λ_Γ lift to cones Ω and Λ_Γ of V∖{0} with Ω properly convex containing Λ_Γ in its boundary, and Ω^* and Λ_Γ^* lift to cones Ω^* and Λ_Γ^* of V^*∖{0} with Ω^* properly convex containing Λ_Γ^* in its boundary, such that φ(v)≥ 0 for all v∈Λ_Γ and φ∈Λ_Γ^*. By Remark <ref>, the group Γ lifts to a discrete subgroup Γ̂ of (p,q) preserving Ω (hence also Λ_Γ, Ω^*, and Λ_Γ^*). Note that the map ψ : v↦⟨ v,·⟩_p,q from ^p,q to (^p,q)^* induces a homeomorphism Λ_Γ≃Λ_Γ^*. For any v∈Λ_Γ we have ψ(v)∈Λ_Γ^*∪ -Λ_Γ^*. Let F^+ (F^-) be the subcone of Λ_Γ consisting of those vectors v such that ψ(v)∈Λ_Γ^* (ψ(v)∈ -Λ_Γ^*). By construction, we have v∈ F^+ if and only if ⟨ v,v'⟩_p,q≥ 0 for all v'∈Λ_Γ; in particular, F^+ is closed in Λ_Γ and Γ̂-invariant. Similarly, F^- is closed and Γ̂-invariant. The sets F^+ and F^- are disjoint since Λ_Γ contains at least two points and is transverse. Thus F^+ and F^- are disjoint, Γ̂-invariant, closed subcones of Λ_Γ, whose projections to (^p,q) are disjoint, Γ-invariant, closed subsets of Λ_Γ. Since the action of Γ on Λ_Γ is minimal, Λ_Γ is the smallest nonempty Γ-invariant closed subset of (^p,q), and so {F^+, F^-}={Λ_Γ, ∅}. If Λ_Γ=F^+ then Λ_Γ is nonnegative, hence positive by transversality. Similarly, if Λ_Γ=F^- then Λ_Γ is negative.Suppose Γ is strongly convex cocompact in (^p+q). By the implication <ref> ⇒ <ref> of Theorem <ref>, the group Γ is word hyperbolic and the natural inclusion Γ↪(p,q) is P_1^p,q-Anosov. If #Λ_Γ> 2, then the action of Γ on ∂_∞Γ, hence on Λ_Γ, is minimal, and so Proposition <ref> implies that the set Λ_Γ is negative or positive. This last conclusion also holds, vacuously, if #Λ_Γ≤ 2. 
Thus the implications <ref> ⇒ <ref> and <ref> ⇒ <ref> of Theorem <ref> (proved in Sections <ref> and <ref> just above) show that Γ is ^p,q-1-convex cocompact or ^q,p-1-convex cocompact.§.§ Proof of the implication <ref> ⇒ <ref> in Theorem <ref>Suppose Γ⊂(p,q) is ^p,q-1-convex cocompact, it acts properly discontinuously and cocompactly on a closed convex subsetof ^p,q-1 such thathas nonempty interior anddoes not contain any nontrivial projective line segment. We shall first show that Γ satisfies condition <ref> of Theorem <ref>: namely, Γ preserves a nonempty properly convex open subset Ω of (^p,q) and acts cocompactly on a closed convex subset ' of Ω with nonempty interior, such that ' does not contain any nontrivial segment. One naive idea would be to take Ω=() and ' to be the convex hull _0 ofin  (or some small thickening), but _0 might not be contained in () (if =_0). So we must find a larger open set Ω. Lemma <ref> below implies that the Γ-invariant convex open subset Ω_max⊃() of Proposition <ref> contains . If Ω_max is properly convex (if the action of Γ on (V) is irreducible), then we may take Ω=Ω_max and '=. However, Ω_max might not be properly convex; we shall show (Lemma <ref>) that the Γ-invariant properly convex open set Ω = ()^* (realized in the same projective space via the quadratic form) contains _0 (though possibly not ) and we shall take ' to be the intersection ofwith a neighborhood of _0 in Ω.The following key observation is similar to <cit.>.For p,q∈^*, let Γ be a discrete subgroup of (p,q) acting properly discontinuously and cocompactly on a nonempty properly convex closed subsetof ^p,q-1. Thendoes not meet any hyperplane z^⊥ with z∈. In other words, any point of  sees any point ofin a spacelike direction.Suppose by contradiction thatmeets z^⊥ for some z∈. Then z^⊥ contains a ray [y,z) ⊂. Let (x_m)_m∈ be a sequence of points of [y,z) converging to z (see Figure <ref>). Since Γ acts cocompactly on , for any m there exists γ_m∈Γ such that γ_m· x_m belongs to a fixed compact subset of . Up to taking a subsequence, the sequences (γ_m· x_m)_m and (γ_m· y)_m and (γ_m· z)_m converge respectively to some points x_∞, y_∞, z_∞ in (^p,q). We have x_∞∈ and y_∞∈ (because the action of Γ on  is properly discontinuous) and z_∞∈ (because =()∩∂^p,q-1 is closed in (^p,q)). The segment [y_∞,z_∞] is contained in z_∞^⊥, hence its intersection with ^p,q-1 is contained in a lightlike geodesic and can meet ∂^p,q-1 only at z_∞. Therefore y_∞=z_∞ and the closure ofin (^p,q) contains a full projective line, contradicting the proper convexity of .In the setting of Lemma <ref>, supposehas nonempty interior. Then the convex hull _0 ofin  is contained in the Γ-invariant properly convex open setΩ := { x∈(^p,q)  |  x^⊥∩ = ∅}.Note that Ω is the dual of () realized in ^p,q (rather than (^p,q)^*) via the symmetric bilinear form ⟨·,·⟩_p,q. The properly convex set ⊂^p,q-1 lifts to a properly convex coneof ^p,q∖{0} such that ⟨ v,v⟩_p,q<0 for all v∈. We denote by _0⊂ the preimage of _0. The ideal boundarylifts to the intersection ⊂^p,q∖{0} of the closureofwith the null cone of ⟨·,·⟩_p,q minus {0}. For any v ∈ and v' ∈, we have ⟨ v,v' ⟩_p,q≤ 0: indeed, this is easily seen by considering tv+v' ∈^p,q, which for small t>0 must belong tohence have nonpositive norm; see also <cit.>. By Lemma <ref> we have in fact ⟨ v, v'⟩_p,q < 0 for all v∈ and v'∈. In particular, this holds for v∈_0. Now consider v∈_0 and v' ∈. 
Since _0 is the convex hull ofin , we may write v = ∑_i=1^k t_i v_i where v_1, …, v_k ∈ and t_1, …, t_k >0. By Lemma <ref> we have ⟨ v_i,v' ⟩_p,q < 0 for all i, hence ⟨ v, v'⟩_p,q < 0. This proves that _0 is contained in Ω. The open set Ω is properly convex becausehas nonempty interior. In the setting of Lemma <ref>, the group Γ acts cocompactly on some closed properly convex subset ' of Ω with nonempty interior which is contained in ⊂^p,q-1, with ' =.Since the action of Γ on _0 is cocompact, it is easy to check that for any small enough r > 0 the closed uniform r-neighborhood _r of _0 in (Ω, d_Ω) is contained in ^p,q-1 (see <cit.>). The set ' := _r ∩ is then a closed properly convex subset of Ω with nonempty interior, and Γ acts properly discontinuously and cocompactly on '. Since ' is also a closed subset of ^p,q-1, we have ' = '∩∂^p,q-1 = _r ∩ =. Ifdoes not contain any nontrivial segment, then neither does ' for ' as in Lemma <ref>. This proves that if Γ is ^p,q-1-convex cocompact (satisfies <ref> of Theorem <ref>, (<ref>) up to switching p and q), then it satisfies condition <ref> of Theorem <ref>, as announced.Now, Theorem <ref> states that <ref> is equivalent to strong convex cocompactness: this yields the implication (<ref>) ⇒ (<ref>) of Theorem <ref>. Condition <ref> of Theorem <ref>, equivalent to <ref>, also says that the group Γ is word hyperbolic and that the natural inclusion Γ↪(p,q) is P_1^p,q-Anosov. Since in addition the proximal limit set Λ_Γ is the ideal boundary ', we find that Λ_Γ is negative. This completes the proof of the implication <ref> ⇒ <ref> in Theorem <ref>.§.§ Proof of Theorem <ref> The implications (<ref>) ⇒ (<ref>) ⇒ (<ref>) of Theorem <ref> hold trivially. We now prove (<ref>) ⇔ (<ref>) ⇒ (<ref>). We start with the following observation.For p,q∈^*, let Γ be an infinite discrete subgroup of (p,q) acting properly discontinuously and cocompactly on a closed convex subset of ^p,q-1 with nonempty interior. Thenhas bisaturated boundary if and only ifdoes not contain any infinite geodesic line of ^p,q-1.Supposecontains an infinite geodesic line of ^p,q-1. Sinceis properly convex and closed in ^p,q-1, this line must meet ∂^p,q-1 in a point of , and sodoes not have bisaturated boundary.Conversely, supposedoes not have bisaturated boundary. Since =()∩∂^p,q-1 is closed in (^p,q), there exists a ray [y,z) ⊂ terminating at a point z ∈. Let (a_m)_m∈ be a sequence of points of [y,z) converging to z (see Figure <ref>). Since Γ acts cocompactly on , for any m there exists γ_m∈Γ such that γ_m· a_m belongs to a fixed compact subset of . Up to taking a subsequence, the sequences (γ_m· a_m)_m and (γ_m· y)_m and (γ_m· z)_m converge respectively to some points a_∞, y_∞, z_∞ in (V). We have a_∞∈ and y_∞∈ (because the action of Γ on  is properly discontinuous) and z_∞∈ (becauseis closed). Moreover, a_∞∈ (y_∞,z_∞). Thus (y_∞,z_∞) is an infinite geodesic line of ^p,q-1 contained in . Suppose condition (<ref>) of Theorem <ref> holds, Γ<(p,q) is ^p,q-1-convex cocompact. By Corollary <ref>, the group Γ preserves a properly convex open subset Ω of (^p,q) and acts cocompactly on a closed convex subset ' of Ω with nonempty interior which is contained in ^p,q-1, and whose ideal boundary ' does not contain any nontrivial segment. As in the proof of Corollary <ref>, for small enough r > 0 the closed uniform r-neighborhood '_r of ' in (Ω, d_Ω) is contained in ^p,q-1. By Lemma <ref> and Corollary <ref>.(<ref>), we have '_r = ' = _Ω(Γ), hence '_r has bisaturated boundary by Lemma <ref>. 
In particular, '_r does not contain any infinite geodesic line of ^p,q-1 by Lemma <ref>. Thus condition (<ref>) of Theorem <ref> holds, with _𝖻𝗂𝗌𝖺𝗍 = '_r.Conversely, suppose condition (<ref>) of Theorem <ref> holds, Γ acts properly discontinuously and cocompactly on a closed convex subset _𝖻𝗂𝗌𝖺𝗍 of ^p,q-1 such that _𝖻𝗂𝗌𝖺𝗍 does not contain any infinite geodesic line of ^p,q-1. By Lemma <ref>, the set _𝖻𝗂𝗌𝖺𝗍 has bisaturated boundary. By Corollary <ref>, the group Γ acts convex cocompactly on Ω := (_𝖻𝗂𝗌𝖺𝗍) and _Ω(Γ) = _𝖻𝗂𝗌𝖺𝗍. By Lemma <ref>, the group Γ acts cocompactly on a closed convex subset _𝗌𝗆𝗈𝗈𝗍𝗁⊃_Ω(Γ) of Ω whose nonideal boundary is strictly convex and C^1, and _𝗌𝗆𝗈𝗈𝗍𝗁 = _Ω(Γ) ⊂∂^p,q-1 (see Corollary <ref>.(<ref>)). In particular, _𝗌𝗆𝗈𝗈𝗍𝗁 is closed in ^p,q-1 and condition (<ref>) of Theorem <ref> holds. Since _𝖻𝗂𝗌𝖺𝗍 has bisaturated boundary, any inextendable segment in _𝖻𝗂𝗌𝖺𝗍 is inextendable in ∂Ω = _𝖻𝗂𝗌𝖺𝗍∪_𝖻𝗂𝗌𝖺𝗍. By Lemma <ref>, if _𝗌𝗆𝗈𝗈𝗍𝗁 = _𝖻𝗂𝗌𝖺𝗍 contained a nontrivial segment, then _𝗌𝗆𝗈𝗈𝗍𝗁 would contain a PET, but that is not possible since closed subsets of ^p,q-1 do not contain PETs; see Remark <ref> below. Therefore _𝗌𝗆𝗈𝗈𝗍𝗁 = _𝖻𝗂𝗌𝖺𝗍 does not contain any nontrivial segment, and so Γ is ^p,q-1-convex cocompact, condition (<ref>) of Theorem <ref> holds. This concludes the proof of Theorem <ref>.In the proof we have used the following elementary observation.For p,q∈^* with p+q≥ 3, a closed subset of ^p,q-1 cannot contain a PET. Indeed, any triangle of (^p,q) whose edges lie in ∂^p,q-1 must have interior in ∂^p,q-1, because in this case the symmetric bilinear form is zero on the projective span.§.§ ^p,q-1-convex cocompact groups whose boundary is a (p-1)-sphere In what follows we use the term standard (p-1)-sphere in ∂^p,q-1 to mean the intersection with ∂^p,q-1 of the projectivization of a (p+1)-dimensional subspace of ^p+q of signature (p,1); by an embedding we mean a map which is a homeomorphism onto its image.For p,q≥ 1, let Γ be an infinite discrete subgroup of (p,q) which is ^p,q-1-convex cocompact, acting properly discontinuously and cocompactly on some closed convex subsetof ^p,q-1 with nonempty interior whose ideal boundary ⊂∂^p,q-1 does not contain any nontrivial projective line segment. Let ∂_∞Γ be the Gromov boundary of Γ, and let 𝒮 be any standard (p-1)-sphere in ∂^p,q-1. Then *the Γ-equivariant boundary map ξ : ∂_∞Γ→∂^p,q-1, which is an embedding with image , is homotopic to an embedding of ∂_∞Γ whose image is contained in the (p-1)-sphere 𝒮.Suppose ∂_∞Γ is homeomorphic to a (p-1)-dimensional sphere and p≥ q. Then *the unique maximal Γ-invariant convex open subset Ω_max of (^p,q) containing  (see Proposition <ref>) is contained in ^p,q-1;*any supporting hyperplane of  at a point ofis the projectivization of a linear hyperplane of ^p,q of signature (p,q-1).In particular, if q = 2 and ∂_∞Γ is homeomorphic to a (p-1)-dimensional sphere with p≥ 2, then any hyperplane tangent tois spacelike.Lemma <ref>.(<ref>) has the following consequences. Let p,q≥ 1. *Let Γ be an infinite discrete subgroup of (p,q) which is ^p,q-1-convex cocompact. Then Γ is Gromov hyperbolic and has virtual cohomological dimension ≤ p. Moreover, the virtual cohomological dimension of Γ is p if and only if the Gromov boundary ∂_∞Γ is homeomorphic to a (p-1)-dimensional sphere.*Let Γ be a word hyperbolic group with connected Gromov boundary ∂_∞Γ and virtual cohomological dimension >q. If there is a P_1^p,q-Anosov representation ρ : Γ→(p,q), then p>q and the group ρ(Γ) is ^p,q-1-convex cocompact. 
(<ref>) By Theorem <ref>, the group Γ is Gromov hyperbolic. By Lemma <ref>.(<ref>), the Gromov boundary ∂_∞Γ embeds into 𝕊^p-1; in particular, it has Lebesgue covering dimension ≤ p-1. This Lebesgue covering dimension is equal to the virtual cohomological dimension of Γ minus one by <cit.>. If the Lebesgue covering dimension of ∂_∞Γ is p-1, then the small inductive dimension (or Menger–Urysohn dimension) of ∂_∞Γ is also p-1 by Urysohn's theorem (see <cit.>), and so an embedding of ∂_∞Γ in 𝕊^p-1 must have nonempty interior (see <cit.>). Therefore ∂_∞Γ is homeomorphic to 𝕊^p-1 by <cit.>.(<ref>) By Corollary <ref>, since ∂_∞Γ is connected, the group ρ(Γ) is ^p,q-1-convex cocompact or ^q,p-1-convex cocompact. We conclude using (<ref>).We work in an affine chart 𝔸 containing , and choose coordinates (v_1, …, v_p+q) on ^p,q so that the quadratic form ⟨·,·⟩_p,q takes the usual form v_1^2 + ⋯ + v_p^2 - v_p+1^2 - ⋯ - v_p+q^2, the affine chart 𝔸 is defined by v_p+q≠ 0, and our chosen standard (p-1)-sphere in ∂^p,q-1 is𝒮 = { [(v_1, …,v_p, 0, …, 0, v_p+q)] ∈(^p,q)  |  v_1^2 + ⋯ + v_p^2 = v_p+q^2}. For any 0≤ t≤ 1, consider the map f_t : 𝔸→𝔸 sending [(v_1, …, v_p+q)] to[(v_1, …, v_p, √(1-t) v_p+1, …, √(1-t) v_p+q-1, √(1 + t α_(v_p+1,…,v_p+q))v_p+q)],where α_(v_p+1,…,v_p+q) = (v_p+1^2 + ⋯ + v_p+q-1^2)/v_p+q^2. Then (f_t)_0≤ t≤ 1 restricts to a homotopy of maps 𝔸∩∂^p,q-1→𝔸∩∂^p,q-1 from the identity map to the map f_1|_𝔸∩∂^p,q-1, which has image 𝒮⊂(^p,1). Thus (f_t∘ξ)_0≤ t≤ 1 defines a homotopy between ξ : ∂_∞Γ→∂^p,q-1 and the continuous map f_1∘ξ : ∂_∞Γ→∂^p,q-1 whose image lies in 𝒮. By the Cauchy–Schwarz inequality for Euclidean inner products, two points of 𝔸∩∂^p,q-1 which have the same image under f_1 are connected by a line segment in 𝔸 whose interior lies outside of ^p,q-1. Sinceis contained in ^p,q-1, we deduce that the restriction of f_1 tois injective. Since ∂_∞Γ is compact, we obtain that f_1∘ξ : ∂_∞Γ→𝒮 is an embedding. This proves (<ref>).Henceforth we assume that ∂_∞Γ is homeomorphic to a (p-1)-dimensional sphere and that p≥ q. Let Λ_Γ = ξ(∂_∞Γ) ⊂∂^p,q-1 be the proximal limit set of Γ in (^p+q). The map f_1 : Λ_Γ→𝒮 is then an embedding of a compact (p-1)-manifold into a connected (p-1)-manifold, and hence by the Invariance of Domain Theorem of Brouwer, it is a homeomorphism.Let us prove (<ref>). By definition of Ω_max (see Proposition <ref>), it is sufficient to prove that any point of ∂^p,q-1 is contained in z^⊥ for some z∈Λ_Γ. Therefore it is sufficient to prove that Λ_Γ intersects (W) for every maximal totally isotropic subspace W of ^p,q. Let W be such a subspace. We work in the coordinates from above. Up to changing the first p coordinates by applying an element of (p), we may assume that in the splitting^p,q = ^p-q,0⊕^q, 0⊕^0,qdefined by these coordinates, the space W is {(0,v',-v')|v' ∈^q}. Consider the map φ : 𝕊^p-1→𝕊^q-1 sending a unit vector (v,v') ∈^p,0 = ^p-q,0⊕^q,0 toφ(v,v') = π_q ∘ (f_1|_Λ_Γ)^-1(v,v',a),where a = (0,…,0,1) ∈^0,q and π_q: ^p,q→^0,q is the projection onto the last q coordinates. Then φ is homotopic to a constant map (namely the map sending all points to a) via the homotopy φ_t = π_q ∘ f_t ∘ (f_1|_Λ_Γ)^-1. Now consider the restriction ψ: 𝕊^q-1→𝕊^q-1 of φ to the unit sphere in ^q,0, which is also homotopic to a constant map via the restriction of the homotopy φ_t. 
If we had Λ_Γ∩(W) = ∅, then we would have ψ(v') = φ(0,v') ≠ -v' for all v' ∈𝕊^q-1⊂^q, andψ_t(v') =(1-t) ψ(v') + tv'/(1-t) ψ(v') + tv'would define a homotopy from ψ to the identity map on 𝕊^q-1, showing that a constant map is homotopic to the identity map: contradiction since 𝕊^q-1 is not contractible.This completes the proof of (<ref>): namely, Ω_max⊂^p,q-1. We note that ∂Ω_max∩∂^p,q-1 = Λ_Γ, since we saw that f_1 maps ∂Ω_max∩∂^p,q-1 injectively to 𝒮.Finally, we prove (<ref>). The dual convex ()^* to () naturally identifies, via ⟨·,·⟩_p,q, with a properly convex subset of (^p+q) which must be contained in Ω_max⊂^p,q-1. By Lemma <ref>, we have ∩ z^⊥ = ∅ for any z ∈ = Λ_Γ, hence ⊂Ω_max.Hence, a projective hyperplane y^⊥ supportingat some point ofis dual to a point y ∈Ω_max∖Λ_Γ, which is contained in ^p,q-1 by the previous paragraph. Hence ⟨ y, y ⟩_p,q < 0 and so y^⊥ is the projectivization of a linear hyperplane of signature (p,q-1).§.§ ^p,1-convex cocompactness and global hyperbolicity Recall that a Lorentzian manifold M is said to be globally hyperbolic if it is causal (contains no timelike loop) and for any two points x,x'∈ M, the intersection J^+(x)∩ J^-(x') is compact (possibly empty). Here we denote by J^+(x) (J^-(x)) the set of points of M which are seen from x by a future-pointing (past-pointing) timelike or lightlike geodesic. Equivalently <cit.>, the Lorentzian manifold M admits a Cauchy hypersurface, an achronal subset which intersects every inextendible timelike curve in exactly one point.We make the following observation; it extends <cit.>, which focused on the case that Γ⊂(p,2) is isomorphic to a uniform lattice of (p,1). The case considered here is a bit more general, see <cit.>. We argue in an elementary way, without using the notion of CT-regularity. For p≥ 2, let Γ be a torsion-free infinite discrete subgroup of (p,2) which is ^p,1-convex cocompact and whose Gromov boundary ∂_∞Γ is homeomorphic to a (p-1)-dimensional sphere. For any nonempty properly convex open subset Ω of ^p,1 on which Γ acts convex cocompactly, the quotient M = Γ\Ω is a globally hyperbolic Lorentzian manifold.Consider two points x,x'∈ M = Γ\Ω. There exists R>0 such that any lifts y,y'∈Ω of x, x' belong to the uniform R-neighborhoodof _Ω(Γ) in Ω for the Hilbert metric d_Ω. Let J^+(y) (J^-(y)) be the set of points of Ω which are seen from y by a future-pointing (past-pointing) timelike or lightlike geodesic. By Lemma <ref>.(<ref>), all supporting hyperplanes ofat points ofare spacelike. We may decomposeinto two disjoint open subsets, namely the subset ^+ of points for which the outward pointing normal to a supporting plane is future pointing, and the subset ^- of points for which it is past pointing. Indeed, ^+ and ^- are the two path connected components of the complementof the embedded (p-1)-sphere Λ_Γ in the p-sphere (). The set Ω∖ similarly has two components, a component to the future of ^+ and a component to the past of ^-.Any point of J^+(y) ∩ (Ω∖) lies in the future component of Ω∖ and similarly, any point of J^-(y') ∩ (Ω∖) lies in the past component of Ω∖.By proper discontinuity of the Γ-action on , in order to check that J^+(x)∩ J^-(x') is compact in M, it is therefore enough to check that J^+(y)∩ and J^-(y')∩ are compact in Ω. 
This follows from the fact that the ideal boundary of  is _Ω(Γ) (Corollary <ref>.(<ref>)) and that any point of Ω sees any point of _Ω(Γ) in a spacelike direction (Lemma <ref>).§.§ Examples of ^p,q-1-convex cocompact groups§.§.§ Quasi-Fuchsian ^p,q-1-convex cocompact groupsLet H be a real semisimple Lie group of real rank 1 and τ : H→(p,q) a representation whose image contains an element which is proximal in ∂^p,q-1 (see Remark <ref>). By <cit.>, for any word hyperbolic group Γ and any (classical) convex cocompact representation σ_0 : Γ→ H, *the composition ρ_0 := τ∘σ_0 : Γ→(p,q) is P_1^p,q-Anosov and the proximal limit set Λ_ρ_0(Γ)⊂∂^p,q-1 is negative or positive;*the connected component 𝒯_ρ_0 of ρ_0 in the space of P_1^p,q-Anosov representations from Γ to (p,q) is a neighborhood of ρ_0 in (Γ,(p,q)) consisting entirely of representations ρ with negative proximal limit set Λ_ρ(Γ)⊂∂^p,q-1 or entirely of representations ρ with positive proximal limit set Λ_ρ(Γ)⊂∂^p,q-1.Here is an immediate consequence of this result and of Theorem <ref>. In this setting, either ρ(Γ) is ^p,q-1-convex cocompact for all ρ∈𝒯_ρ_0, or ρ(Γ) is ^q,p-1-convex cocompact (after identifying (p,q) with (q,p)) for all ρ∈𝒯_ρ_0. This improves <cit.>, which assumed ρ to be irreducible.Here are two examples. We refer to <cit.> for more details.Let Γ be the fundamental group of a convex cocompact (closed) hyperbolic manifold M of dimension m≥ 2, with holonomy σ_0 : Γ→(m,1)=Isom(^m). The representation σ_0 lifts to a representation σ_0 : Γ→ H:=(m,1). For p,q∈^* with p≥ m, q ≥ 1, let τ : (m,1)→(p,q) be induced by the natural embedding ^m,1↪^p,q. Then ρ_0 := τ∘σ_0 : Γ→(p,q) is P_1^p,q-Anosov, and one checks that the set Λ_ρ_0(Γ)⊂∂^p,q-1 is negative. Let 𝒯_ρ_0 be the connected component of ρ_0 in the space of P_1^p,q-Anosov representations from Γ to (p,q). By Corollary <ref>, for any ρ∈𝒯_ρ_0, the group ρ(Γ) is ^p,q-1-convex cocompact. By <cit.>, when p=m and q=2 and when the hyperbolic m-manifold M is closed, the space 𝒯_ρ_0 of Example <ref> is a full connected component of (Γ,(p,2)), consisting of so-called AdS quasi-Fuchsian representations; that ρ(Γ) is ^p,1-convex cocompact in that case follows from <cit.>.For n≥ 2, letτ_n : (^2) ⟶(^n)be the irreducible n-dimensional linear representation of (^2) obtained from the action of (^2) on the (n-1)^st symmetric power Sym^n-1(^2) ≃^n. The image of τ_n preserves the nondegenerate bilinear form B_n := -ω^⊗ (n-1) induced from the area form ω of ^2. This form is symmetric if n is odd, and antisymmetric (symplectic) if n is even.Suppose n = 2m+1 is odd. The symmetric bilinear form B_n has signature(k_n, ℓ_n) := {[ (m+1,m)if m is odd,; (m,m+1) if m is even. ].If we identify the orthogonal group (B_n) (containing the image of τ_n) with (k_n,ℓ_n), then there is a unique τ_n-equivariant embedding ι_n : ∂_∞^2↪∂_^k_n,ℓ_n-1, and an easy computation shows that its image Λ_n := ι(∂_∞^2) is negative.For p ≥ k_n and q ≥ℓ_n, the representation τ_n : _2()→(B_n)≃(k_n,ℓ_n) and the natural embedding ^k_n,ℓ_n↪^p,q induce a representation τ : H = _2() →(p,q) whose image contains an element which is proximal in ∂^p,q-1, and a τ-equivariant embedding ι : ∂_∞^2↪∂_^k_n,ℓ_n-1↪∂_^p,q-1. The set Λ:=ι(∂_∞^2)⊂∂_^p,q-1 is negative by construction.Let Γ be the fundamental group of a convex cocompact orientable hyperbolic surface, with holonomy σ_0 : Γ→_2(). The representation σ_0 lifts to a representation σ_0 : Γ→ H:=_2(). Let ρ_0 := τ∘σ_0 : Γ→ G:=(p,q). The proximal limit set Λ_ρ_0(Γ) = ι(Λ_σ_0(Γ)) ⊂Λ is negative. 
Let Γ be the fundamental group of a convex cocompact orientable hyperbolic surface, with holonomy σ_0 : Γ→_2(). The representation σ_0 lifts to a representation σ̂_0 : Γ→ H:=_2(). Let ρ_0 := τ∘σ̂_0 : Γ→ G:=(p,q). The proximal limit set Λ_ρ_0(Γ) = ι(Λ_σ_0(Γ)) ⊂Λ is negative. Thus Corollary <ref> implies that ρ_0(Γ) is ^p,q-1-convex cocompact and so is ρ(Γ) where ρ is any representation in the connected component 𝒯_ρ_0 of ρ_0 in the space of P_1^p,q-Anosov representations from Γ to (p,q). It follows from <cit.> (see <cit.>) that when (p,q) = (k_n, ℓ_n) or (m+1,m+1) and when Γ is a closed surface group, the space 𝒯_ρ_0 of Example <ref> is a full connected component of (Γ,(p,q)), consisting of so-called Hitchin representations. Example <ref> thus states the following. Let Γ be the fundamental group of a closed orientable hyperbolic surface and let m≥ 1. For any Hitchin representation ρ : Γ→(m+1,m), the group ρ(Γ) is ^m+1,m-1-convex cocompact if m is odd, and ^m,m-convex cocompact if m is even. For any Hitchin representation ρ : Γ→(m+1,m+1), the group ρ(Γ) is ^m+1,m-convex cocompact. By <cit.>, when p = m+1 = 2 and Γ is a closed surface group, the space 𝒯_ρ_0 of Example <ref> is a full connected component of (Γ,(2,q)), consisting of so-called maximal representations. Example <ref> thus states the following. Let Γ be the fundamental group of a closed orientable hyperbolic surface and let q≥ 1. Any connected component of (Γ,(2,q)) consisting of maximal representations and containing a Fuchsian representation ρ_0 : Γ→(2,1)_0 ↪(2,q) consists entirely of ^2,q-1-convex cocompact representations.

§.§.§ Groups with connected boundary

We now briefly discuss a class of examples that does not necessarily come from the deformation of “Fuchsian” representations as above. Suppose the word hyperbolic group Γ has connected boundary ∂_∞Γ (for instance Γ is the fundamental group of a closed negatively-curved Riemannian manifold). By <cit.>, any connected component in the space of P_1^p,q-Anosov representations from Γ to (p,q) consists entirely of representations ρ with negative proximal limit set Λ_ρ(Γ)⊂∂^p,q-1 or entirely of representations ρ with positive proximal limit set Λ_ρ(Γ)⊂∂^p,q-1. Theorem <ref> then implies that for any connected component 𝒯 in the space of P_1-Anosov representations of Γ with values in (p,q)⊂(^p+q), either ρ(Γ) is ^p,q-1-convex cocompact for all ρ∈𝒯, or ρ(Γ) is ^q,p-1-convex cocompact for all ρ∈𝒯, as in Corollary <ref>. This applies for instance to the case that Γ is the fundamental group of a closed hyperbolic surface and 𝒯 is a connected component of (Γ,(2,q)) consisting of maximal representations <cit.>. By <cit.>, the proximal limit set is negative for all representations in any such 𝒯, hence these are ^2,q-1-convex cocompact. By <cit.>, for q=3 there exist such connected components 𝒯 that consist entirely of Zariski-dense representations, hence that do not come from the deformation of “Fuchsian” representations as in Corollary <ref>.

§ EXAMPLES OF GROUPS WHICH ARE CONVEX COCOMPACT IN (V)

In Section <ref> we constructed examples of discrete subgroups of (p,q) ⊂(^p+q) which are ^p,q-1-convex cocompact; these groups are strongly convex cocompact in (^p+q) by Theorem <ref>. We now discuss several constructions of discrete subgroups of (V) which are convex cocompact in (V) but which do not necessarily preserve a nonzero quadratic form on V. Based on Theorem <ref>.<ref>–<ref>, another fruitful source of groups that are convex cocompact in (V) is the continuous deformation of “Fuchsian” groups, coming from an algebraic embedding.
These “Fuchsian” groups can be for instance: *convex cocompact subgroups (in the classical sense) of appropriate rank-one Lie subgroups H of (V), as in Section <ref>;*discrete subgroups of (V) dividing a properly convex open subset of some projective subspace (V_0) of (V) and acting trivially on a complementary subspace.The groups of (<ref>) are always word hyperbolic; in Section <ref>, we explain how to choose H so that they are strongly convex cocompact in (V), and we prove Proposition <ref>. The groups of (<ref>) are not necessarily word hyperbolic; they are convex cocompact in (V) by Theorem <ref>.<ref>, but not necessarily strongly convex cocompact; we discuss them in Section <ref>.We also mention other constructions of convex cocompact groups in Sections <ref> and <ref>, which do not involve any deformation. §.§ “Quasi-Fuchsian” strongly convex cocompact groupsWe start by discussing a similar construction to Section <ref>, but for discrete subgroups of (V) which do not necessarily preserve a nonzero quadratic form on V.Let Γ be a word hyperbolic group, H a real semisimple Lie group of real rank one, and τ : H→(V) a representation whose image contains a proximal element. Any (classical) convex cocompact representation σ_0 :⁠Γ→⁠ H is Anosov, hence the composition τ∘σ_0 : Γ→(V) is P_1-Anosov (see <cit.> and <cit.>). In particular, by Theorem <ref>, the group τ∘σ_0(Γ) is strongly convex cocompact in (V) as soon as (V) ∖⋃_η∈∂_∞Γξ^*(η) admits a τ∘σ_0(Γ)-invariant connected component, where ξ^*: ∂_∞Γ→(V^*) denotes the Anosov boundary map in dual projective space of τ∘σ_0.Suppose this is the case. By Theorem <ref>.<ref> (see Remark <ref>), the group ρ(Γ) remains strongly convex cocompact in (V) for any ρ∈(Γ,(V)) close enough to τ∘σ_0. Sometimes ρ(Γ) also remains strongly convex cocompact for some ρ which are continuous deformations of τ∘σ_0 quite far away from τ∘σ_0; we now discuss this in view of proving Proposition <ref>.§.§.§ Connected open sets of strongly convex cocompact representations We prove the following.Let Γ be a word hyperbolic group and 𝒜 a connected open subset of (Γ,(V)) consisting entirely of P_1-Anosov representations. If ρ(Γ) is convex cocompact in (V) for some ρ∈𝒜, then ρ(Γ) is convex cocompact in (V) for all ρ∈𝒜.First, observe that the finite normal subgroups Ker(ρ) ⊂Γ are constant over ρ∈𝒜, since representations of a finite group are rigid up to conjugation. Hence by passing to the quotient group Γ/Ker(ρ), we may assume all representations ρ∈𝒜 are faithful.By Theorem <ref>.<ref>, the property of being convex cocompact in (V) is open in (Γ,(V)). Thus the subset of ρ∈𝒜 for which ρ(Γ) is convex cocompact is open.Let us show that it is also closed. Consider a sequence of representations ρ_m ∈𝒜 converging to ρ∈𝒜. Assume that ρ_m(Γ) is convex cocompact for all m, and let us show that ρ is also convex cocompact.Let ξ_m : ∂_∞Γ→(V) and ξ_m^*: ∂_∞Γ→(V^*) be the boundary maps for the Anosov representation ρ_m, and ξ : ∂_∞Γ→(V) and ξ^*: ∂_∞Γ→(V^*) those for ρ. By <cit.>, the maps ξ_m (ξ_m^*) converge uniformly to ξ (ξ^*). By Theorem <ref>, for any m, the set (V) ∖⋃_η∈∂_∞Γξ_m^*(η) admits a ρ_m(Γ)-invariant connected component Ω_m. After passing to a subsequence, the compact subsets Ω_m converge to a nonempty compact subset 𝒦 of (V) which is invariant under ρ(Γ). Note that for each m, any open segment (a,b) in Ω_m is either contained in or disjoint from each supporting hyperplane ξ_m^*(η) for η∈∂_∞Γ. 
This property passes to the limit: (⋆) For each open segment (a,b) in 𝒦 and each hyperplane ξ^*(η) for η∈∂_∞Γ, (a,b) is either contained in or disjoint from ξ^*(η). Suppose first that there is a point x ∈𝒦 which is not contained in any hyperplane ξ^*(η) for η∈∂_∞Γ. By compactness of ∂_∞Γ, there is an open subset U ∋ x which does not intersect ξ^*(η) for all η∈∂_∞Γ. It follows that a slightly smaller open set U' ∋ x does not intersect any ξ_m^*(η) for η∈∂_∞Γ and m sufficiently large, and hence that U' is contained in Ω_m for all m sufficiently large, and hence that U' is contained in 𝒦. Thus the interior of 𝒦 is nonempty. This interior is ρ(Γ)-invariant and, by property (⋆), it is contained in (V) ∖⋃_η∈∂_∞Γξ^*(η). This shows that (V) ∖⋃_η∈∂_∞Γξ^*(η) admits a ρ(Γ)-invariant connected component. It follows from the implication <ref> ⇒<ref> of Theorem <ref> that ρ(Γ) is convex cocompact, as desired. Suppose then that any point in 𝒦 is contained in ξ^*(η) for some η∈∂_∞Γ. By considering such a point in the relative interior of 𝒦 and using property (⋆) above, we see that all of 𝒦 is contained in ξ^*(η). Since ξ_m(∂_∞Γ) ⊂Ω_m for all m and the ξ_m converge uniformly to ξ, it follows that ξ(∂_∞Γ) ⊂𝒦, hence ξ(∂_∞Γ) ⊂ξ^*(η). This contradicts the transversality of the boundary maps ξ and ξ^* (property <ref> in Definition <ref>).

§.§.§ Hitchin representations

We now prove Proposition <ref> by applying Proposition <ref> in the following specific context. Let Γ be the fundamental group of a closed orientable hyperbolic surface S. For n≥ 2, let τ_n : (^2) →(^n) be the irreducible n-dimensional linear representation from (<ref>). We still denote by τ_n the representation (^2) →(^n) obtained by modding out by {± I}. A representation ρ∈(Γ,(^n)) is said to be Fuchsian if it is of the form ρ = τ_n∘ρ_0 where ρ_0: Γ→(2,) is discrete and faithful. By definition, a Hitchin representation is a continuous deformation of a Fuchsian representation; the Hitchin component Hit_n(S) is the space of Hitchin representations ρ∈(Γ, (^n)) modulo conjugation by (^n). Hitchin <cit.> used Higgs bundle techniques to parametrize Hit_n(S), showing in particular that it is homeomorphic to a ball of dimension (n^2 -1)(2g-2). Labourie <cit.> proved that any Hitchin representation is P_1-Anosov (in fact it has the stronger property of being Anosov with respect to a minimal parabolic subgroup of (^n)). In order to prove Proposition <ref>, we first consider the case of Fuchsian representations. Let ρ: Γ→(^2) →(^n) be Fuchsian. *If n is odd, then ρ(Γ) is strongly convex cocompact in (^n). *If n is even, then the boundary map of the P_1-Anosov representation ρ defines a nontrivial loop in (^n) and ρ(Γ) does not preserve any nonempty properly convex open subset of (^n). For even n, the fact that a Fuchsian representation ρ cannot preserve a properly convex open subset of (^n) also follows from <cit.>, since in this case ρ takes values in the projective symplectic group PSp(n/2,). (<ref>) Suppose n = 2k+1 is odd. Then ρ takes values in the projective orthogonal group (k, k+1)≃(k,k+1), and so ρ(Γ) is strongly convex cocompact in (^n) by <cit.>. (<ref>) Suppose n = 2k is even. Then ρ takes values in the symplectic group (n,).
It is well known that, in natural coordinates identifying ∂_∞Γ≃^1 with (^2), the boundary map ξ: ∂_∞Γ→(^n) of the P_1-Anosov representation ρ is the Veronese curve

(^2) ∋ [x,y] ⟼ [( x^n-1, x^n-2y, …, xy^n-2, y^n-1 )] ∈(^n).

This map is homotopic to the map [x,y] ↦ [(x^n-1, 0, …, 0, y^n-1)] which, since n-1 is odd, is a homeomorphism from (^2) to the projectivization of the plane spanned by the first and last coordinate vectors, hence is nontrivial in π_1((^n)). Hence the image of the boundary map ξ crosses every hyperplane in (^n). It follows that ρ(Γ) does not preserve any properly convex open subset of (^n), because the image of ξ must lie in the boundary of any ρ(Γ)-invariant such set. (<ref>) Suppose n is odd. By <cit.>, all Hitchin representations ρ : Γ→(^n) are P_1-Anosov. Moreover, ρ(Γ) is convex cocompact in (V) as soon as ρ is Fuchsian, by Lemma <ref>.(<ref>). Applying Proposition <ref>, we obtain that for any Hitchin representation ρ : Γ→(^n) the group ρ(Γ) is convex cocompact in (V), hence strongly convex cocompact since Γ is word hyperbolic (Theorem <ref>). (<ref>) Suppose n is even. By Lemma <ref>, for any Fuchsian ρ : Γ→(^n), the image of the boundary map of the P_1-Anosov representation ρ defines a nontrivial loop in (^n). Since the assignment of a boundary map to an Anosov representation is continuous <cit.>, we obtain that for any Hitchin ρ : Γ→(^n), the image of the boundary map of the P_1-Anosov representation ρ defines a nontrivial loop in (^n). In particular, ρ(Γ) does not preserve any nonempty properly convex open subset of (^n), by the same reasoning as in the proof of Lemma <ref>. More generally, ρ does not preserve any nonempty properly convex subset of (^n), since any such set would have nonempty interior by irreducibility of the Hitchin representation ρ (see <cit.>).

§.§ Convex cocompact groups coming from divisible convex sets

Recall that any discrete subgroup Γ of (V) dividing a properly convex open subset Ω⊂(V) is convex cocompact in (V) (Example <ref>). The group Γ is strongly convex cocompact if and only if it is word hyperbolic (Theorem <ref>), and this is equivalent to the fact that Ω is strictly convex (by <cit.> or Theorem <ref> again). In this section, we produce convex cocompact groups which are not strongly convex cocompact and do not divide convex domains. We do this via two constructions: one by deformation (Section <ref>) and one by restriction to subgroups (Section <ref>).
Both constructions start with a group dividing a properly convex but not strictly convex open set Ω⊂(V); we now recall how such groups can be obtained.§.§.§ Groups Γ dividing a properly convex but not strictly convex open setExamples have been obtained in several ways:*letting a cocompact lattice of (^m) act on the Riemannian symmetric space of (^m), realized as a properly convex open subset Ω_m of (^m(m+1)/2) as follows: identify ^m(m+1)/2 with the space of symmetric (m× m) real matrices, and let Ω_m be the projectivization of the positive definite symmetric matrices (the corresponding examples are called symmetric; there are also complex, quaternionic, and octonionic variants: see <cit.>);*letting certain Coxeter groups act by reflections on (^n), for 4≤ n ≤ 7, with maximal abelian subgroups isomorphic to ^n-2: see <cit.>;*deforming the holonomies of certain hyperbolic 3-manifolds in (^4) so that the cusp groups become diagonalizable (the resulting groups are convex cocompact: see Lemma <ref> below), and doubling across peripheral tori <cit.> (this idea is already implicit in the examples of <cit.>);*letting other Coxeter groups, with maximal abelian subgroups isomorphic to ^n-3, divide a properly convex open subset of (^n), for 5≤ n≤ 7,using a Dehn filling construction <cit.>. In <ref> for n=4, as well as in <ref>, there are subgroups (virtually) isomorphic to ^2 that stabilize PETs (all pairwise disjoint) in Ω⊂(^4): Benoist <cit.> proved that this is in fact always the case for nonhyperbolic divisible convex sets in (^4); the group then splits as a graph of groups, with the PET stabilizers as edge groups, and the PETs project to the JSJ decomposition of Γ\Ω in the sense of Thurston's geometrization (all pieces are hyperbolic).§.§.§ Convex cocompact deformations of groups dividing a convex setLet Γ be a discrete subgroup of (V) dividing a nonempty properly convex open subset Ω of (V). Let Γ̂ be the lift of Γ to ^±(V) that preserves a properly convex cone of V lifting Ω (see Remark <ref>). By Theorem <ref>.<ref>–<ref>, any small deformation of the inclusion of Γ̂ into a larger projective linear group (V ⊕ V') is convex cocompact in (V ⊕ V'). The issue is to find nontrivial such deformations: for instance, there are none in case <ref> above for m≥ 3, by Margulis superrigidity. The situation is more favorable in cases <ref> and <ref> above, for instance when n=⁠ 4: the group splits along the PET stabilizers, hence the inclusion of Γ̂ into (^4 ⊕^n') may be deformed by a Johnson–Millson bending <cit.>. (Note that such deformations exist already when n'=0, so-called bulging deformations.) Since the PET stabilizers in Γ̂⊂^±(^4) have 1 as an eigenvalue, the bending matrices may mix the summands of ^4 ⊕^n', so that the convex core is no longer contained in a copy of (^4), and the group may even act irreducibly on (^4+n'), see the forthcoming paper <cit.>. 
Small such deformations give examples of discrete subgroups which are convex cocompact in (^4 ⊕^n') without being word hyperbolic, and which do not divide a properly convex set.

§.§.§ Convex cocompact actions coming from divisible convex sets by taking subgroups

Nonhyperbolic groups dividing a properly convex open set Ω⊂(^4) admit nonhyperbolic subgroups that are still convex cocompact in (^4), but that do not divide any properly convex open subset of (^4). Indeed, let Γ be a torsion-free, discrete subgroup of (^4) dividing a nonempty properly convex open subset Ω of (^4) containing PETs, as above. By <cit.>, the PETs descend to a finite collection 𝒯 of pairwise disjoint planar tori and Klein bottles in the closed manifold N := Γ\Ω. The universal cover of one connected component M of N∖𝒯 identifies with a convex subset _M of Ω whose interior is a connected component of the complement in Ω of the union of all PETs, and whose nonideal boundary _M is a disjoint union of PETs. The fundamental group of M identifies with the subgroup Γ_M of Γ that preserves _M, and it acts properly discontinuously and cocompactly on _M. The action of Γ_M on Ω is convex cocompact. By Corollary <ref>.(<ref>), it is enough to check that _Ω(Γ_M) ⊂_M. Since each component of _M is planar, _M is the convex hull of its ideal boundary _M, and so it is enough to show that _Ω(Γ_M) ⊂_M. Suppose by contradiction that this is not the case: namely, there exists x ∈Ω∖_M and a sequence (γ_m) in Γ_M such that (γ_m· x) converges to some x_∞∈_Ω(Γ_M) ∖_M. Let y be the point of _M which is closest to x for the Hilbert metric d_Ω; it is contained in a PET of _M. Let a,b∈∂Ω be such that a,x,y,b are aligned in that order. Up to taking a subsequence, we may assume that (γ_m· a), (γ_m· y), (γ_m· b) converge respectively to some a_∞, y_∞, b_∞∈∂Ω, with y_∞∈_M and a_∞, x_∞, y_∞, b_∞ aligned in that order. Since [γ_m· a, γ_m· x, γ_m· y, γ_m· b] = [a,x,y,b] ∈ (1,+∞) for all m, and x_∞≠ y_∞, and all segments [γ_m · a, γ_m · b] and [a_∞, b_∞] lie in an affine chart containing Ω, the points a_∞, x_∞, y_∞, b_∞ are pairwise distinct and contained in a segment of ∂Ω. However, any segment on ∂Ω lies on the boundary of some PET <cit.>. Thus we have found a PET whose closure intersects the closure of _M but is not contained in it; its closure must cross the closure of a second PET on _M, contradicting the fact <cit.> that PETs have disjoint closures.

§.§ Convex cocompact groups as free products

Being convex cocompact in (V) is a much more flexible property than dividing a properly convex open subset of (V), and there is a rich world of examples, which we shall explore in forthcoming work <cit.>. In particular, we shall prove the following. Let Γ_1 and Γ_2 be infinite discrete subgroups of (V) which are convex cocompact in (V) but do not divide any nonempty properly convex open subset of (V). Then there exists g∈(V) such that the group generated by Γ_1 and gΓ_2g^-1 is isomorphic to the free product Γ_1∗Γ_2 and is convex cocompact in (V). This yields many examples of non-word-hyperbolic convex cocompact groups. For instance, one could take Γ_1=Γ_2 ⊂(V') equal to one of the superrigid symmetric examples <ref> of Section <ref>, embedded into (V) for some V=V'⊕^n.

§ SOME OPEN QUESTIONS

Here we list some open questions about discrete subgroups Γ of (V) that are convex cocompact in (V).
The case that Γ is word hyperbolic boils down to Anosov representations by Theorem <ref> and is reasonably understood; on the other hand, the case that Γ is not word hyperbolic corresponds to a new class of discrete groups whose study is still in its infancy. Most of the following questions are interesting even in the case that Γ divides a (nonstrictly convex) properly convex open subset of (V). We fix a discrete subgroup Γ of (V) acting convex cocompactly on a nonempty properly convex open subset Ω of (V). Which finitely generated subgroups Γ' of Γ are convex cocompact in (V)? Such subgroups include all finite-index subgroups of Γ (see Lemma <ref>). In general, they are quasi-isometrically embedded in Γ, by Corollary <ref>. Conversely, when Γ is word hyperbolic, any quasi-isometrically embedded (or equivalently quasi-convex) subgroup Γ' of Γ is convex cocompact in (V): indeed, the boundary maps for the P_1-Anosov representation Γ↪(V) induce boundary maps for Γ'↪(V) that make it P_1-Anosov, hence Γ' is convex cocompact in (V) by Theorem <ref>. When Γ is not word hyperbolic, Question <ref> becomes more subtle. For instance, let Γ≃^2 be the subgroup of (^3) consisting of all diagonal matrices whose entries are powers of some fixed t>0; it is convex cocompact in (^3) (see Example <ref>). Any cyclic subgroup Γ' = ⟨γ' ⟩ of Γ is quasi-isometrically embedded in Γ. However, Γ' is convex cocompact in (V) if and only if γ' has distinct eigenvalues (see Examples <ref>.(<ref>)–(<ref>)). Assume Ω is indecomposable. Under what conditions is Γ relatively hyperbolic, relative to a family of virtually abelian subgroups? This is always the case if dim(V) ≤ 3. If dim(V) = 4 and Γ divides Ω, then this is also seen to be true from work of Benoist <cit.>: in this case there are finitely many conjugacy classes of PET stabilizers in Γ, which are virtually ^2, and Γ is relatively hyperbolic with respect to the PET stabilizers (using <cit.>). However, when dim(V) = m(m+1)/2 for some m≥ 3, we can take for Ω⊂(V) the projective realization of the Riemannian symmetric space of _m() (see <cit.>) and for Γ a uniform lattice of _m(): this Γ is not relatively hyperbolic with respect to any subgroups, see <cit.>. If Γ is not word hyperbolic, must there be a properly embedded maximal k-simplex invariant under some subgroup isomorphic to ^k for some k≥ 2? This question is a specialization, to the class of convex cocompact subgroups of (V), of the following more general question (see <cit.>): if 𝒢 is a finitely generated group admitting a finite K(𝒢,1), must it be word hyperbolic as soon as it does not contain any Baumslag–Solitar group BS(m,n)? Note that if |n| = |m|, then BS(m,n) contains ^2 (and indeed BS(1,1) = ^2), while if |n| ≠ |m|, then any linear embedding of BS(m,n) contains unipotent elements. Hence, by Theorem <ref>.<ref>, our convex cocompact group Γ⊂(V) contains a Baumslag–Solitar group if and only if it contains ^2. In light of the equivalence <ref> ⇔ <ref> of Theorem <ref>, we ask the following: When Γ is not word hyperbolic, is there a dynamical description, similar to P_1-Anosov, that characterizes the action of Γ on (V), for example in terms of _Ω(Γ) and divergence of Cartan projections? While this question is vague, a good answer could lead to the definition of new classes of nonhyperbolic discrete subgroups in other higher-rank reductive Lie groups for which there is not necessarily a good notion of convexity. A variant of the following question was asked by Olivier Guichard.
In (Γ,(V)), does the interior of the set of naively convex cocompact representations consist of convex cocompact representations? Here we say that a representation is naively convex cocompact (convex cocompact) if it is faithful and if its image is naively convex cocompact in (V) (convex cocompact in (V)) in the sense of Definition <ref> (Definition <ref>).

§ LIMITS OF HILBERT BALLS

For n≥ 3, let Ω⊂(^n) be a properly convex open set. As in Definition <ref>, for any z ∈∂Ω, the open face F of ∂Ω at z is the union of {z} and of all open segments of ∂Ω containing z. It is the largest convex subset of ∂Ω containing z which is relatively open, in the sense that it is open in the projective subspace (W) that it spans. In particular, we can consider the Hilbert metric d_F on F seen as a properly convex open subset of (W). Let R>0. By Lemma <ref>.(<ref>), any Hausdorff limit of R-balls of (Ω,d_Ω) whose centers converge to z is contained in the R-ball of (F,d_F) centered at z. In this appendix, we investigate to what extent the limit may be smaller than an R-ball. The following example shows that the limit may be a point even when the face F is not a point. Thus <cit.> is not correct as stated. Consider ^3 as a properly convex open subset of (^4) as in Example <ref>. Let γ∈Isom(^3) ⊂(^4) be a unipotent element. Then γ fixes pointwise a certain projective line ℓ of (^4) tangent to ∂^3 at a point z. Let (a,b) be an open segment of ℓ containing z, and let Ω be the interior of the convex hull of ^3 ∪ (a,b). By construction, γ preserves Ω, and F := (a,b) is a face of ∂Ω. For any compact subset B of ^3 (or indeed of (^4) ∖ℓ), we have γ^n· B →{z}⊂ F as n→ +∞. In particular, we can take B to be a closed ball of (Ω, d_Ω); the sets γ^n· B are then all balls of the same radius in Ω which limit to a point in F. The following lemma states that in the case that the centers of the R-balls converge conically to z (Definition <ref>), the Hausdorff limit does contain a nontrivial ball of (F,d_F) centered at z, possibly of smaller radius. For R>0 and x∈Ω (z∈ F⊂∂Ω), we denote by 𝔹_Ω(x,R) (𝔹_F(z,R)) the closed ball of radius R centered at x (z) in (Ω,d_Ω) ((F,d_F)). Let Ω be a properly convex open subset of (V) and (x_m)_m∈ℕ a sequence of points of Ω converging to some z∈∂Ω. Suppose there exist a ray [y,z) ⊂Ω and a constant D>0 such that d_Ω(x_m,[y,z)) ≤ D for all m∈ℕ. Let F be the open face of z in ∂Ω. Then for any R≥ 0, any Hausdorff limit (accumulation point for the Hausdorff topology) of the balls 𝔹_Ω(x_m, R) is a subset of F containing the ball 𝔹_F(z, f_D(R)) where

f_D(R) := 1/2 log ( 1 + (e^2R-1)/e^2D ) ≥ R e^-2D.

Note that the function f_D: ℝ_+ →ℝ_+ is convex for each D≥ 0, with f_0 = Id_ℝ_+. We have f_D(R) ∼ R e^-2D as R → 0, and f_D(R) = R-D+o(1) as R→ +∞. When Ω has dimension 2, no loss occurs: Lemma <ref> holds with f_D replaced by Id_ℝ_+ (in the proof below, (a_∞, b_∞)=(a,b) automatically). Up to passing to a subsequence, we may assume that the balls 𝔹_Ω(x_m, R) admit a Hausdorff limit. This limit is a closed convex subset of (V) which is contained in F⊂∂Ω by Lemma <ref>.(<ref>). It is sufficient to prove that for any maximal open segment (a,b) ⊂ F containing z, the limit of the 𝔹_Ω(x_m, R) contains the point of (z,b) at d_F-distance f_D(R) from z. We now fix such a segment (a,b). For each m∈ℕ, choose y_m ∈ [y,z) such that d_Ω(x_m, y_m) ≤ D. Consider an arbitrary number h>R (later we will take h→∞), and let z' ∈ (z,b) be such that d_F(z, z')=h.
For each m∈ℕ, choose y'_m ∈ [y, z') such that the line through y_m and y'_m intersects [y,a) and [y,b). Since Ω contains the open triangle T:=(y,a,b), we have d_Ω(y_m, y'_m) ≤ d_T(y_m, y'_m) = h (Remark <ref>). By the triangle inequality, d_Ω (x_m, y'_m) ≤ d_Ω (x_m, y_m) + d_Ω (y_m, y'_m) ≤ D+h. Let a_m, b_m ∈∂Ω be such that a_m, x_m, y'_m, b_m are aligned in this order. In dimension ≥ 3, it is not necessarily the case that a_m → a or b_m → b. However, up to passing to a subsequence we may assume that a_m → a_∞ and b_m → b_∞ for some a_∞,b_∞∈∂Ω with (a_∞, b_∞) ⊂ (a,b). As in the proof of Lemma <ref>.(<ref>), we have

d_(a_∞, b_∞)(z, z') ≤ limsup_m→ +∞ d_Ω (x_m, y'_m) ≤ D + h.

For any m, the point w_m ∈ (x_m, b_m) with [a_m, x_m, w_m, b_m] = e^2R belongs to 𝔹_Ω(x_m, R). Therefore the limit of the 𝔹_Ω(x_m, R) contains the point w∈ (a,b) such that [a_∞, z, w, b_∞] = e^2R. The assumption h>R implies w∈ (z,z'): indeed,

0 < d_(a_∞, b_∞)(z,w) = R < h = d_(a,b)(z,z') ≤ d_(a_∞, b_∞)(z,z')

since (a_∞, b_∞) ⊂ (a,b). It is sufficient to prove that for any ε>0, if h has been chosen large enough, then d_F(z,w) ≥ f_D(R) - ε. We map (span{a,b}) to the standard projective line by identifying a,z,z',b with 0, 1, e^2h, ∞∈^1(), so that

0 = a ≤ a_∞ < z = 1 < w < e^2h = z' < b_∞ ≤ b = ∞

and we aim to show w ≥ e^2(f_D(R)-ε). The number Δ := [a_∞, z, z', b_∞] satisfies

e^2h ≤ Δ ≤ e^2(h+D)

by (<ref>) and (<ref>), and

Δ = [a_∞, 1, e^2h, b_∞]

by (<ref>). Let us express a_∞ and w in terms of (Δ, b_∞)∈ [e^2h, e^2(h+D)]× (e^2h,+∞] and of the fixed parameters h, R:

a_∞ = ((b_∞ - e^2h)Δ - e^2h(b_∞-1)) / ((b_∞ - e^2h)Δ - (b_∞-1))

by the equality (<ref>);

w = (b_∞ e^2R(1 - a_∞) + a_∞ (b_∞-1)) / (e^2R(1 - a_∞) + (b_∞-1))

using z=1 and (<ref>), which equals

1 + (e^2R-1)(e^2h-1)/(Δ-1) + (e^2R-1)(e^2h-1)^2 (Δ-e^2R)(Δ-1)^-1 / ( (b_∞-e^2h)(Δ-1) + (e^2R-1)(e^2h-1) )

by substituting (<ref>) for a_∞ and a routine computation. In (<ref>), the variable b_∞ appears only once, and every bracket is positive, using (<ref>)–(<ref>) and h>R. Hence,

min_Δ∈ [e^2h, e^2(h+D)] min_b_∞∈ (e^2h,+∞] w = 1 + (e^2R-1)(e^2h-1)/(e^2(h+D)-1) ⟶ 1 + (e^2R-1)/e^2D = e^2 f_D(R) as h → +∞.
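Before turning to the example, note that the minimization just performed is easy to verify numerically. The sketch below is our own check, with arbitrary sample values of D, R, h: it implements the two displayed formulas for a_∞ and w, minimizes w over a grid of (Δ, b_∞), and compares the result with the closed form 1 + (e^2R-1)(e^2h-1)/(e^2(h+D)-1) and with its limit e^2 f_D(R).

```python
import numpy as np

def f_D(D, R):                               # f_D(R) from the statement of the lemma
    return 0.5 * np.log(1.0 + (np.exp(2 * R) - 1.0) / np.exp(2 * D))

def w_value(Delta, b, h, R):                 # the two cross-ratio formulas above
    e2h, e2R = np.exp(2 * h), np.exp(2 * R)
    a = ((b - e2h) * Delta - e2h * (b - 1)) / ((b - e2h) * Delta - (b - 1))
    return (b * e2R * (1 - a) + a * (b - 1)) / (e2R * (1 - a) + (b - 1))

D, R, h = 0.7, 0.4, 6.0
Deltas = np.linspace(np.exp(2 * h), np.exp(2 * (h + D)), 2001)
bs = np.exp(2 * h) * np.array([1.001, 1.01, 1.1, 2.0, 10.0, 1e4, 1e8])
grid_min = min(w_value(d, b, h, R) for d in Deltas for b in bs)

closed_form = 1 + (np.exp(2 * R) - 1) * (np.exp(2 * h) - 1) / (np.exp(2 * (h + D)) - 1)
print(grid_min, closed_form)                 # agree up to the grid resolution
print(0.5 * np.log(closed_form), f_D(D, R))  # nearly equal already at h = 6
```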
The following example shows that the function f_D of Lemma <ref> is best possible. Fix R,D>0. Let a:=(-1,0,0), b:=(1,0,0), and for each 0<ε < 1 < p, consider the properly convex open set

Ω_ε,p := Int ( Conv{ a, b, (0,t,t^p)_t≥ 0, (0,t,|ε t|^p)_t≤ 0 } ) ⊂^3 ⊂^3().

The open segment F := (a,b) is the open face of Ω_ε,p at z:=(0,0,0) ∈∂Ω_ε,p. The map γ_p: (u,v,w) ↦ (u, v/2, w/2^p) preserves Ω_ε,p, and preserves the axis L := {(0,0,t)_t>0} with endpoint z. Let x_0:=(0,1-e^-2D,1) ∈Ω_ε,p and x_m := γ_p^m · x_0 ∈Ω_ε,p. As (p,ε) → (+∞,0), the domain Ω_ε,p converges for the Hausdorff topology to Ω_∞:=Π×ℝ_>0 where

Π = {(u,v) ∈^2  |  -1 < u<1,  v<1-|u| }

(projectively, Ω_∞ is equivalent to a square-based pyramid). Therefore d_Ω_ε,p(x_0, L) converges to d_Ω_∞(x_0, L) = D as (p,ε) → (+∞,0). Moreover, the balls 𝔹_Ω_ε,p (x_0, R) converge, for the Hausdorff topology in ^3(), to 𝔹_Ω_∞ (x_0, R). But the latter projects on the first axis to [-s,s], where s := (1+e^2D-R/sinh(R))^-1: this can be seen by computing balls in (Π, d_Π) (Figure <ref>), and using that the projection π_Π: Ω_∞→Π is 1-Lipschitz. Therefore, for large enough p and small enough ε, the Hausdorff limit of (γ_p^m·𝔹_Ω_ε,p(x_0, R))_m≥ 0 becomes arbitrarily Hausdorff-close to [-s,s]×{0}^2. Since 1/2 log [-1, 0, s, 1] = f_D(R), it follows that: for any R'>f_D(R), there exist p>1 (large), ε>0 (small) and a sequence x_m =γ_p^m· x_0 → z in Ω_ε,p such that the x_m lie within D from the axis L but the Hausdorff limit of the 𝔹_Ω_ε,p(x_m,R) does not contain 𝔹_F(z, R'). Note that we could also consider ⟨γ_p⟩-orbits in the domains Ω_p=lim_ε→ 0Ω_ε,p (for increasingly large p) and obtain similar estimates: but the Ω_ε,p have the clean feature that their closures intersect the supporting plane ^2×{0} precisely along F = [a,b].

[BDL] S. Ballas, J. Danciger, G.-S. Lee, Convex projective structures on non-hyperbolic three-manifolds, Geom. Topol. 22 (2018), p. 1593–1646.
[Ba] T. Barbot, Deformations of Fuchsian AdS representations are quasi-Fuchsian, J. Differential Geom. 101 (2015), p. 1–46.
[BM] T. Barbot, Q. Mérigot, Anosov AdS representations are quasi-Fuchsian, Groups Geom. Dyn. 6 (2012), p. 441–483.
[BDM] J. Behrstock, C. Drutu, L. Mosher, Thick metric spaces, relative hyperbolicity, and quasi-isometric rigidity, Math. Ann. 344 (2009), p. 543–595.
[BK] N. Benakli, I. Kapovich, Boundaries of hyperbolic groups, in Combinatorial and geometric group theory (New York, 2000/Hoboken, NJ, 2001), p. 39–93, Contemporary Mathematics, vol. 296, American Mathematical Society, Providence, RI, 2002.
[B1] Y. Benoist, Propriétés asymptotiques des groupes linéaires, Geom. Funct. Anal. 7 (1997), p. 1–47.
[B2] Y. Benoist, Automorphismes des cônes convexes, Invent. Math. 141 (2000), p. 149–193.
[B3] Y. Benoist, Convexes divisibles I, in Algebraic groups and arithmetic, Tata Inst. Fund. Res. Stud. Math. 17 (2004), p. 339–374.
[B4] Y. Benoist, Convexes divisibles II, Duke Math. J. 120 (2003), p. 97–120.
[B5] Y. Benoist, Convexes divisibles III, Ann. Sci. Éc. Norm. Sup. 38 (2005), p. 793–832.
[B6] Y. Benoist, Convexes divisibles IV, Invent. Math. 164 (2006), p. 249–278.
[B7] Y. Benoist, A survey on divisible convex sets, Adv. Lect. Math. 6 (2008), p. 1–18.
[Be] M. Bestvina, Questions in Geometric Group Theory, see <https://www.math.utah.edu/~bestvina/eprints/questions-updated.pdf>.
[BeM] M. Bestvina, G. Mess, The boundary of negatively curved groups, J. Amer. Math. Soc. 4 (1991), p. 469–481.
[BPS] J. Bochi, R. Potrie, A. Sambarino, Anosov representations and dominated splittings, J. Eur. Math. Soc. 21 (2019), p. 3343–3414.
[Bo] N. Bourbaki, Elements of Mathematics. General Topology, Parts I–IV, Hermann, Paris, 1966.
[BCLS] M. Bridgeman, R. D. Canary, F. Labourie, A. Sambarino, The pressure metric for Anosov representations, Geom. Funct. Anal. 25 (2015), p. 1089–1179.
[BIW1] M. Burger, A. Iozzi, A. Wienhard, Surface group representations with maximal Toledo invariant, Ann. Math. 172 (2010), p. 517–566.
[BIW2] M. Burger, A. Iozzi, A. Wienhard, Higher Teichmüller spaces: from (2,) to other Lie groups, Handbook of Teichmüller theory IV, p. 539–618, IRMA Lect. Math. Theor. Phys. 19, 2014.
[BIW3] M. Burger, A. Iozzi, A. Wienhard, Maximal representations and Anosov structures, in preparation.
[Bu] H. Busemann, The geometry of geodesics, Academic Press Inc., New York, 1955.
[CGo] S. Choi, W. M. Goldman, The deformation space of convex _2-structures on 2-orbifolds, Amer. J. Math. 127 (2005), p. 1019–1102.
[CLM] S. Choi, G.-S. Lee, L. Marquis, Convex projective generalized Dehn filling, Ann. Sci. Éc. Norm. Sup. 53 (2020), p. 217–266.
[CGe] Y. Choquet-Bruhat, R. Geroch, Global aspects of the Cauchy problem in general relativity, Commun. Math. Phys. 14 (1969), p. 329–335.
[CTT] B. Collier, N. Tholozan, J. Toulisse, The geometry of maximal representations of surface groups into (2,n), Duke Math. J. 168 (2019), p. 2873–2949.
[CLT1] D. Cooper, D. Long, S. Tillmann, On projective manifolds and cusps, Adv. Math. 277 (2015), p. 181–251.
[CLT2] D. Cooper, D. Long, S. Tillmann, Deforming convex projective manifolds, Geom. Topol. 22 (2018), p. 1349–1404.
[CM] M. Crampon, L. Marquis, Finitude géométrique en géométrie de Hilbert, with an appendix by C. Vernicos, Ann. Inst. Fourier 64 (2014), p. 2299–2377.
[Dah] F. Dahmani, Combination of convergence groups, Geom. Topol. 7 (2003), p. 933–963.
[Dan] J. Danciger, A geometric transition from hyperbolic to anti de Sitter geometry, Geom. Topol. 17 (2013), p. 3077–3134.
[DGK1] J. Danciger, F. Guéritaud, F. Kassel, Geometry and topology of complete Lorentz spacetimes of constant curvature, Ann. Sci. Éc. Norm. Supér. 49 (2016), p. 1–56.
[DGK2] J. Danciger, F. Guéritaud, F. Kassel, Convex cocompactness in pseudo-Riemannian hyperbolic spaces, Geom. Dedicata 192 (2018), p. 87–126, special issue Geometries: A Celebration of Bill Goldman's 60th Birthday.
[DGK3] J. Danciger, F. Guéritaud, F. Kassel, Examples and non-examples of convex cocompact groups in projective space, in preparation.
[DGKLM] J. Danciger, F. Guéritaud, F. Kassel, G.-S. Lee, L. Marquis, Convex cocompactness for Coxeter groups, preprint, arXiv:2102.02757.
[DK] J. Danciger, S. Kerckhoff, The transition from quasifuchsian manifolds to AdS globally hyperbolic spacetimes, in preparation.
[DaGrKl] L. Danzer, B. Grünbaum, V. Klee, Helly's theorem and its relatives, in Convexity (Proceedings of the Symposia on Pure Mathematics, vol. 7), p. 101–180, American Mathematical Society, Providence, RI, 1963.
[E] R. Engelking, Dimension theory, North-Holland Mathematical Library, vol. 19, North-Holland Publishing Company, Amsterdam, 1978.
[FG] V. V. Fock, A. B. Goncharov, Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci. 103 (2006), p. 1–211.
[FK] T. Foertsch, A. Karlsson, Hilbert metrics and Minkowski norms, J. Geom. 83 (2005), p. 22–31.
[Go] W. M. Goldman, Convex real projective structures on surfaces, J. Differential Geom. 31 (1990), p. 791–845.
[GGKW] F. Guéritaud, O. Guichard, F. Kassel, A. Wienhard, Anosov representations and proper actions, Geom. Topol. 21 (2017), p. 485–584.
[GW1] O. Guichard, A. Wienhard, Convex foliated projective structures and the Hitchin component for _4(), Duke Math. J. 144 (2008), p. 381–445.
[GW2] O. Guichard, A. Wienhard, Topological invariants of Anosov representations, J. Topol. 3 (2010), p. 578–642.
[GW3] O. Guichard, A. Wienhard, Anosov representations: Domains of discontinuity and applications, Invent. Math. 190 (2012), p. 357–438.
[Gu] Y. Guivarc'h, Produits de matrices aléatoires et applications aux propriétés géométriques des sous-groupes du groupe linéaire, Ergodic Theory Dynam. Systems 10 (1990), p. 483–512.
[H] N. J. Hitchin, Lie groups and Teichmüller space, Topology 31 (1992), p. 339–365.
[JM] D. Johnson, J. J. Millson, Deformation spaces associated to compact hyperbolic manifolds, in Discrete groups in geometry and analysis, p. 48–106, Prog. Math. 67, Birkhäuser, Boston, MA, 1987.
[Ka] M. Kapovich, A note on properly discontinuous actions, see <https://www.math.ucdavis.edu/~kapovich/EPR/prop-disc.pdf>.
[KLPa] M. Kapovich, B. Leeb, J. Porti, Morse actions of discrete groups on symmetric spaces, preprint, arXiv:1403.7671.
[KLPb] M. Kapovich, B. Leeb, J. Porti, Dynamics on flag manifolds: domains of proper discontinuity and cocompactness, Geom. Topol. 22 (2018), p. 157–234.
[KLPc] M. Kapovich, B. Leeb, J. Porti, Some recent results on Anosov representations, Transform. Groups 21 (2016), p. 1105–1121.
[KL] B. Kleiner, B. Leeb, Rigidity of invariant convex sets in symmetric spaces, Invent. Math. 163 (2006), p. 657–676.
[Ko] J.-L. Koszul, Déformation des connexions localement plates, Ann. Inst. Fourier 18 (1968), p. 103–114.
[L] F. Labourie, Anosov flows, surface groups and curves in projective space, Invent. Math. 165 (2006), p. 51–114.
[LM] G.-S. Lee, L. Marquis, Anti-de Sitter strictly GHC-regular groups which are not lattices, Trans. Amer. Math. Soc. 372 (2019), p. 153–186.
[Ma1] L. Marquis, Surface projective convexe de volume fini, Ann. Inst. Fourier 62 (2012), p. 325–392.
[Ma2] L. Marquis, Exemples de variétés projectives strictement convexes de volume fini en dimension quelconque, Enseign. Math. 58 (2012), p. 3–47.
[Me] G. Mess, Lorentz spacetimes of constant curvature (1990), Geom. Dedicata 126 (2007), p. 3–45.
[Q] J.-F. Quint, Groupes convexes cocompacts en rang supérieur, Geom. Dedicata 113 (2005), p. 1–19.
[Rad] J. Radon, Über eine Erweiterung des Begriffs der konvexen Funktionen, mit einer Anwendung auf die Theorie der konvexen Körper, S.-B. Akad. Wiss. Wien 125 (1916), p. 241–258.
[Rag] M. S. Raghunathan, Discrete subgroups of Lie groups, Springer, New York, 1972.
[Se] A. Selberg, On discontinuous groups in higher-dimensional symmetric spaces (1960), in “Collected papers”, vol. 1, p. 475–492, Springer-Verlag, Berlin, 1989.
[Su] D. Sullivan, Quasiconformal homeomorphisms and dynamics II: Structural stability implies hyperbolicity for Kleinian groups, Acta Math. 155 (1985), p. 243–260.
[T] W. P. Thurston, The geometry and topology of three-manifolds, lecture notes, 1980.
[Ve] J. Vey, Sur les automorphismes affines des ouverts convexes saillants, Ann. Sc. Norm. Super. Pisa 24 (1970), p. 641–665.
[We] T. Weisman, Dynamical properties of convex cocompact actions in projective space, preprint, arXiv:2009.10994, 2020.
[Wo] J. A. Wolf, Spaces of constant curvature, sixth edition, AMS Chelsea Publishing, American Mathematical Society, Providence, RI, 2011.
[Z] A. Zimmer, Projective Anosov representations, convex cocompact actions, and rigidity, J. Differential Geom. 119, p. 513–586, 2021.
In error-correcting codes, locality refers to several different ways of quantifying how easily a small amount of information can be recovered from encoded data. In this work, we study a notion of locality called the s-Disjoint-Repair-Group Property (s-DRGP). This notion can interpolate between two very different settings in coding theory: that of Locally Correctable Codes (LCCs) when s is large—a very strong guarantee—and Locally Recoverable Codes (LRCs) when s is small—a relatively weaker guarantee. This motivates the study of the s-DRGP for intermediate s, which is the focus of our paper. We construct codes in this parameter regime which have a higher rate than previously known codes. Our construction is based on a novel variant of the lifted codes of Guo, Kopparty and Sudan. Beyond the results on the s-DRGP, we hope that our construction is of independent interest, and will find uses elsewhere.
Another (much weaker) model of noise is that handled by Locally Recoverable Codes (LRCs) and related notions, which have been increasingly studied recently motivated by applications in distributed storage <cit.>. In this model, c̃∈ (Σ∪{⊥})^N has a constant number of erasures: that is, we are guaranteed that the number of ⊥ symbols in c̃ is at most some constant e = O(1), and further that c_i = c̃_i whenever c̃_i ≠ ⊥. As before, the goal is to recover c_i using as few queries as possible to c̃. Batch codes <cit.> and PIR codes <cit.> are other variants that are interesting in this parameter regime. A key question in both of these lines of work is how to achieve these recovery guarantees with as high a rate as possible. The rate of a code 𝒞 ⊆ Σ^N is defined to be the ratio log_|Σ|(|𝒞|) / N; it captures how much information can be transmitted using such a code. In other words, given N, we seek to find a code 𝒞 ⊆ Σ^N with good locality properties, so that |𝒞| is as large as possible. In the context of the second line of work above, recent work <cit.> has studied (both implicitly and explicitly) the trade-off between rate and something called the s-Disjoint-Repair-Group-Property (s-DRGP) for small s. Informally, 𝒞 has the s-DRGP if any symbol c_i can be obtained from s disjoint query sets c|_S_1, c|_S_2, …, c|_S_s for S_i ⊆ [N]. (Notice that there is no explicit bound on the size of these query sets, just that they must be disjoint). One observation which we will make below is that the s-DRGP provides a natural way to interpolate between the first (LCC) setting and the second (LRC) setting above. More precisely, while the LRC setting corresponds to small s (usually, s = O(1)), the LCC setting is in fact equivalent to the case when s = Ω(N). This observation motivates the study of intermediate s, which is the goal in this paper. Contributions. Before we give a more detailed overview of previous work, we outline the main contributions of this paper. * Constructions of codes with the s-DRGP for intermediate s. We give a construction of a family of codes which have the s-DRGP for s ∼ N^1/4. Our construction can achieve a higher rate than previous constructions with the same property. * A general framework, based on partially lifted codes.
Our codes are based on a novel variant of the lifted codes of Guo, Kopparty and Sudan <cit.>. In that work, with the goal of obtaining LCCs, the authors showed how to construct affine-invariant codes by a "lifting" operation. In a bit more detail, their codes are multivariate polynomial codes, whose entries are indexed by 𝔽_q^m (so N = q^m). These codes have the property that the restriction of each codeword to every line in 𝔽_q^m is a codeword of a suitable univariate polynomial code. (For example, a Reed-Muller code is a subset of a lift of a Reed-Solomon code; the beautiful insight of <cit.> is that in fact the lifted code may be much larger.) In our work, we introduce a version of the lifting operation where we only require that the restriction to some lines lie in the smaller code, rather than the restriction to all lines; we call such codes "partially lifted codes." This partial lifting operation potentially allows for higher-rate codes, and, as we will see, it naturally gives rise to codes with the s-DRGP. One of our main contributions is the introduction of these codes, as well as some machinery which allows us to control their rate. We instantiate this machinery with a particular example, in order to obtain the construction advertised above. We can also recover previous results in the context of this machinery. * Putting the study of the s-DRGP in the context of LRCs and LCCs. While the s-DRGP has been studied before, to the best of our knowledge, it is not widely viewed as a way to interpolate between the two settings described above. One of the goals of this paper is to highlight this property and its potential importance to our understanding of locality, both from the LRC/batch code/PIR code side of things, and from the LCC side.

§.§ Background and related work

As mentioned above, in this work we study the s-Disjoint-Repair-Group Property (s-DRGP). We begin our discussion of the s-DRGP with some motivation from the LRC end of the spectrum, from applications in distributed storage. The following model is common in distributed storage: imagine that each server or node in a distributed storage system is holding a single symbol of a codeword c ∈ 𝒞. Over time, nodes fail, usually one at a time, and we wish to repair them (formally, recovering c_i for some i). Moreover, when they fail, it is clear that they have failed. This naturally gives rise to the second parameter regime described above, where c̃ has a constant number of erasures. Locally recoverable (or repairable) codes (LRCs) <cit.> were introduced to deal with this setting. The guarantee of an LRC[In some works, the guarantee holds for information symbols only, rather than for all codeword symbols; we stick with all symbols here for simplicity of exposition.] with locality Q is that for any i ∈ {1,…,n}, the i'th symbol of the codeword can be determined from a set of at most Q other symbols. There has been a great deal of work recently aimed at pinning down the trade-offs between rate, distance, and the locality parameter Q in LRCs. At this point, we have constructions which have optimal trade-offs between these parameters, as well as reasonably small alphabet sizes <cit.>. However, there are still many open questions; a major question is how to handle a small number of erasures, rather than a single erasure. This may result from either multiple node failures, or from "hot" data being overloaded with requests. There are several approaches in the literature, but the approach relevant to this work is the study of multiple disjoint repair groups.
Given a code 𝒞 ⊆ Σ^N, we say that a set S ⊆ {1,…,N} is a repair group for i ∈ {1,…,N} if i ∉ S, and if there is some function g: Σ^|S| → Σ so that g(c|_S) = c_i for all c ∈ 𝒞. That is, the codeword symbols indexed by S uniquely determine the symbol indexed by i. We say that 𝒞 has the s-Disjoint-Repair-Group Property (s-DRGP) if for every i ∈ {1,…,N}, there are s disjoint repair groups S_1^(i),…,S_s^(i) for i. In the context of LRCs, the parameter s is called the availability of the code. An LRC with availability s is not exactly the same as a code with the s-DRGP (the difference is that, in Definition <ref>, there is no mention of the size Q of the repair groups), but it turns out to be deeply related; it is also directly related to other notions of locality in distributed storage (like batch codes), as well as in cryptography (like PIR codes). We will review some of this work below, and we point the reader to <cit.> for a survey of batch codes, PIR codes, and their connections to LRCs and the s-DRGP. While originally motivated for small s, as we will see below, the s-DRGP is interesting (and has already been implicitly studied) for a wide range of s, from O(1) to Ω(N). For s = o(N), we can hope for codes with very high rate, approaching 1; the question is how fast we can hope for this rate to approach 1. More formally, if K = log_|Σ| |𝒞|, then the rate is K/N, and we are interested in how the gap N-K behaves with N and s. We will refer to the quantity N-K as the co-dimension of the code; when 𝒞 is linear (that is, when Σ = 𝔽 is a finite field and 𝒞 ⊆ 𝔽^N is a linear subspace), then this is indeed the co-dimension of 𝒞 in 𝔽^N. The main question we seek to address in this paper is the following. For a given s and N, what is the smallest codimension N - K of any code with the s-DRGP? In particular, how does this quantity depend on s and N? We know a few things about Question <ref>, which we survey below. However, there are many things about this question which we still do not understand. In particular, the dependence on s is wide open, and this dependence on s is the focus of the current work. Below, we survey the state of Question <ref> both from the LRC end (when s is small) and the LCC end (when s is large). The s-DRGP when s is small. In <cit.>, the s-DRGP was explicitly considered, with a focus on small s (s=2 is of particular interest). In those works, some bounds on the rate and distance of codes with the s-DRGP were derived (some of them in terms of the locality Q). However, for larger s, these bounds degrade. More precisely, <cit.> establish bounds on N - K in terms of Q, s, and the distance of the code, but as s grows these are not much stronger than the Singleton bound.
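To make the definition of disjoint repair groups concrete, here is a standard small example of the 2-DRGP (our own illustrative sketch; it is not one of this paper's constructions): place N = m^2 bits in an m × m grid and require every row and every column to have even parity. Then K = (m-1)^2, so N - K = 2m - 1, and each symbol has two disjoint repair groups, namely the rest of its row and the rest of its column.

```python
import numpy as np

m = 5
rng = np.random.default_rng(1)
msg = rng.integers(0, 2, size=(m - 1, m - 1))   # (m-1)^2 information bits

c = np.zeros((m, m), dtype=int)
c[:m - 1, :m - 1] = msg
c[:m - 1, m - 1] = msg.sum(axis=1) % 2          # row parity bits
c[m - 1, :] = c[:m - 1, :].sum(axis=0) % 2      # column parities (corner is consistent)
assert (c.sum(axis=0) % 2 == 0).all() and (c.sum(axis=1) % 2 == 0).all()

i, j = 2, 3                                     # recover c[i, j] in two disjoint ways
from_row = (c[i, :].sum() - c[i, j]) % 2        # repair group 1: the rest of row i
from_col = (c[:, j].sum() - c[i, j]) % 2        # repair group 2: the rest of column j
assert from_row == from_col == c[i, j]
print("N - K =", 2 * m - 1, "; recovered bit:", c[i, j])
```

Note that this toy code has N - K = 2√(N) - 1, which matches the s√(N) behavior discussed below at s = 2, up to constants.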
The results of <cit.> give an upper bound on the rate of a code in terms of Q and s. One corollary is that the rate satisfies K/N ≤ (s+1)^-1/Q; if we are after high-rate codes, this implies that we must take Q = Ω( ln(s + 1) ), and this implies that the codimension N - K must be at least Ω(N ln(s) / Q). A similar notion to the s-DRGP was introduced in <cit.>, with the application of Private Information Retrieval (PIR). PIR schemes are an important primitive in cryptography, and they have long been linked to constant-query LCCs. In <cit.>, PIR was also shown to be related to the s-DRGP. The work <cit.> introduces PIR codes, which enable PIR schemes with much less storage overhead. It turns out that the requirement for PIR codes is very similar to the s-DRGP.[The only difference is that PIR codes only need to recover information symbols, but possibly with non-systematic encoding.] In the context of PIR codes <cit.>, there are constructions of s-DRGP codes with N - K ≤ O(s √(N)). For s=2, this is known to be tight, and there is a matching lower bound <cit.>. However, it seems difficult to use this lower bound technique to prove a stronger lower bound when s is larger (possibly growing with N). The s-DRGP when s is large. As we saw above, when s is small then the s-DRGP is intimately related to LRCs, PIR codes and batch codes. On the other end of the spectrum, when s is large (say, Ω(N) or Ω(N^1-ϵ)) then it is related to LCCs. When s = Ω(N), then the s-DRGP is in fact equivalent to a constant-query LCC (that is, an LCC as described above, where the number of queries to c̃ is O(1)). The fact that the Ω(N)-DRGP implies a constant-query LCC is straightforward: the correction algorithm to recover c_i is to choose a random j in {1,…,s} and use the repair group S^(i)_j to recover c_i. Since in expectation the size of S^(i)_j is constant, we can restrict our attention only to the constant-sized repair groups. Then, with some constant probability none of the indices in S^(i)_j will be corrupted, and this success probability can be amplified by independent repetitions. The converse is also true <cit.>, and any constant-query LCC has the s-DRGP for s = Ω(N); in fact, this connection is one of the few ways we know how to get lower bounds on LCCs. When s is large, but not as large as Ω(N), there is still a tight relationship with LCCs. By now we know of several high-rate ((1 - α), for any constant α) LCCs with query complexity Q = N^ϵ for any ϵ > 0 <cit.> or even Q = N^o(1) <cit.>. It is easy to see[Indeed, suppose that 𝒞 is an LCC with query complexity Q and error tolerance δ, and let s = δN/Q. In order to obtain s disjoint repair groups for a symbol c_i from the LCC guarantee, we proceed as follows. First, we make one (randomized) set of queries to c; this gives the first repair group. Continuing inductively, assume we have found t ≤ s disjoint repair groups already, covering a total of at most tQ < δN symbols. To get the t+1'st set of queries, we again choose at random as per the LCC requirement. These queries may not be disjoint from the previous queries, but the LCC guarantee can handle errors (and hence erasures) in up to δN positions, so it suffices to query the points which have not been already queried, and treat the already-queried points as unavailable. We repeat this process until t reaches s = δN/Q.] that any LCC with query complexity Q has the s-DRGP for s = Ω(N/Q). Thus, these codes immediately imply high-rate s-DRGP codes with s = Ω(N^1-ϵ) or even larger. (See also <cit.>.)
Conversely, the techniques of <cit.> show how to take high-rate linear codes with the s-DRGP for s = Ω(N^1 - ϵ) and produce high-rate LCCs with query complexity O(N^ϵ') (for a different constant ϵ'). These relationships provide some bounds on the codimension N - K in terms of s: from existing lower bounds on constant-query LCCs <cit.>, we know that any code with the s-DRGP and s = Ω(N) must have vanishing rate. On the other hand, from high-rate LCCs, there exist s-DRGP codes with s = Ω(N^1 - ϵ) and with high rate. However, these techniques do not immediately give anything better than high (constant) rate, while in Question <ref> we are interested in precisely controlling the co-dimension N - K. The s-DRGP when s is intermediate. The fact that the s-DRGP interpolates between the LRC setting for small s and the LCC setting for large s motivates the question of the s-DRGP for intermediate s, say s = log(N) or s = N^c for c < 1/2. Our goal is to understand the answer to Question <ref> for intermediate s. We have only a few data points to answer this question. As mentioned above, the constructions of <cit.> show that there are codes with N - K ≤ s√(N) for s ≤ √(N). However, the best general lower bounds that we have <cit.> can only establish N - K ≥ max{ √(2N), N - N/(s+1)^1/Q }. Above, we recall that Q is a parameter bounding the size of the repair groups; in order for the second term above (from <cit.>) to be o(N), we require Q ≫ ln(s+1); in this case, the second bound on the codimension reads N - K ≥ Ω(N ln(s)/Q). As the size of the repair groups Q may in general be as large as N/s, in our setting this second bound gives better dependence on s, but worse dependence on N. The upper bound of s√(N) is not tight, at least for large s. For s = √(N), there are several classical constructions which have the s-DRGP and with N - K = Θ( N^log_4(3) ); for example, this includes affine geometry codes and/or codes constructed from difference sets (see <cit.>, <cit.>, or <cit.>—we will also recover these in Corollary <ref>). Notice that this is much better than the upper bound of N - K ≤ s√(N), which for s = √(N) would be trivial. However, other than these codes, before this work we did not know of any constructions for s ≪ √(N) which beat the bounds in <cit.> of N - K ≤ s√(N).[We note that there have been some works in the intermediate-s parameter regime which can obtain excellent locality Q but are not directly relevant for Question <ref>. In particular, the work of <cit.> gives a construction of s-DRGP codes with s = Θ(K^1/3 - ϵ) and Q = Θ(K^1/3) for an arbitrarily small constant ϵ; while this work obtains a smaller Q than we will eventually obtain (our results will have Q ∼ √(N)), they are only able to establish high (constant) rate codes, and thus do not yield tight bounds on the co-dimension. The work of <cit.> gives constructions of high-rate fountain codes which have s,Q = Θ(log(N)). As these are rateless codes, again they are not directly relevant to Question <ref>.]
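For a rough sense of scale in this intermediate regime, the following back-of-the-envelope computation (our own, using only the exponents quoted above and below) compares the generic s√(N) codimension with N^log_4(3) and with the N^0.714 achieved in this work.

```python
import math

print(round(math.log(3, 4), 4))      # log_4(3) ~ 0.7925: the exponent at s = sqrt(N)

for N in (2 ** 20, 2 ** 30):
    s = round(N ** 0.25)             # the regime s ~ N^{1/4} studied in this paper
    print(N, s,
          round(s * math.sqrt(N)),   # the bound N - K <= s * sqrt(N) = N^{3/4}
          round(N ** 0.714))         # this work's construction: N^{0.714}
```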
One of the main contributions of this work is to give a construction with s = N^1/4, which achieves codimension N - K = N^0.714. Notice that the bound of s√(N) would be N^0.75 in this case, so this is a substantial improvement. We remark that we do not believe that our construction is optimal, and unfortunately we don't have any deep insight about the constant 0.714. Rather, we stress that the point of this work is to (a) highlight the fact that the s√(N) bound can be beaten for s ≪ √(N), and (b) highlight our techniques, which we believe may be of independent interest. We discuss these in the next section.

§.§ Lifted codes, and our construction

Our construction is based on the lifted codes of Guo, Kopparty and Sudan <cit.>. The original motivation for lifted codes was to construct high-rate LCCs, as described above. However, since then they have found several other uses, for example list-decoding and local-list-decoding <cit.>. The codes are based on multivariate polynomials, and we describe them below. Suppose that ℱ ⊆ 𝔽_q[X,Y] is a collection of bivariate polynomials over a finite field 𝔽_q of order q. This collection naturally gives rise to a code 𝒞 ⊆ 𝔽_q^q^2:

𝒞 := { ⟨ P(x,y) ⟩_(x,y) ∈ 𝔽_q^2 : P ∈ ℱ }.

Above, we assume some fixed order on the elements of 𝔽_q^2, and by ⟨ P(x,y) ⟩_(x,y) ∈ 𝔽_q^2, we mean the vector in 𝔽_q^q^2 whose entries are the evaluations of P in this prescribed order. For example, a bivariate Reed-Muller code is formed by taking ℱ to be the set of all polynomials of total degree at most d. One nice property of Reed-Muller codes is their locality. More precisely, suppose that P(X,Y) is a bivariate polynomial over 𝔽_q of total degree at most d. For an affine line in 𝔽_q^2, parameterized as L(T) = ( α T + β, γ T + δ ), we can consider the restriction P|_L of P to L, given by

P|_L(T) := P( α T + β, γ T + δ ) mod (T^q - T),

where we think of the above as a polynomial of degree at most q-1. It is not hard to see that if P has total degree at most d, then P|_L(T) also has degree at most d; in other words, it is a univariate Reed-Solomon codeword. This property—that the restriction of any codeword to a line is itself a codeword of another code—is extremely useful, and has been exploited in coding theory since Reed's majority logic decoder in the 1950's <cit.>.
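The restriction operation is easy to compute explicitly. The following sketch (our own illustration, not from the paper) implements arithmetic in 𝔽_4 = 𝔽_2[x]/(x^2+x+1), computes P|_L on every parameterized line by interpolating its values, and reports the degrees that occur. The final check, with P = X^2 Y^2 of total degree 4 > d = q-2 = 2, previews the phenomenon discussed next: every restriction of this high-degree polynomial still has degree at most 2, since T^4 ≡ T modulo T^q - T and squaring is additive in characteristic 2.

```python
MOD, Q = 0b111, 4          # F_4 = F_2[x]/(x^2 + x + 1); elements are 2-bit integers

def gf_mul(a, b):
    r = 0
    for i in range(2):     # carry-less multiplication of 2-bit polynomials
        if (b >> i) & 1:
            r ^= a << i
    if (r >> 2) & 1:       # reduce the (single possible) degree-2 term
        r ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def ev(coeffs, x, y):      # coeffs = {(i, j): c} represents sum_{i,j} c X^i Y^j
    v = 0
    for (i, j), c in coeffs.items():
        v ^= gf_mul(c, gf_mul(gf_pow(x, i), gf_pow(y, j)))  # addition in F_4 is XOR
    return v

def restriction_degree(P, al, be, ga, de):
    # values of P along the line L(T) = (al*T + be, ga*T + de)
    vals = [ev(P, gf_mul(al, t) ^ be, gf_mul(ga, t) ^ de) for t in range(Q)]
    # the unique univariate polynomial of degree <= Q-1 with these values;
    # brute force over all 4^4 = 256 coefficient vectors is fine at this size
    for cand in range(Q ** Q):
        cs = [(cand >> (2 * k)) & 3 for k in range(Q)]
        if all(ev({(k, 0): cs[k] for k in range(Q)}, t, 0) == vals[t]
               for t in range(Q)):
            return max((k for k in range(Q) if cs[k]), default=-1)

P = {(2, 2): 1}            # P(X, Y) = X^2 Y^2, total degree 4
degs = {restriction_degree(P, al, be, ga, de)
        for al in range(Q) for be in range(Q)
        for ga in range(Q) for de in range(Q) if (al, ga) != (0, 0)}
print(degs)                # never contains 3: every line sees degree <= q - 2 = 2
```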
A natural question is whether or not there exist any bivariate polynomials P(X,Y) other than those of total degree at most d which have this property. That is, are there polynomials which have high degree, but whose restrictions to lines are always low-degree? In many settings (for example, over the reals, or over prime fields, or over fields that are large compared to the degrees of the polynomials) the answer is no. However, the insight of <cit.> is that there are settings—high degree polynomials over small-characteristic fields—for which the answer is yes. This motivates the definition of lifted codes, which are multivariate polynomial evaluation codes, all of whose restrictions to lines lie in some other base code. Guo, Kopparty and Sudan showed that, in the case above, not only do these codes exist, but in fact they may have rate much higher than the corresponding Reed-Muller code.

Lifted codes very naturally give rise to codes with the s-DRGP. Indeed, consider the bivariate example above, with d = q−2. That is, 𝒞 is the set of codewords arising from evaluations of functions P that have the property that for all lines L: 𝔽_q → 𝔽_q^2, deg(P|_L) ≤ q−2. The restrictions then lie in the parity-check code: we always have ∑_{t ∈ 𝔽_q} P|_L(t) = 0. Thus, for every coordinate of a codeword in 𝒞—which corresponds to an evaluation point (x,y) ∈ 𝔽_q^2—there are q disjoint repair groups for this symbol, corresponding to the q affine lines through (x,y). However, it's not obvious how to use these codes to obtain the s-DRGP for s ≪ √(N); increasing the number of variables causes s to grow, and this is the approach taken in <cit.> to obtain high-rate LCCs. Since we are after smaller s, we take a different approach. We stick with bivariate codes, but instead of requiring that the functions P ∈ ℱ restrict to low-degree polynomials on all affine lines L, we make this requirement only for some lines. This allows us to achieve the s-DRGP (if there are s lines through each point), while still being able to control the rate. We hope that our construction—and the machinery we develop to get a handle on it—may be useful more generally. In the next section, we will set up our notation and give an outline of this approach, after a brief review of the notation we will use throughout the paper.

Outline. Next, in Section <ref>, we define partially lifted codes, and give a technical overview of our approach. This approach consists of two parts. The first is a general framework for understanding the dimension of partially lifted codes of a certain form, which we then discuss more in Section <ref>. The second part is to instantiate this framework, which we do in Section <ref>. The bulk of the work, in Section <ref>, is devoted to analyzing a particular construction which will give rise to the s-DRGP code with s = N^{1/4} described above. In Section <ref> we mention a few other ways of constructing codes within this framework that seem promising.

§ TECHNICAL OVERVIEW

In this section, we give a high-level overview of our construction and approach. We begin with some basic definitions and notation.

§.§ Notation and basic definitions

We study linear codes 𝒞 ⊆ 𝔽_q^N of block length N over an alphabet of size q. We will always assume that 𝔽_q has characteristic 2, and write q = 2^ℓ. (We note that this is not strictly necessary for our techniques to apply—the important thing is only that the field is of relatively small characteristic—but it simplifies the analysis, and so we work in this special case). The specific codes that we consider are polynomial evaluation codes.
Formally, let ℱ be a collection of m-variate polynomials over 𝔽_q. Letting N = q^m, we may identify ℱ with a code 𝒞 ⊆ 𝔽_q^N as in (<ref>); we assume that there is some fixed ordering on the elements of 𝔽_q^m to make this well-defined. For a polynomial P ∈ 𝔽_q[X_1,…,X_m], we write its corresponding codeword as

eval(P) = ⟨ P(x_1,…,x_m) ⟩_{(x_1,…,x_m) ∈ 𝔽_q^m} ∈ 𝒞.

We will only focus on m = 1, 2, as we consider the restriction of bivariate polynomial codes to lines, which results in univariate polynomial codes. Formally, a (parameterization of an) affine line is a map L: 𝔽_q → 𝔽_q^2, of the form

L(T) = (α T + β, γ T + δ)

for α, β, γ, δ ∈ 𝔽_q. We say that two parameterizations L, L' are equivalent if they result in the same line as a set:

{ L(t) : t ∈ 𝔽_q } = { L'(t) : t ∈ 𝔽_q }.

We denote the restriction of a polynomial P ∈ 𝔽_q[X,Y] to L by P|_L: For a line L : 𝔽_q → 𝔽_q^2 with L(T) = (L_1(T), L_2(T)), and a polynomial P : 𝔽_q^2 → 𝔽_q, we define the restriction of P on L, denoted P|_L : 𝔽_q → 𝔽_q, to be the unique polynomial of degree at most q−1 so that P|_L(T) = P(L_1(T), L_2(T)). We note that the definition above makes sense, because all functions f: 𝔽_q → 𝔽_q can be written as polynomials of degree at most q−1 over 𝔽_q; in this case, we have P|_L(T) = P(L_1(T), L_2(T)) mod (T^q − T).

Finally, we'll need some tools for reasoning about integers and their binary expansions. Let m < q be a positive integer. If m = ∑_{i=0}^{ℓ−1} m_i 2^i, where m_i ∈ {0, 1}, then we let B(m) = { i ∈ {0, …, ℓ−1} : m_i = 1 }. That is, B(m) is the set of indices where the binary expansion of m has a 1. We say that an integer m lies in the 2-shadow of another integer n if B(m) ⊆ B(n): For any two integers m, n < q, we say that m lies in the 2-shadow of n, denoted m ≤_2 n, if B(m) ⊆ B(n). Equivalently, letting m = ∑_{i=0}^{ℓ−1} m_i 2^i and n = ∑_{i=0}^{ℓ−1} n_i 2^i, we write m ≤_2 n if for all i ∈ {0, …, ℓ−1}, whenever m_i = 1 then also n_i = 1.

The reason that we are interested in 2-shadows is because of Lucas' Theorem, which characterizes when binomial coefficients are even or odd.[Lucas' Theorem holds more generally for any prime p, but in this work we are only concerned with p = 2, and so we state this special case here.] For any m, n ∈ ℤ, \binom{n}{m} ≡ 0 (mod 2) exactly when m ≰_2 n.

Finally, for integers a, b, s, we will say a ≡_s b if a is equal to b modulo s. For a positive integer n, we use [n] to denote the set [n] = {0, …, n−1}.

§.§ Partially lifted codes

With the preliminaries out of the way, we proceed with a description of our construction and techniques. As alluded to above, our codes will be bivariate polynomial codes, which are “partial lifts" of parity check codes. Let ℱ_0 ⊆ 𝔽_q[T] be a collection of univariate polynomials, and let ℒ be a collection of parameterizations of affine lines L: 𝔽_q → 𝔽_q^2. We define the partial lift of ℱ_0 with respect to ℒ to be the set

ℱ = { P ∈ 𝔽_q[X,Y] : for all L ∈ ℒ, P|_L ∈ ℱ_0 }.

We make a few remarks about Definition <ref> before proceeding. We remark that the definition above allows ℒ to be a collection of parameterizations of lines. A priori, it is possible that equivalent parameterizations may behave very differently with respect to ℱ_0, and it is also possible to include several equivalent parameterizations in ℒ. In this work, ℱ_0 will always be affine-invariant (in particular, it will just be the set of polynomials of degree strictly less than q−1), and so if L and L' are equivalent, then P|_L ∈ ℱ_0 if and only if P|_{L'} ∈ ℱ_0. Thus, these issues won't be important for this work.
This definition works just as well for m-variate partial lifts, and we hope that further study will explore this direction. However, as all of our results are for bivariate codes, we will stick to the bivariate case to avoid having to introduce another parameter.

Let ℱ_0 := { P ∈ 𝔽_q[X] : deg(P) < q−1 }. Then it is not hard to see that the code 𝒞_0 = { eval(P) : P ∈ ℱ_0 } is just the parity-check code,

𝒞_0 = { c ∈ 𝔽_q^q : ∑_{i=1}^{q} c_i = 0 }.

Indeed, for any d < q−1, we have ∑_{x ∈ 𝔽_q} x^d = 0.

We will construct codes with the s-DRGP by considering codes that are partial lifts of ℱ_0. We first observe that such codes, with an appropriate set of lines ℒ, will have the s-DRGP. Indeed, suppose we wish to recover a particular symbol, given by P(x,y) for (x,y) ∈ 𝔽_q^2. Let L^{(1)}, …, L^{(s)} ∈ ℒ be s distinct (non-equivalent) lines that pass through (x,y); say they are parameterized so that L^{(j)}(0) = (x,y). Then the s disjoint repair groups are the sets of indices corresponding to S_j := { L^{(j)}(t) : t ∈ 𝔽_q ∖ {0} }. For any P in the partial lift of ℱ_0, we have

P|_{L^{(j)}}(0) = ∑_{t ∈ 𝔽_q ∖ {0}} P|_{L^{(j)}}(t),

which means that

P(x,y) = ∑_{(a,b) ∈ S_j} P(a,b).

That is, P(x,y) can be recovered from the coordinates of eval(P) indexed by S_j, as desired. Finally we observe that the S_j are all disjoint, as the lines are all distinct, and intersect only at (x,y). We summarize the above discussion in the following observation.

Suppose that ℱ_0 = { P ∈ 𝔽_q[T] : deg(P) < q−1 }, and let ℒ be any collection of parameterizations of affine lines so that every point in 𝔽_q^2 is contained in at least s non-equivalent elements of ℒ. Let ℱ be the bivariate partial lift of ℱ_0 with respect to ℒ. Then the code 𝒞 ⊆ 𝔽_q^{q^2} corresponding to ℱ is a linear code with the s-DRGP.

To save on notation later, we say that a polynomial P: 𝔽_q^2 → 𝔽_q restricts nicely on a line L: 𝔽_q → 𝔽_q^2 if P|_L has degree strictly less than q−1.

Thus, to define our construction, we have to define the collection ℒ of lines used in Definition <ref>. We will actually develop a framework that can handle a family of such collections, but for intuition in this section, let us just consider lines L(T) = (T, α T + β) where α lives in a multiplicative subgroup G_s of 𝔽_q^* of size s, and β ∈ 𝔽_q. That is, we are essentially restricting the slope of the lines to lie in a multiplicative subgroup. It is not hard to see that every point (x,y) ∈ 𝔽_q^2 has s non-equivalent lines in ℒ that pass through it. Following Observation <ref>, the resulting code will immediately have the s-DRGP. The only question is, what is the rate of this code? Equivalently, we want to know: How many polynomials P ∈ 𝔽_q[X,Y] have deg(P|_L) < q−1 for all L ∈ ℒ, where ℒ is as described above?

In <cit.>, Guo, Kopparty and Sudan develop some machinery for answering this question when ℒ is the set of all affine lines. What they show in that work is that in fact the (fully) lifted code is affine-invariant, and is equal to the span of the monomials P(X,Y) = X^a Y^b so that deg(P|_L) < q−1 for all affine lines L. We might first hope that this is the case for partial lifts—but then upon reflection we would immediately retract this hope, because it turns out that we do not get any more monomials this way: Theorem <ref> establishes that if a monomial restricts nicely on even one line of the form (T, α T + β) (for nonzero α, β), then in fact it restricts nicely on all such lines. In fact, the partial lift is not in general affine-invariant, and this is precisely where we are able to make progress.
More precisely, there may be polynomials P(X,Y) of the form P(X,Y) = X^{a_1}Y^{b_1} + X^{a_2}Y^{b_2} which are contained in the partial lift ℱ, but so that X^{a_1}Y^{b_1}, X^{a_2}Y^{b_2} ∉ ℱ. This gives us many more polynomials to use in a basis for ℱ than just the relevant monomials, and allows us to construct families ℱ of larger dimension. We emphasize that breaking affine-invariance is a key departure from <cit.>. In some sense, it is not surprising that we are able to make progress by doing this: the assumption of affine-invariance is one way to prove lower bounds on locality <cit.>.

This is also where our techniques diverge from those of <cit.>. Because of their characterization of affine-invariant codes, that work focused on understanding the dimension of the relevant set of monomials. This is not sufficient for us, and so to get a handle on the dimension of our constructions, we must study more complicated polynomials. This may seem daunting, but we show—perhaps surprisingly—that one can make a great deal of progress by considering only the additional “more complicated" polynomials of the form (<ref>), which are arguably the simplest of the “more complicated" polynomials.

In order to obtain a lower bound on the dimension of ℱ, our strategy is to get a handle on the dimension of the space of these binomials (<ref>). If we can show that there are many linearly independent such binomials, then the answer to Question <ref> must be “lots." Following this strategy, we examine binomials of the form (<ref>), and we ask, for which a_1, b_1, a_2, b_2 and which L(T) = (T, α T + β) does P(X,Y) restrict nicely? Our main tool is Lucas's Theorem (Theorem <ref>), which was also used in <cit.>. To see why this is useful, consider the restriction of a monomial P(X,Y) = X^a Y^b to a line L(T) = (T, α T + β). We obtain

P|_L(T) = T^a (α T + β)^b = ∑_{i ≤ b} \binom{b}{i} α^i β^{b−i} T^{a+i}.

Above, the binomial coefficient \binom{b}{i} is shorthand for the sum of 1 with itself \binom{b}{i} times. Thus, in a field of characteristic 2, this is either equal to 1 or equal to 0; Lucas's theorem tells us which it is. This means that our question reduces to asking, when does the coefficient of T^{q−1} vanish? The above gives us an expression for this coefficient, and allows us to compute an answer, in terms of the binary expansions of a and b. So far, this is precisely the approach of <cit.>.

From here, we turn to the binomials of the form (<ref>). When do these restrict nicely? As above, we may compute the coefficient of the T^{q−1} term and examine it. Fortunately, when the set of lines ℒ is chosen as above, the number of linearly independent binomials that restrict nicely ends up having a nice expression, in terms of the number of non-empty equivalence classes of a particular relation defined by the binary expansion of the numbers 1, …, q−1; this is our main technical theorem (Theorem <ref>, which is proved in Section <ref>).

The approach of Section <ref> holds for more general families than the ℒ described above; instead of taking α in a multiplicative subgroup of 𝔽_q^*, we may alternately restrict β, or restrict both. However, numerical calculations indicated that the choice above (where α is in a multiplicative subgroup of order s) is a good one, so for our construction we make this choice and we focus on that for our formal analysis in Section <ref>.

In order to get our final construction and obtain the results advertised above, it suffices to count these equivalence classes.
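Everything in the argument runs on binary expansions and the parity criterion of Lucas' theorem, so it is convenient to have these primitives in executable form. The following is a minimal Python sketch (ours, not from the paper; the function names are our own), cross-checked against exact binomial coefficients:

from math import comb

def B(m):
    """The set of indices where the binary expansion of m has a 1."""
    return {i for i in range(m.bit_length()) if (m >> i) & 1}

def shadow_le(m, n):
    """The 2-shadow relation m <=_2 n, i.e. B(m) is a subset of B(n)."""
    return m & ~n == 0

# Lucas' theorem for p = 2: binom(n, m) is odd exactly when m <=_2 n.
for n in range(128):
    for m in range(n + 1):
        assert (comb(n, m) % 2 == 1) == shadow_le(m, n)
print("Lucas' theorem (p = 2) verified for all n < 128")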
For the result advertised in the introduction, we choose the order of the multiplicative subgroup to be s = 2^{ℓ/2} − 1 = √(q) − 1. Then, we use an inductive argument in Section <ref> to count the resulting equivalence classes, obtaining the bounds advertised above. More precisely, we obtain the following theorem.

Suppose that q = 2^ℓ for even ℓ, and let N = q^2 − 1. There is a linear code 𝒞 over 𝔽_q of length N and dimension K ≥ N − O(N^{0.714}) which has the s-DRGP for s = √(q) − 2 = (N+1)^{1/4} − 2.

We remark that the statement of the theorem differs slightly from the informal description above; in our analysis, we will puncture the origin, and ignore lines that go through the origin; that is, our codes will have length q^2 − 1, rather than q^2, and the number of lines through every point will be s−1, rather than s, as it makes the calculations somewhat easier and does not substantially change the results.

§.§ Discussion and open questions

Before we dive into the technical details in Section <ref>, we close the front matter with some discussion of open questions left by our work and our approach. We view the study of the s-DRGP for intermediate s to be an important step in understanding locality in general, since the s-DRGP nicely interpolates between the two extremes of LRCs and LCCs. When s = 2, we completely understand the answer to Question <ref>. However, by the time s reaches Ω(N), this becomes a question about the best rate of constant-query LCCs, which is a notoriously hard open problem. It is our hope that by better understanding the s-DRGP, we can make progress on these very difficult questions.

The main question left by our work is Question <ref>, which we do not answer. What is the correct dependence on s in the codimension of codes with the s-DRGP? We have shown that it is not s√(N), even for s ≪ √(N). However, we have no reason to believe that our construction is optimal.

Our work also raises questions about partially lifted codes. These do not seem to have been studied before. The most immediate question arising from our work is to improve or generalize our approach; in particular, is our analysis tight? Our approach proceeds by counting the binomials of the form (<ref>). This is in principle lossy, but empirical simulations suggest that at least in the setting of Theorem <ref>, this approach is basically tight. Are there situations in which this is not tight? Or can we prove that it is tight in any situation? Finally, are there other uses of partially lifted codes? As with lifted codes, we hope that these prove useful in a variety of settings.

§ FRAMEWORK

As discussed in the previous section, the proof of Theorem <ref> is based on the partially lifted codes of Definition <ref>. In this section, we lay out the partially lifted codes we consider, as well as the basic tools we need to analyze them. As before, we say that a polynomial P: 𝔽_q^2 → 𝔽_q restricts nicely to a line L: 𝔽_q → 𝔽_q^2 if P|_L has degree strictly less than q−1. We will consider partial lifts of the parity-check code with respect to a collection of affine lines ℒ; reasoning about the rate of this code will amount to reasoning about the polynomials which restrict nicely to lines in ℒ. To ease the computations, we will form our family ℒ out of lines that have a simple parameterization:

We say a line L : 𝔽_q → 𝔽_q^2 is simple if it can be written in the form L(T) = (T, α T + β), with α, β ≠ 0.

Notice that this rules out lines through the origin. At the end of the day, we will puncture our code at the origin to achieve our final result.
Note also that no two simple parameterizations of lines are equivalent to each other (that is, they form distinct lines as sets), so as we go forward, we may apply Observation <ref> without worry of the repair groups coinciding. We consider a family of constructions, indexed by parameters s and t, so that s, t | q−1. This family will be the partial lift with respect to the following set of simple lines.

Let ℒ_{s,t} be the family of simple lines

ℒ_{s,t} = { L(T) = (T, α T + β) : α ∈ G_s, β ∈ G_t }.

For the rest of the paper, we will study the following construction, for various choices of s and t.

Suppose that s, t | q−1, and let G_s, G_t ≤ 𝔽_q^* be multiplicative subgroups of 𝔽_q^* of orders s and t, respectively. That is, G_s = { x ∈ 𝔽_q^* : x^s = 1 } and G_t = { x ∈ 𝔽_q^* : x^t = 1 }. Let ℒ_{s,t} be as in Definition <ref>, and let ℱ_0 be the set of univariate polynomials of degree strictly less than q−1. Define ℱ_{s,t} to be the partial lift of ℱ_0 with respect to ℒ_{s,t}.

Our main theorem, which we will prove in the rest of this section, is a characterization of the dimension of ℱ_{s,t} as in Construction <ref>. (We recall the definition of ≤_2 from Definition <ref> above).

Suppose that s, t | q−1. For nonnegative integers i < s, j < t, define

e(s,t) = |{ (i,j) : i < s and j < t, so that there is some (m, n) ∈ [q]^2 with m ≡_s i, n ≡_t j, and n ≤_2 m }|.

Then the dimension of ℱ_{s,t} ⊆ 𝔽_q[X, Y] is at least dim(ℱ_{s,t}) ≥ q^2 − e(s,t).

Theorem <ref> may seem rather mysterious. As we will see, the reason for the expression e(s,t) is because it comes up in counting the number of binomials of the form (<ref>) that restrict nicely on lines in ℒ_{s,t}. We'll re-state Theorem <ref> later as Theorem <ref> in Section <ref>, after we have developed the notation to reason about e(s,t), and we will prove it there. The reason that Theorem <ref> is useful is that for some s and t, it turns out to be possible to get a very tight handle on e(s,t). This leads to the quantitative result in Theorem <ref>, which we will prove in Section <ref>. For now, we focus on proving Theorem <ref>. Our starting point is the work of <cit.>; we summarize the relevant points below in Section <ref>.

§.§ Basic Setup: Lucas' Theorem and Monomials

In <cit.>, Guo, Kopparty and Sudan give a characterization of lifted codes. In our setting, their work shows that when the set ℒ is the set of all affine lines, then the lifted code ℱ is affine invariant and in fact is equal to the span of the monomials which restrict nicely. In the case where the number of variables is large, or the base code ℱ_0 is more complicated than a parity-check code, <cit.> provides some bounds, but it seems quite difficult to get a tight characterization of these monomials. However, for bivariate lifts of the parity-check code, it is actually possible to completely understand the situation, and this was essentially done in <cit.>. We review their approach here.

First, we use Lucas' Theorem (Theorem <ref>) to characterize which monomials X^a Y^b restrict nicely to simple lines. Theorem <ref> follows from the analysis in <cit.>; we provide the proof below for completeness.

Suppose a + b < 2(q−1) and let P(X, Y) = X^a Y^b. Then for all simple lines L(T) = (T, α T + β), P|_L has degree < q−1 if and only if q−1−a ≰_2 b. Further, if q−1−a ≤_2 b, then P|_L is a degree q−1 polynomial with leading coefficient α^{−a} β^{b+a}.

Let L(T) = (T, α T + β) be a simple line, and let P(X,Y) = X^a Y^b.
We compute the restriction of P to L, and obtain

P|_L = T^a (α T + β)^b = ∑_{i=0}^{b} \binom{b}{i} α^i β^{b−i} T^{a+i},

where in the above the binomial coefficient \binom{b}{i} is shorthand for the sum of 1 \binom{b}{i} times in 𝔽_q. Since q is of characteristic 2, this is either 0 or 1. Because a + b < 2(q−1), the only i so that T^{a+i} = T^{q−1} is i = q−1−a. We thus compute the degree q−1 term as

\binom{b}{q−1−a} α^{q−1−a} β^{b−(q−1−a)} T^{q−1} = \binom{b}{q−1−a} α^{−a} β^{b+a} T^{q−1}.

To see when this term vanishes, we turn to Lucas' Theorem (Theorem <ref>), which implies that \binom{b}{q−1−a} is even exactly when q−1−a ≰_2 b. Because 𝔽_q has characteristic 2, this means that \binom{b}{q−1−a} = 0 in 𝔽_q, and so the coefficient on T^{q−1} is zero whenever q−1−a ≰_2 b, as desired. To see the second part of the theorem, suppose that q−1−a ≤_2 b, so Lucas' Theorem implies that \binom{b}{q−1−a} is odd, and is equal to 1 in 𝔽_q. In this case, we can see that the coefficient on T^{q−1} is α^{−a} β^{b+a}, as desired.

Theorem <ref> implies that whether a monomial P(X,Y) = X^a Y^b restricts nicely to a simple line L is independent of the choice of L. Thus it makes sense to consider this a property of the monomial itself. We say that a monomial P(X, Y) = X^a Y^b with 0 ≤ a, b ≤ q−1 is good if it restricts nicely on all simple lines.

In Theorem <ref>, we required a + b < 2(q−1), which does not cover the monomial P_*(X,Y) = X^{q−1}Y^{q−1}. However, in Definition <ref>, we allow a = b = q−1, and in fact according to this definition P_*(X,Y) is good. Indeed, when we compute (X^{q−1}Y^{q−1})|_L for any simple line L(T) = (T, α T + β), there are two choices of i in (<ref>) that contribute to the T^{q−1} term: i = 0 and i = q−1. In particular, the coefficient on T^{q−1} is

\binom{q−1}{0} α^0 β^{q−1} + \binom{q−1}{q−1} α^{q−1} β^0 = 1 + 1 = 0.

Since we consider only simple lines in our work, we will count P_*(X,Y) as good, in addition to the monomials covered by Theorem <ref>. We note that there are lines L which are not simple for which deg(P_*|_L) = q−1, so it would not be included as “good" in the analysis of <cit.>. (In their language, P_* does not live in the lift of the degree set {0, …, q−2}).

There are q^2 − 3^ℓ + 1 good monomials.

In order to count the number of good monomials, we notice that a monomial P(X, Y) = X^a Y^b with a + b < 2(q−1) is not good when q−1−a ≤_2 b, i.e., B(q−1−a) ⊆ B(b). Since B(q−1−a) is exactly the complement of B(a), such a P is not good if and only if B(a)^c ⊆ B(b), where B(a)^c = {0, …, ℓ−1} ∖ B(a). Suppose that the binary expansion of a is a_{ℓ−1} ⋯ a_1 a_0 and that the binary expansion of b is b_{ℓ−1} ⋯ b_1 b_0, with a_i, b_i ∈ {0, 1}. If P is not good, then there are three options for each i: a_i = 0 and b_i = 1, or a_i = 1 and b_i = 0, or a_i = b_i = 1. This yields 3^ℓ monomials which are not good. However, we must exclude the case where a = b = q−1, because (as per Remark <ref>), X^{q−1}Y^{q−1} is a good monomial. Thus there are 3^ℓ − 1 total monomials which are not good. Because there are q^2 monomials total, there are q^2 − 3^ℓ + 1 good monomials.

At this point, we have recovered the codes of Theorem 1.2 in <cit.>, up to the technicalities about simple lines vs. all lines. Following Observation <ref>, these codes have the s-DRGP for s = q−1; indeed, there are q−1 simple lines through every non-zero point of 𝔽_q^2. The dimension of these codes is at least the number of monomials that they contain (indeed, all monomials are linearly independent), which by the above is at least q^2 − 3^ℓ + 1 = (N+1) − (N+1)^{log_4(3)} + 1.
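This count is easy to confirm by brute force. The short Python sketch below (ours, not from the paper) applies the bit criterion of the theorem directly, treating X^{q−1}Y^{q−1} as good per Remark <ref>:

for ell in range(1, 8):
    q = 1 << ell
    # X^a Y^b is good iff q-1-a is not <=_2 b (the theorem above), except
    # that (a, b) = (q-1, q-1) is also good (the remark above).
    good = sum(1 for a in range(q) for b in range(q)
               if ((q - 1 - a) & ~b) != 0 or (a, b) == (q - 1, q - 1))
    assert good == q * q - 3 ** ell + 1
print("the number of good monomials is q^2 - 3^l + 1 for l = 1, ..., 7")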
There are linear codes over 𝔽_q of length N = q^2 − 1 with dimension K ≥ N + 2 − (N+1)^{log_4(3)} which have the s-DRGP for s = q−1 = √(N+1) − 1.

We remark that this just as easily produces codes of length N = q^2 with the s-DRGP for s = q and dimension K = N − N^{log_4(3)}, by allowing non-simple lines. In what follows, simple lines will be much easier to work with, so we state things this way above for continuity (even though it looks a bit more messy). We note that this recovers the results of one of the classical constructions of the s-DRGP for s = √(N) mentioned in the introduction (and this is not an accident: these codes are in fact the same as affine geometry codes). In the next section, we show how to use the relaxation to partial lifts in order to create codes with the s-DRGP for s ≪ √(N).

§.§ Partially lifted codes

In this section we extend the analysis above to partial lifts. The work of <cit.> characterizes the polynomials which restrict nicely on all lines L : 𝔽_q → 𝔽_q^2: they show that this is exactly the span of the good monomials (except the special monomial P_* of Remark <ref>, which restricts to degree lower than q−1 only on simple lines). However, since our goal is to obtain codes with the s-DRGP for s ≪ √(N), increasing the dimension while decreasing s, we would like to allow for more polynomials. Thus, as in Definition <ref>, we will consider polynomials which restrict nicely only on some particular subset ℒ of simple lines. We would like to find a subset ℒ such that the space of polynomials which restrict nicely on all lines in ℒ has large dimension. Additionally, we would like to guarantee the s-DRGP by ensuring that, for every point (x, y), there are many lines in ℒ that pass through (x, y). Relaxing requirements in this manner will allow us to get codes with good rate and locality trade-offs.

Theorem <ref> shows that if a monomial restricts nicely on one simple line, it will restrict nicely on all simple lines. This means that in order to find a larger space of polynomials, we cannot only consider monomials. Towards this end, we will consider binomials of the form P(X,Y) = X^{a_1}Y^{b_1} + X^{a_2}Y^{b_2}. That is, we will look only at binomials with both coefficients equal to 1. We note that this ability to extend beyond monomials is possible crucially because our partially lifted codes are not affine-invariant. While affine-invariance allowed <cit.> to get a beautiful characterization of (fully) lifted codes, it also greatly restricts the flexibility of these codes. By breaking affine-invariance, we also break some of the rigidity of these constructions. This is in some sense not surprising: affine invariance is often exploited in order to prove lower bounds on locality <cit.>.

§.§.§ Which binomials play nice with which lines?

We would like to characterize which binomials of the form (<ref>) restrict nicely on which lines. Unlike the case with monomials, now this will depend on the line as well as on the binomial. When both individual terms in the binomial are good monomials, the binomial will certainly restrict nicely. However, if this is not the case, then the binomial could still restrict nicely, if the contributions to the leading coefficient of P|_L from the two terms cancel with each other. We start with a lemma characterizing the leading coefficient produced by restricting the sum of two not good monomials, and then state and prove Lemma <ref>, which characterizes which binomials restrict nicely to which lines.
Suppose that P_1(X, Y) = X^{a_1}Y^{b_1} and P_2(X, Y) = X^{a_2}Y^{b_2} are not good monomials. Let L(T) = (T, α T + β) be a simple line. Then the coefficient of T^{q−1} in the restriction (P_1 + P_2)|_L is

α^{−a_1} β^{b_1+a_1} + α^{−a_2} β^{b_2+a_2}.

Because P_1 and P_2 are not good, Theorem <ref> implies that P_1|_L has degree q−1 with leading coefficient α^{−a_1} β^{b_1+a_1}, and similarly P_2|_L has degree q−1 with leading coefficient α^{−a_2} β^{b_2+a_2}. Because (P_1 + P_2)|_L = P_1|_L + P_2|_L, the coefficient of T^{q−1} in the restriction (P_1 + P_2)|_L is in fact α^{−a_1} β^{b_1+a_1} + α^{−a_2} β^{b_2+a_2}, as desired.

Lemma <ref> immediately implies the following characterization of when a binomial P of the form (<ref>) restricts nicely to a simple line L.

Let P(X, Y) = X^{a_1}Y^{b_1} + X^{a_2}Y^{b_2}, and let L(T) = (T, α T + β) be a simple line. Then P restricts nicely to L if and only if one of the following two conditions is met: (a) both X^{a_1}Y^{b_1} and X^{a_2}Y^{b_2} are good, or (b) neither X^{a_1}Y^{b_1} nor X^{a_2}Y^{b_2} is good, and

α^{a_2−a_1} = β^{b_2−b_1+a_2−a_1}.

First, we show that if P and L meet either of these conditions, then P|_L has degree less than q−1. If P and L meet condition (a), then this follows immediately from the definition of a good monomial. On the other hand, if P and L meet condition (b), then this follows from Lemma <ref>, using the assumption that α, β ≠ 0. If α^{a_2−a_1} = β^{b_2−b_1+a_2−a_1}, then α^{−a_1} β^{b_1+a_1} = α^{−a_2} β^{b_2+a_2}. Because 𝔽_q has characteristic 2, the coefficient on T^{q−1} of P|_L is

α^{−a_1} β^{b_1+a_1} + α^{−a_2} β^{b_2+a_2} = 0.

For the other direction, we show that if a binomial X^{a_1}Y^{b_1} + X^{a_2}Y^{b_2} does not meet either condition on some line L, then the restriction P|_L will have a non-zero coefficient on T^{q−1}. If exactly one of the terms of the binomial is a good monomial, the coefficient on T^{q−1} will be determined by the other term and cannot be zero. This leaves only the case where neither term restricts nicely along L, but α^{a_2−a_1} ≠ β^{b_2−b_1+a_2−a_1}, which implies that the coefficient on T^{q−1} in P|_L is α^{−a_1} β^{b_1+a_1} + α^{−a_2} β^{b_2+a_2} ≠ 0. Thus, P does not restrict nicely on L.

We are now primarily interested in how to satisfy (<ref>) in the second case of Lemma <ref>. We will do this by focusing on the special case[One might ask, why this special case? Why not consider α^{a_2−a_1} = β^{b_2−b_1+a_2−a_1} = c for some c ∈ 𝔽_q^* ∖ {1}? This divides the lines and binomials into equivalence classes based on the value of c, so that each class of binomials restricts nicely on every line in the corresponding class of lines. Unfortunately, it turns out that these classes are relatively small, and do not produce useful codes. Thus we focus on the case where both sides are equal to 1.] where α^{a_2−a_1} = β^{b_2−b_1+a_2−a_1} = 1. We address this case with the next corollary.

Let s and t divide q−1, and let G_s = { x ∈ 𝔽_q^* : x^s = 1 } and G_t = { x ∈ 𝔽_q^* : x^t = 1 }. Let ℒ = { (T, α T + β) : α ∈ G_s, β ∈ G_t } as in Definition <ref>. Suppose that P(X, Y) = X^{a_1}Y^{b_1} + X^{a_2}Y^{b_2} is a binomial so that neither term is good. Suppose that a_1 ≡ a_2 (mod s) and a_1 + b_1 ≡ a_2 + b_2 (mod t). Then for all L ∈ ℒ, P restricts nicely to L.

Let L ∈ ℒ be a simple line with L(T) = (T, α T + β). If a_1 ≡ a_2 (mod s) and a_1 + b_1 ≡ a_2 + b_2 (mod t), we know that a_2 − a_1 = c_1 s and b_2 − b_1 + a_2 − a_1 = c_2 t for some integers c_1 and c_2. Then, because α ∈ G_s and β ∈ G_t, we can see that α^{a_2−a_1} = α^{c_1 s} = 1, and similarly β^{b_2−b_1+a_2−a_1} = β^{c_2 t} = 1. Thus α^{a_2−a_1} = β^{b_2−b_1+a_2−a_1}, so P|_L must have degree less than q−1.
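The corollary can be checked concretely. The following Python sketch (ours, not from the paper) builds GF(16) from the primitive polynomial x^4 + x + 1, takes s = 3 and t = 15, and tests the binomial X^{15} + X^{12}Y^3: neither term is good, but 15 ≡ 12 (mod 3) and 15 + 0 ≡ 12 + 3 (mod 15). It uses the (standard) fact that, in characteristic 2, the sum of a restriction's values over a line equals its T^{q−1} coefficient, so the restriction has degree < q−1 exactly when the values on the line sum to zero:

Q = 16                          # q = 2^4; GF(16) via x^4 + x + 1
EXP, LOG = [0] * (Q - 1), {}
v = 1
for i in range(Q - 1):          # exp/log tables for the primitive element x
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & Q:
        v ^= 0b10011            # reduce modulo x^4 + x + 1

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def pw(a, e):                   # a^e in GF(16); field addition is XOR
    return 1 if e == 0 else (0 if a == 0 else EXP[(LOG[a] * e) % (Q - 1)])

def line_sum(a, b, alpha, beta):
    """Sum of X^a Y^b over L(T) = (T, alpha*T + beta): equals the T^(q-1)
    coefficient of the restriction, so it is 0 iff X^a Y^b restricts nicely."""
    acc = 0
    for t in range(Q):
        acc ^= mul(pw(t, a), pw(mul(alpha, t) ^ beta, b))
    return acc

G3 = [EXP[i] for i in range(0, Q - 1, 5)]   # order-3 subgroup of GF(16)^*
G15 = [EXP[i] for i in range(Q - 1)]        # t = 15: all of GF(16)^*
for alpha in G3:
    for beta in G15:
        s1, s2 = line_sum(15, 0, alpha, beta), line_sum(12, 3, alpha, beta)
        assert s1 != 0 and s2 != 0    # neither monomial restricts nicely alone,
        assert s1 ^ s2 == 0           # but the binomial X^15 + X^12 Y^3 does
print("cancellation verified on all", len(G3) * len(G15), "lines of L_{3,15}")

Since the values of the binomial sum to zero on each line of ℒ_{3,15}, every such line through a point is a repair group for that point, exactly as in Observation <ref>.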
Thus, a choice of s and t dividing q−1 produces a code by using ℒ_{s,t} in Construction <ref>. Each choice of s and t produces a different code, and by varying s and t we can vary the parameters of this code. This is the general framework for our construction, but we still must explore the dimension and the number of disjoint repair groups produced by different choices of s and t.

§.§.§ Dimension

Given some choice of s and t, we would like to understand the dimension of the space of polynomials ℱ_{s,t} which restrict nicely on all lines in ℒ_{s,t}. We will lower bound this dimension by building a linearly independent set S ⊆ ℱ_{s,t} consisting of monomials and binomials. In order to construct S and understand its size, we will need some more notation.

Let i < s and j < t be nonnegative integers. Define

E_{i,j} = { (m,n) ∈ [q]^2 : m ≡_s i, n ≡_t j, n ≤_2 m }.

Further define e(s, t) to be

e(s,t) = |{ (i,j) : E_{i,j} ≠ ∅ }|,

the number of (i, j) with E_{i,j} nonempty, i.e., so that it is possible to find at least some m ≡_s i and n ≡_t j so that n ≤_2 m.

We will call two monomials P_1(X, Y) = X^{a_1}Y^{b_1} and P_2(X, Y) = X^{a_2}Y^{b_2} potentially canceling if a_1 ≡ a_2 (mod s) and a_1 + b_1 ≡ a_2 + b_2 (mod t). That is, P_1 and P_2 are potentially canceling if the second hypothesis of Corollary <ref> holds. However, we note that while Corollary <ref> applies only to binomials where neither term is good, Definition <ref> holds even if one or both of the monomials is good. This explains the reason for the name potentially canceling: two potentially canceling monomials will cancel if neither is good, but may not cancel if one or both are good.

This notion of potentially canceling gives us an equivalence relation on monomials. In particular, we can divide monomials into equivalence classes

M_{i,j} = { X^a Y^b : a ≡_s i, b + a ≡_t j },

so that any two monomials in M_{i,j} are potentially canceling. In order to get a handle on the binomials P_1 + P_2 which do cancel, we will first get a handle on the M_{i,j}, and how the good monomials are distributed between them. Note that the equivalence classes M_{i,j} divide up the q^2 total monomials into st classes, and it is easy to characterize how many monomials are in each class. However, it is not necessarily so easy to describe how the good monomials are distributed into these classes. Moreover, understanding this distribution will be critical to understanding the dimension of ℱ_{s,t}. Toward this end, we will explore the relationship between Definitions <ref> and <ref>.

The number of monomials in M_{i,j} which are not good is equal to |E_{i,j}|, except for M_{0,0}, where the number of monomials which are not good is |E_{0,0}| − 1.

Recall that a monomial X^a Y^b with a + b ≤ 2(q−1) is not good when q−1−a ≤_2 b. From the definition of ≤_2, this means that B(q−1−a) ⊆ B(b), or equivalently that B(a)^c ⊆ B(b), using the fact that B(q−1−a) is the complement B(a)^c of B(a). Thus, for every place in which the binary expansion of a has a 0, the binary expansion of b must have a 1. We can then think of dividing B(b) into two subsets: B(b) ∖ B(a) and B(b) ∩ B(a). Since B(a)^c ⊆ B(b), we have B(a)^c ⊆ B(b) ∖ B(a). Moreover, B(a)^c = [ℓ] ∖ B(a) ⊇ B(b) ∖ B(a), and so B(q−1−a) = B(a)^c = B(b) ∖ B(a). That is, the value whose binary expansion is given by B(b) ∖ B(a) is precisely q−1−a. Let c be the value so that B(c) = B(b) ∩ B(a). Then B(b) is the disjoint union of B(c) and B(b) ∖ B(a) = B(q−1−a). In other words, b = (q−1−a) + c, and so b ≡ c − a (mod q−1). Because b + a ≡ j (mod t) and t | (q−1), this implies that c ≡ j (mod t). Moreover, c ≤_2 a.
In fact, any choice of c ≤_2 a and c ≡ j (mod t) would yield a unique b so that X^a Y^b ∈ M_{i,j} and X^a Y^b is not good. But E_{i,j} exactly counts how many pairs of a, c there are which meet these requirements! This implies that the number of monomials in M_{i,j} which are not good (or else which are our special case of P_*(X,Y) = X^{q−1}Y^{q−1} from Remark <ref>) is equal to |E_{i,j}|. To deal with this special case, recall that X^{q−1}Y^{q−1} is good even though it does not fit nicely into our characterization; thus the logic above counts it as “not good." Because s and t both divide q−1, this monomial is in M_{0,0}. Thus, the number of monomials in M_{0,0} which are not good is |E_{0,0}| − 1.

Armed with our new notation and the above lemma, we will proceed to state and prove a general bound on the dimension of ℱ_{s,t}.

Let ℱ_{s,t} be the partial lift of ℱ_0 with respect to ℒ_{s,t}, as in Construction <ref>; that is, ℱ_{s,t} is the set of polynomials in 𝔽_q[X,Y] that restrict nicely to all lines L in ℒ_{s,t}. Then the dimension of ℱ_{s,t} is at least q^2 − e(s, t), where e(s,t) is as in Definition <ref>.

We exhibit a large linearly independent set of polynomials (both monomials and binomials) contained in ℱ_{s,t}. Let us begin building up our linearly independent set by adding in all the good monomials. These restrict nicely on every line, so they certainly restrict nicely on ℒ_{s,t}. Moreover, they are all linearly independent. For convenience, let us call the number of good monomials g. (The actual value was calculated in Corollary <ref>, but will not matter here.)

Now, we would like to count how much more we get by adding in binomials of the form (<ref>). We can find a linearly independent set of binomials by considering each M_{i,j} separately. Within each equivalence class, we can select a single representative monomial, and combine it with every other monomial in that class to create our set of binomials. However, we only want to combine two monomials when neither is already good. A binomial formed by adding two good monomials will restrict nicely, but is not linearly independent from the set of good monomials which we have already included. A binomial formed by adding one good monomial with one which is not good will not actually restrict nicely at all. Thus, for every pair (i, j), we will look at the subset of M_{i,j} which is not good, pick a representative monomial, and combine it with every other monomial. In fact, we know that there are |E_{i,j}| not-good monomials in M_{i,j} (putting aside i = j = 0 for now). Thus whenever |E_{i,j}| ≠ 0 we can form |E_{i,j}| − 1 binomials which restrict nicely, and which are linearly independent both with each other and with the good monomials. The only exception here is that when i = j = 0, we must check if |E_{0,0}| > 1, and if so we can form |E_{0,0}| − 2 binomials.

Summing over all these equivalence classes, the above reasoning shows that there is a set A of binomials of the form (<ref>) of size at least

∑_{|E_{i,j}| ≠ 0} ( |E_{i,j}| − 1 ) − 1 = ( ∑_{i,j} |E_{i,j}| ) − 1 − ∑_{|E_{i,j}| ≠ 0} 1,

so that the elements of A restrict nicely to all lines in ℒ_{s,t}, are linearly independent, and moreover are linearly independent from the g good monomials already accounted for. Above, we have subtracted 1 to account for the case where i = j = 0. (If that set is completely empty, this correction is not necessary, but as we are only seeking a lower bound we can simply subtract 1 anyhow.) But we note that (∑_{i,j} |E_{i,j}|) − 1 is exactly the number of not-good monomials across all equivalence classes!
In particular, the number of not good monomials is q^2 − g, since there are q^2 monomials total. Moreover, the count of non-empty classes E_{i,j} is exactly e(s, t). Then we can see that, in fact, our expression for the number of binomials simplifies to q^2 − g − e(s, t).

Now consider the set S ⊆ 𝔽_q[X,Y] consisting of the polynomials in A along with the good monomials. The reasoning above shows that

|S| = g + |A| ≥ g + (q^2 − g − e(s,t)) = q^2 − e(s,t).

Further, since the elements of S are all linearly independent and S ⊆ ℱ_{s,t}, we have established that dim(ℱ_{s,t}) ≥ q^2 − e(s,t), as desired.

We now have an expression lower bounding the dimension of our code, but our expression depends on e(s, t). We would like to know that e(s, t) is not too big. It is easy to see that e(s, t) ≤ st, because there are only st choices for (i, j). Moreover, we know that e(s, t) ≤ q^2 − g = 3^ℓ − 1, the total number of not-good monomials. As we will see in Section <ref>, this first bound e(s,t) ≤ st is nontrivial, and can in fact recover the result of N − K = s√(N) of <cit.>. However, the point of all this work is that in fact we will be able to choose s and t so that we can get a much tighter bound on e(s, t), as we will explore in the next section. Using Observation <ref>, this will result in constructions of high-rate codes with the s-DRGP, and will prove Theorem <ref>.

§ INSTANTIATIONS

In this section, we examine a few specific choices of s and t. We will focus (in Section <ref>) on the case where t = q−1 and s is a proper divisor of q−1. This will immediately allow us to obtain codes with the s-DRGP and with codimension N − K ≤ s√(N), recovering the result of <cit.>. However, in this setting we will also be able to get a much tighter bound on e(s,t): we will specialize to a particular choice of s in Section <ref>, and this will allow us to establish the codes claimed in Theorem <ref>. Finally, in Section <ref>, we discuss other possible instantiations of our framework of Section <ref>. While our main quantitative results come from the first setting of Section <ref>, the point of Section <ref> is more to make the conceptual point that our approach applies more generally. We hope that some of these techniques might find other applications.

§.§ The case where t = q − 1

One of the simplest choices we can make within our framework is to set t = q−1, while s | q−1 is any divisor. That is, we consider all simple lines L(T) = (T, α T + β) where β may vary over all of 𝔽_q^*, and where α ∈ G_s lives in a multiplicative subgroup of 𝔽_q^*. One reason that this choice is convenient is that it is easy to understand the number of disjoint repair groups.

Let (x, y) ∈ 𝔽_q^2 ∖ {(0,0)}. Then there exist at least s − 1 lines in ℒ_{s,q−1} which pass through (x, y). Further, no lines in ℒ_{s,q−1} pass through (0, 0).

Let (x,y) ∈ 𝔽_q^2 be nonzero. Then for each α ∈ G_s, there is some β = y − α x so that L_{α,β}(x) = (x,y), where L_{α,β}(T) = (T, α T + β). All of these parameterizations L_{α,β} are non-equivalent, and at most one out of s lines has β = 0. Thus, the remaining s − 1 such lines have L_{α,β}(x) = (x,y), and L_{α,β} ∈ ℒ_{s,q−1}. Finally, because any simple line L_{α,β} passing through the origin must have β = 0, there are no such lines in ℒ_{s,q−1}.

Observation <ref> immediately implies that the code ℱ_{s,q−1} has the (s−1)-DRGP. It remains only to bound its dimension.
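Before bounding the dimension in general, we note that for small parameters e(s, t) can simply be computed exactly from its definition. A minimal Python sketch (ours, not from the paper):

def e(s, t, q):
    """e(s, t): number of classes (i, j) hit by some (m, n) in [q]^2
    with m = i (mod s), n = j (mod t), and n <=_2 m."""
    return len({(m % s, n % t)
                for m in range(q) for n in range(q) if n & ~m == 0})

for ell in (4, 6, 8):
    q = 1 << ell
    s = (1 << (ell // 2)) - 1       # s = sqrt(q) - 1, t = q - 1
    print(f"q = {q:4}  e(s, q-1) = {e(s, q - 1, q):6}  "
          f"trivial bound s(q-1) = {s * (q - 1):7}  dim >= {q * q - e(s, q - 1, q)}")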
Theorem <ref>, along with the observation of the previous section that e(s,q−1) ≤ s(q−1) trivially, immediately implies DRGP codes that match the results of <cit.>:

Let q = 2^ℓ and let s | q−1. Then there is a linear code 𝒞 over 𝔽_q of length N = q^2 − 1 with dimension K ≥ N + 1 − s(√(N+1) − 1) = N − O(s√(N)) so that 𝒞 has the (s−1)-DRGP.

We consider the code

𝒞 = { ⟨ P(x,y) ⟩_{(x,y) ∈ 𝔽_q^2 ∖ {(0,0)}} : P ∈ ℱ_{s,q−1} }.

The fact that 𝒞 has the (s−1)-DRGP follows from the same reasoning as Observation <ref>, combined with the fact from Lemma <ref> that every point in 𝔽_q^2 ∖ {(0,0)} lies on at least s−1 lines in ℒ_{s,q−1}, and that these lines are not equivalent. The only point of concern is the fact that the origin is not included in the evaluation points; what if one of the lines (which is needed for the repair groups) passes through the origin? However, Lemma <ref> assures us that none of the lines in ℒ_{s,q−1} pass through the origin, so the argument about repair groups still holds. Finally, the claim about the dimension follows from Theorem <ref> using N = q^2 − 1 and e(s,q−1) ≤ s(q−1).

Thus, by choosing t = q−1, we can recover the quantitative results of <cit.>. However, as we will see in the next section, by choosing s carefully we can actually get a tighter bound on e(s,t), and this will allow us to go beyond Corollary <ref>.

§.§ Choosing s: s = √(q) − 1, t = q − 1

In this section, we continue to make the choice t = q−1, and we will choose s carefully in order to prove our main quantitative result of Theorem <ref>. For the reader's convenience, we duplicate the theorem below.

Suppose that q = 2^ℓ for even ℓ, and let N = q^2 − 1. There is a linear code 𝒞 over 𝔽_q of length N and dimension K ≥ N − O(N^{0.714}) which has the s-DRGP for s = √(q) − 2 = (N+1)^{1/4} − 2.

Above in the proof of Corollary <ref>, we were able to give a general bound for any s | q−1. However, our analysis, which used the bound e(s,t) ≤ st, was quite loose. In this section, we analyze the particular case when s = √(q) − 1, which will allow us to achieve the better bound of Theorem <ref>. Given the proof of Corollary <ref>, Theorem <ref> follows immediately from the following bound on e(√(q) − 1, q − 1).

Let q = 2^ℓ be an even power of 2. Then

e(√(q) − 1, q − 1) = O( (5 + √(5))^{ℓ/2} ).

Before we prove Theorem <ref>, we briefly comment on why it suffices to prove Theorem <ref>. As in the proof of Corollary <ref>, we choose 𝒞 to be the code corresponding to ℱ_{√(q)−1, q−1}, with the origin punctured. As before, 𝒞 immediately has the (√(q) − 2)-DRGP. Moreover, the dimension of 𝒞 is at least

K ≥ q^2 − e(√(q)−1, q−1) ≥ 2^{2ℓ} − O( (5 + √(5))^{ℓ/2} ) = 2^{2ℓ} − O( 2^{2ℓ · log_2(5+√(5))/4} ) ≥ (N+1) − O( (N+1)^{log_2(5+√(5))/4} ) ≥ N − O(N^{0.714}),

where the last line follows from the computation log_2(5 + √(5))/4 ≈ 0.7138. This establishes Theorem <ref>, modulo the proof of Theorem <ref>.

Let s = √(q) − 1. Let us recall what e(s, q−1) is counting:

e(√(q)−1, q−1) = |{ (i,j) ∈ [√(q)−1] × [q−1] : ∃ (a, c) ∈ [q]^2 with a ≡_s i, c ≡_{q−1} j, c ≤_2 a }|.

That is, we are counting the number of pairs (i,j) with i < √(q)−1 and j < q−1, so that there exists some a equivalent to i mod s and some c equivalent to j mod q−1 with c ≤_2 a. However, as c ∈ {0, …, q−1} and j < q−1, the condition requires c = j. Thus, somewhat more concisely we are counting

e(√(q)−1, q−1) = |{ (i,c) ∈ [√(q)−1] × [q−1] : ∃ a with a ≡_s i, c ≤_2 a }|.

For i < √(q)−1, c < q−1, we say that the pair (i,c) is a valid pair if there exists some a < q so that c ≤_2 a and so that a ≡_s i. We say that a is a witness for (i,c)'s validity.
That is, (i,c) is valid if it is counted in the right hand side of (<ref>). Thus, we wish to bound the number of valid pairs. However, as we will see, instead of counting the valid pairs (i,c) directly, it will be easier to transform them into valid triples, (y, c_0, c_1), and count those. We define a valid triple below in Definition <ref>; the idea is to break up the length-ℓ binary expansion of c into two parts, c_0 and c_1 of length ℓ/2. Additionally, it will be convenient to think of i < 2^{ℓ/2} − 1 as a number y ∈ [2^{ℓ/2}] (that is, y may take on the value 2^{ℓ/2} − 1). The argument will proceed by induction on the length of the binary expansion of these triples, so in the definition below we will let their length be an arbitrary integer r, rather than ℓ/2.

Let r > 0 be an integer. A triple (y, c_0, c_1) ∈ [2^r]^3 is a valid triple of length r if there exists some a_0, a_1 ∈ [2^r] so that the following holds:
* a_0 ≥_2 c_0 and a_1 ≥_2 c_1; and
* a_0 + a_1 ≡ y (mod 2^r).
We say that such an (a_0, a_1) is a witness for (y, c_0, c_1)'s validity.

We illustrated Definition <ref> in Figure <ref>. As indicated above, the reason we care about valid triples (y, c_0, c_1) is that eventually for r = ℓ/2 they will allow us to count valid pairs (i,c). To see why this is plausible, suppose that (y, c_0, c_1) is a valid triple of length r = ℓ/2 with witness (a_0, a_1), and consider c = c_0 · 2^{ℓ/2} + c_1 and a = a_0 · 2^{ℓ/2} + a_1. Then c ≤_2 a, because the binary expansions of c and a are simply the concatenation of the binary expansions of c_0, c_1 and a_0, a_1 respectively. Moreover, we have

a ≡ a_0 + a_1 (mod 2^{ℓ/2} − 1), while y ≡ a_0 + a_1 (mod 2^{ℓ/2}).

This slight difference in the moduli means that we don't immediately have a tight connection between valid triples and valid pairs; but we will see in Claim <ref> below that in fact getting a handle on valid triples will be enough to count the valid pairs.

We will count the valid triples (y, c_0, c_1) using induction on r. To do this, we will divide up the triples into three classes, based on the witnesses (a_0, a_1). Define NoCarry(r) to be the set of valid triples (y, c_0, c_1) as above so that the following occurs: For every witness (a_0, a_1) to the validity of (y, c_0, c_1), we have a_0 + a_1 ≤ 2^r − 1. To see why we have called this class NoCarry, observe that this is the case when the addition a_0 + a_1 (as in the second requirement in Definition <ref>) does not have a carry to the r'th position when viewed as the addition of two r-bit binary numbers. In Figure <ref>, this is the case where d = 0 for all choices of a_0, a_1. Analogously, we define Carry(r) to be the set of valid triples (y, c_0, c_1) so that for every witness (a_0, a_1) to the validity of (y, c_0, c_1), a_0 + a_1 > 2^r − 1. Finally, we define Both(r) to be the set of valid triples (y, c_0, c_1) so that there exist witnesses (a_0, a_1) for (y, c_0, c_1) with a_0 + a_1 > 2^r − 1, and there also exist witnesses with a_0 + a_1 ≤ 2^r − 1.

With these definitions in place, we can make the following claim. The following identities hold:
* |NoCarry(r+1)| = 3 · |NoCarry(r)|.
* |Carry(r+1)| = 3 · |NoCarry(r)| + 6 · |Carry(r)| + 4 · |Both(r)|.
* |Both(r+1)| = |NoCarry(r)| + |Carry(r)| + 4 · |Both(r)|.

Before we prove Claim <ref>, we show why it is sufficient to prove Theorem <ref>. This will explain why we chose to study triples rather than pairs.
First, we observe that Claim <ref> allows us to count |NoCarry(ℓ/2)|, |Carry(ℓ/2)|, and |Both(ℓ/2)|. Indeed, when r = 0, we have trivially that |NoCarry(0)| = 1, and the other two classes are empty, and so by induction we have

[ |NoCarry(ℓ/2)| ; |Carry(ℓ/2)| ; |Both(ℓ/2)| ] = [ 3 0 0 ; 3 6 4 ; 1 1 4 ]^{ℓ/2} · [ 1 ; 0 ; 0 ].

In order to turn these counts into a bound on the number of valid pairs, we use the following claim.

The total number of valid pairs (i,c) is bounded by |NoCarry(ℓ/2)| + |Carry(ℓ/2)| + 2|Both(ℓ/2)|.

Let (y, c_0, c_1) be a valid triple of length ℓ/2. We say that (y, c_0, c_1) supports a pair (i, c) if c = c_0 · 2^{ℓ/2} + c_1 and there exists a witness (a_0, a_1) to the validity of (y, c_0, c_1) such that a_0 + a_1 ≡ i (mod 2^{ℓ/2} − 1). We will first show that every valid pair is supported by at least one valid triple. Then, we will show that each triple in NoCarry(ℓ/2) and Carry(ℓ/2) supports at most one pair, and each triple in Both(ℓ/2) supports at most two pairs. Together, these statements imply that there can be at most |NoCarry(ℓ/2)| + |Carry(ℓ/2)| + 2|Both(ℓ/2)| valid pairs.

Let (i, c) be a valid pair. Then there must exist some a ≡ i (mod 2^{ℓ/2} − 1) so that c ≤_2 a. Let c_1 = c mod 2^{ℓ/2} and let c_0 = (c − c_1)/2^{ℓ/2}. Then c = c_0 · 2^{ℓ/2} + c_1. Similarly, let a_1 = a mod 2^{ℓ/2} and a_0 = (a − a_1)/2^{ℓ/2}. Notice that the binary representations of c_0, c_1, a_0, and a_1 are all ℓ/2 bits long, and moreover a_0 ≥_2 c_0 and a_1 ≥_2 c_1. We can form a valid triple (y, c_0, c_1) where y = (a_0 + a_1) mod 2^{ℓ/2}, with (a_0, a_1) as a witness. Moreover, we know that a = a_0 · 2^{ℓ/2} + a_1 ≡ a_0 + a_1 (mod 2^{ℓ/2} − 1). But a ≡ i (mod 2^{ℓ/2} − 1), so a_0 + a_1 ≡ i (mod 2^{ℓ/2} − 1). Thus we have constructed a valid triple which supports (i, c).

Now, we would like to show that a triple (y, c_0, c_1) ∈ NoCarry(ℓ/2) can support at most one pair. Let (i, c) and (i', c') be two pairs supported by (y, c_0, c_1); we will prove that these two pairs must in fact be equal. We immediately know that c = c_0 · 2^{ℓ/2} + c_1 = c', but we still must show i = i'. Let (a_0, a_1) be a witness to (y, c_0, c_1) so that a_0 + a_1 ≡ i (mod 2^{ℓ/2} − 1), and let (a_0', a_1') be a witness so that a_0' + a_1' ≡ i' (mod 2^{ℓ/2} − 1). Both such witnesses must exist because (y, c_0, c_1) supports (i, c) and (i', c'). Moreover, a_0 + a_1 ≡ y ≡ a_0' + a_1' (mod 2^{ℓ/2}), and because (y, c_0, c_1) ∈ NoCarry(ℓ/2), we know that a_0 + a_1 ≤ 2^{ℓ/2} − 1 and a_0' + a_1' ≤ 2^{ℓ/2} − 1. Then a_0 + a_1 = a_0' + a_1', so i = i', as desired.

Similarly, let (y, c_0, c_1) ∈ Carry(ℓ/2), and let (i, c), (i', c'), a_0, a_1, and a_0', a_1' be as before. Once again, we know that c = c' but must show that i = i'. As a_0, a_1, a_0', a_1' < 2^{ℓ/2} and (y, c_0, c_1) ∈ Carry(ℓ/2), we have 2^{ℓ/2} − 1 < a_0 + a_1 < 2^{ℓ/2+1} and similarly for a_0' + a_1'. Then a_0 + a_1 ≡ y ≡ a_0' + a_1' (mod 2^{ℓ/2}) implies that a_0 + a_1 = a_0' + a_1', so indeed i = i'.

Finally, we would like to show that a triple (y, c_0, c_1) ∈ Both(ℓ/2) can support at most two pairs. Now, given a witness (a_0, a_1) to the validity of (y, c_0, c_1), it is possible that a_0 + a_1 > 2^{ℓ/2} − 1 and also possible that a_0 + a_1 ≤ 2^{ℓ/2} − 1. However, once we know which case a particular witness falls into, we know from above that the supported pair associated with that witness is fully determined. Because there are only two cases, a triple in Both(ℓ/2) can support at most two pairs.

We have shown that at most |NoCarry(ℓ/2)| + |Carry(ℓ/2)| + 2|Both(ℓ/2)| valid pairs are supported by valid triples, and also that every valid pair must be supported by a valid triple. Thus there can be at most |NoCarry(ℓ/2)| + |Carry(ℓ/2)| + 2|Both(ℓ/2)| valid pairs.
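Both Claim <ref> and the weighted count |NoCarry| + |Carry| + 2|Both| are easy to check numerically for small r. The following Python sketch (ours, not from the paper) classifies every valid triple by exhaustive search over witnesses, compares the counts with powers of the 3×3 matrix displayed above, and reports the growth of the weighted count against its top eigenvalue:

import numpy as np

def counts(r):
    """Brute-force [|NoCarry(r)|, |Carry(r)|, |Both(r)|]."""
    R, out = 1 << r, [0, 0, 0]
    for y in range(R):
        for c0 in range(R):
            for c1 in range(R):
                sums = [a0 + a1 for a0 in range(R) for a1 in range(R)
                        if c0 & ~a0 == 0 and c1 & ~a1 == 0
                        and (a0 + a1) % R == y]
                if sums:    # otherwise (y, c0, c1) is not a valid triple
                    carry, nocarry = max(sums) > R - 1, min(sums) <= R - 1
                    out[2 if carry and nocarry else (1 if carry else 0)] += 1
    return out

M = np.array([[3, 0, 0], [3, 6, 4], [1, 1, 4]])
for r in (1, 2, 3):
    assert counts(r) == list(np.linalg.matrix_power(M, r) @ [1, 0, 0])

# The weighted count [1 1 2] M^(l/2) [1 0 0]^T grows like the top eigenvalue
# (5 + sqrt(5))^(l/2), i.e. like N^0.7138 for N = q^2 - 1 with q = 2^l.
for half in (5, 10, 20):
    bound = np.array([1, 1, 2]) @ np.linalg.matrix_power(M, half) @ np.array([1, 0, 0])
    print(half, bound / (5 + 5 ** 0.5) ** half)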
Finally, Claim <ref>, along with (<ref>), implies that the total number of valid pairs is bounded by

[ 1 1 2 ] · [ 3 0 0 ; 3 6 4 ; 1 1 4 ]^{ℓ/2} · [ 1 ; 0 ; 0 ] =: [ 1 1 2 ] · M^{ℓ/2} · [ 1 ; 0 ; 0 ].

The matrix M is diagonalizable and so we can compute P and Λ so that M = P Λ P^{−1}, which yields M^{ℓ/2} = P Λ^{ℓ/2} P^{−1}. We compute

M^{ℓ/2} = [ −1 0 0 ; 1 1+√(5) 1−√(5) ; 0 1 1 ] · [ 3^{ℓ/2} 0 0 ; 0 (5+√(5))^{ℓ/2} 0 ; 0 0 (5−√(5))^{ℓ/2} ] · [ −1 0 0 ; 1/(2√(5)) 1/(2√(5)) (5−√(5))/10 ; −1/(2√(5)) −1/(2√(5)) (5+√(5))/10 ].

This immediately bounds the number of valid pairs by O( (5 + √(5))^{ℓ/2} ), since that is the largest eigenvalue of M in the decomposition above. By the discussion above, this shows that

e(√(q) − 1, q − 1) = O( (5 + √(5))^{ℓ/2} ),

as desired. This establishes Theorem <ref>, except for the proof of Claim <ref>, which we prove below.

Consider a valid triple (y', c_0', c_1') of length r+1, with witness (a_0', a_1'). Let (y, c_0, c_1) be the triple of length r that is obtained by dropping the first (most significant) bit from each of y, c_0, c_1, and consider a_0, a_1 formed from a_0', a_1' in the same way. That is, to obtain x ∈ [2^r] from x' ∈ [2^{r+1}], we let x = x' mod 2^r. We observe that (y, c_0, c_1) is in fact a valid triple of length r, and that (a_0, a_1) is a witness for this validity. Indeed, first notice that since we are just dropping bits, the conditions about 2-shadows are preserved. Next, we have

a_0' + a_1' ≡ y' (mod 2^{r+1}), which implies a_0' + a_1' ≡ y (mod 2^r), since 2^r | 2^{r+1}, which in turn implies a_0 + a_1 ≡ y (mod 2^r), using the definition of a_0, a_1, y.

With this connection in mind, we go the other way. Given a valid triple (y, c_0, c_1) of length r, what are the valid triples (y', c_0', c_1') of length r+1 that extend it?[Here we say that x' ∈ [2^{r+1}] extends x ∈ [2^r] if x' = b 2^r + x for some b ∈ {0,1}; that is, the binary expansion of x' is an extension of the binary expansion of x.] The reasoning above establishes that any witness (a_0', a_1') for (y', c_0', c_1') must extend a witness (a_0, a_1) of (y, c_0, c_1). We wish to understand the number and nature of the extensions of (y, c_0, c_1). Suppose that y' = d 2^r + y, c_0' = b_0 2^r + c_0, and c_1' = b_1 2^r + c_1, for d, b_0, b_1 ∈ {0,1}. We illustrate the set-up in Figure <ref>.

How (y', c_0', c_1') behaves depends on which of the classes (y, c_0, c_1) was in to begin with. To begin to analyze this, suppose that (y, c_0, c_1) ∈ NoCarry(r). Thus, the only witnesses a_0, a_1 for (y, c_0, c_1) must have a_0 + a_1 < 2^r. In Figure <ref>, the carry bit x is equal to 0. Now we ask: how many ways are there to extend the triple (y, c_0, c_1), and what classes do these extensions live in? There are eight choices for this extension, corresponding to the choices of b_0, b_1, d. For each of these 8 choices, we ask whether or not the resulting extension is a valid triple of length r+1, and if so, does it live in NoCarry(r+1), Carry(r+1), or Both(r+1)? In Figure <ref>, this corresponds to asking whether or not we can find ways to fill in the boxes labeled ≥_2 b_0 and ≥_2 b_1 so that all the constraints in the figure are met. If the only way we can fill these in results in x' = 0, then this generates an element of NoCarry(r+1). If the only way to fill these in results in x' = 1, then this generates an element of Carry(r+1). If there are ways to result in both x' = 0 and x' = 1, then this generates an element of Both(r+1). And if there is no way to fill in the boxes, then no new valid triple is generated. We count these below.

With the machinery above, this is a straightforward but tedious exercise. We work out the logic for the case that (y, c_0, c_1) ∈ NoCarry(r) as an example. The results of all of these calculations are tabulated in Table <ref>.
Suppose that (y, c_0, c_1) ∈ NoCarry(r), and we try to extend it to (y', c_0', c_1'). From the definition of NoCarry(r), the only witnesses a_0, a_1 for (y, c_0, c_1) must have a_0 + a_1 < 2^r. We consider each of the eight possibilities in turn below. First, suppose that d = 0; we consider the four possibilities for b_0, b_1.

b_0 = b_1 = 0. In this case, there are two ways to choose extensions to the witness a_0, a_1 that will result in a witness a_0', a_1' for (y', c_0', c_1'). We may either choose a_0' = 2^r + a_0 and a_1' = 2^r + a_1, or we may choose a_0' = a_0 and a_1' = a_1. Thus, there is a witness for (y', c_0', c_1') so that a_0' + a_1' ≥ 2^{r+1} and there is a witness so that a_0' + a_1' < 2^{r+1}. Thus, for this choice of d, b_0, b_1, we have (y', c_0', c_1') ∈ Both(r+1).

b_0 = 1, b_1 = 0. In this case, we must choose a_0' = 2^r + a_0 in order to satisfy a_0' ≥_2 c_0'. We also must choose a_1' = 2^r + a_1 so that a_0' + a_1' = (2^r + a_0) + (2^r + a_1) = 2^{r+1} + (a_0 + a_1) has a zero in the r'th bit (to match y'). Here, we are using that a_1 + a_0 < 2^r. Thus, there is only one choice for a witness a_0', a_1', and we have a_0' + a_1' ≥ 2^{r+1}. Thus, for this choice of d, b_0, b_1, we have (y', c_0', c_1') ∈ Carry(r+1).

b_0 = 0, b_1 = 1. This is similar to the previous case, and results in a different (y', c_0', c_1') ∈ Carry(r+1).

b_0 = b_1 = 1. The requirement that a_0' ≥_2 c_0' and a_1' ≥_2 c_1' implies that we must choose a_0' = 2^r + a_0 and a_1' = 2^r + a_1. Thus, this results in yet another (y', c_0', c_1') ∈ Carry(r+1).

Next, suppose that d = 1; again consider four possibilities for b_0, b_1.

b_0 = b_1 = 0. In this case, there are again two ways to choose extensions to the witness a_0, a_1 that will result in a witness a_0', a_1' for (y', c_0', c_1'). We may either choose a_0' = 2^r + a_0 and a_1' = a_1, or we may choose a_0' = a_0 and a_1' = 2^r + a_1. In both cases, we have a_0' + a_1' < 2^{r+1}, and so (y', c_0', c_1') ∈ NoCarry(r+1).

b_0 = 1, b_1 = 0. In this case, we must choose a_0' = 2^r + a_0 in order to satisfy a_0' ≥_2 c_0'. We also must choose a_1' = a_1 so that a_0' + a_1' = (2^r + a_0) + a_1 = 2^r + (a_0 + a_1) has a one in the r'th bit (to match y'). As above, we are using that a_1 + a_0 < 2^r. Thus, there is only one choice for a witness a_0', a_1', and we have a_0' + a_1' < 2^{r+1}. Thus, for this choice of d, b_0, b_1, we have (y', c_0', c_1') ∈ NoCarry(r+1).

b_0 = 0, b_1 = 1. This is similar to the previous case, and results in a different (y', c_0', c_1') ∈ NoCarry(r+1).

b_0 = b_1 = 1. The requirement that a_0' ≥_2 c_0' and a_1' ≥_2 c_1' implies that we must choose a_0' = 2^r + a_0 and a_1' = 2^r + a_1. In this case, we have reached a contradiction, because a_1' + a_0' = 2^{r+1} + (a_0 + a_1) has a zero in the r'th position, while y' = d 2^r + y was supposed to have a 1. Thus, this case does not contribute to any of the sets.

We may continue in this way for the case that (y, c_0, c_1) ∈ Carry(r) or Both(r). We omit the details and report the results in Table <ref>.
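The omitted cases can be verified mechanically. The Python sketch below (ours, not from the paper) recomputes the class of every triple and of all of its extensions directly from the definition of a valid triple, and checks that each NoCarry, Carry, and Both triple contributes extensions according to (3, 3, 1), (0, 6, 1), and (0, 4, 4) respectively, which are the columns of the matrix M used above:

def cls(y, c0, c1, r):
    """Class of the triple (y, c0, c1) of length r, or None if not valid."""
    R = 1 << r
    sums = [a0 + a1 for a0 in range(R) for a1 in range(R)
            if c0 & ~a0 == 0 and c1 & ~a1 == 0 and (a0 + a1) % R == y]
    if not sums:
        return None
    carry, nocarry = max(sums) > R - 1, min(sums) <= R - 1
    return "Both" if carry and nocarry else ("Carry" if carry else "NoCarry")

columns = {"NoCarry": (3, 3, 1), "Carry": (0, 6, 1), "Both": (0, 4, 4)}
r = 2
R = 1 << r
for y in range(R):
    for c0 in range(R):
        for c1 in range(R):
            base = cls(y, c0, c1, r)
            if base is None:
                continue
            tally = {"NoCarry": 0, "Carry": 0, "Both": 0}
            for d in (0, 1):            # extensions y' = d*2^r + y, etc.
                for b0 in (0, 1):
                    for b1 in (0, 1):
                        ext = cls(d * R + y, b0 * R + c0, b1 * R + c1, r + 1)
                        if ext is not None:
                            tally[ext] += 1
            assert (tally["NoCarry"], tally["Carry"], tally["Both"]) == columns[base]
print("every extension tally matches the corresponding column of M (r = 2)")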
Finally, we add all the cases up.We see that the only contributions to (r+1) come from (r), and there are three of them, so |(r + 1)| = 3 ·|(r)|.There are many ways to obtain contributions to (r+1): three from (r), six from (r), and four from (r).Thus,|(r+1)| = 3 · |(r)| + 6 · |(r)| + 4 · |(r)|.Finally, for contributions to (r+1), we have one from (r), one from (r), and four from (r).Thus,|(r+1)| = |(r)| + |(r)| + 4·|(r)|.This completes the proof of Claim <ref>.The completion of the proof of Claim <ref> establishes Theorem <ref>.§.§ Other possible settings of s and tAbove, we have analyzed the case where s = √(q)-1 and t = q-1.There are several other settings which appear to work just as well.For example, we may take s = √(q) + 1 and t = q-1.Empirically this setting seems to work well, and we suspect that the analysis would follow similarly to the analysis in the previous section.Another option is to consider s = q-1 and t = √(q)-1.That is, we consider lines of the formℒ = { (T, α T + β) : α∈𝔽_q^*, β∈ G_t }.It is not hard to see that every point (other than those with x=0) has t lines in ℒ passing through it, and moreover no lines in ℒ pass through (0,y) for any y.Thus, Observation <ref>, along with Theorem <ref>, implies that the resulting partially lifted code (punctured along the y-axis) has the t-DRGP with dimension at least N - O(e(q-1,t)). It seems as though an analysis similar to that of Section <ref> applies here, and again gives codes with a similar result as in Theorem <ref>.A third option, whose exploration we leave as an open line of work, is taking neither s nor t to be q-1.Empirically, it seems like taking s = t = (q-1)/3 may be a good choice when 3 | q-1.However, while Theorem <ref> applies, getting a handle on e(s,t) in this case is not straightforward.Moreover, it is not immediately clear how many lines pass through every point. We would need to puncture the code carefully to ensure that we maintain the number of disjoint repair groups to apply Observation <ref>.The framework of partially lifted codes is quite general, and we hope that future work will extend or improve upon our techniques to fully exploit this generality. § CONCLUSIONWe have studied the s-DRGP for intermediate values of s.As s grows, the study of the s-DRGP interpolates between the study of LRCs and LCCs, and our hope is that by understanding intermediate s, we will improve our understanding on either end of this spectrum.Using a new construction that we term a "partially lifted code," we showed how to obtain codes of length N with the s-DRGP for s = Θ(N^{1/4}), that have dimension K ≥ N - N^{0.714}.This is an improvement over previous results of N - N^{3/4} in this parameter regime.We stress that the main point of interest of this result is not the exponent 0.714, which we do not believe is tight for Question <ref>; rather, we think that our results are interesting because (a) they show that one can in fact beat N - O(s√(N)) for s = N^{1/4}≪√(N), and (b) they highlight the class of partially lifted codes, which we hope will be of independent interest. § ACKNOWLEDGEMENTSWe thank Alex Vardy and Eitan Yaakobi for helpful exchanges.
http://arxiv.org/abs/1704.08627v1
{ "authors": [ "S. Luna Frank-Fischer", "Venkatesan Guruswami", "Mary Wootters" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170427154839", "title": "Locality via Partially Lifted Codes" }
[email protected] Schulich Faculty of Chemistry, Technion-Israel Institute of Technology, Haifa 32000, Israel [email protected] Schulich Faculty of Chemistry, Technion-Israel Institute of Technology, Haifa 32000, Israel One of the causes of the high fidelity of copying in biological systems is kinetic discrimination. In this mechanism, larger dissipation and copying velocity result in improved copying accuracy. We consider a model of a polymerase which simultaneously copies a single-stranded RNA and opens a single- to double-stranded junction serving as an obstacle. The presence of the obstacle slows down the motor, resulting in a change of its fidelity, which can be used to gain information about the motor and junction dynamics. We find that the motor's fidelity does not depend on details of the motor-junction interaction, such as whether the interaction is passive or active. Analysis of the copying fidelity can still be used as a tool for investigating the junction kinetics. Kinetic discrimination of a polymerase in the presence of obstacles Saar Rahav December 30, 2023 ===================================================================§ INTRODUCTION Being alive means being out of thermal equilibrium. Our cells grow and divide via a host of nonequilibrium processes in which complex molecules are synthesized, transported and degraded. Many of these processes are carried out by biomolecules which act like motors or machines, and are driven by chemical potential differences. One of the most delicate and demanding tasks handled by such biological motors is the replication and distribution of genetic information. Polymerase-type enzymes, which generate copies of nucleic acid polymers, are a well-known class of information-handling molecular machines. Maintaining high fidelity of copying by these enzymes is often crucial, as a large number of copying errors may result in malfunctions and death. Since “information is physical”, as beautifully stated by Landauer <cit.>, the copying should be studied as a thermodynamic process. Close to thermal equilibrium, the accuracy of the copying process is determined by the difference in binding free energy between correct and incorrect monomers. This mechanism of error reduction has been termed energetic discrimination. But binding free energies of various nucleic acids have a limited range, as they all need to stay bound, yet not be too difficult to remove. This means that energetic discrimination schemes often show only moderate levels of accuracy, much lower than what is typically observed in biological systems. The natural conclusion is that accurate copying of information requires out-of-equilibrium processes, as pointed out by Hopfield <cit.> and Ninio <cit.>. When a system is driven away from equilibrium, copying fidelity can be enhanced at the cost of additional dissipation. A theory of the fidelity of transcription will inevitably investigate such processes from a thermodynamic perspective. Several recent papers were devoted to this question <cit.>. Sartori and Pigolotti <cit.> discussed the difference between the energetic discrimination scheme that was mentioned above and kinetic discrimination. The latter is controlled by the rates of incorporation of correct and incorrect monomers, or equivalently by activation energies, rather than by differences in binding free energies. They pointed out that typically one of these discrimination schemes dominates the other, so that the subdominant mechanism has little effect on the copying fidelity.
Kinetic discrimination is typically dominant when the process is far from equilibrium. The thermodynamics of polymerization processes were studied meticulously by Andrieux and Gaspard in a series of papers <cit.>. For copolymerization on a template with kinetic discrimination, their model showed a trade-off between accuracy and dissipation. Processes driven by larger thermodynamic affinities resulted in faster transcription rates, accompanied by lower error rates. Such results suggest a connection between the velocity of the polymerizing agent and its accuracy. However, the models studied so far in the context of transcription or copolymerization were allowed to propagate freely on their template. In biological systems, the template – a single strand of DNA/RNA – may be blocked by a second strand or a hairpin, which must be removed before copying can proceed. Typically these obstacles are handled by helicases or other specialized molecular machines. Nevertheless, there are known examples of polymerases, such as the T7 RNA polymerase <cit.> and HIV-1 reverse transcriptase <cit.>, that remove obstacles on their own, simultaneously transcribing and opening the double-stranded structure. When reaching an obstacle these polymerases must slow down. How does such an interaction with an obstacle affect the fidelity of transcription? Naively, one would expect that the presence of an obstacle would slow down such a motor and, accordingly, lead to more copying errors. Betterton and Jülicher studied a simple model of a helicase encountering such an obstacle <cit.>. They qualitatively characterized the interaction between motor and obstacle as being either active or passive. In the passive interaction, the motor must wait until the nearest bond in the double-stranded obstacle opens due to a thermal fluctuation, thereby allowing the motor to step forward and prevent the bond from closing. In the active interaction the motor partially enters the obstacle, and the elastic interaction between the motor and the single- to double-strand junction increases the likelihood of bond opening. Active interactions were found to result in higher rates of bond breaking and larger velocities. In this paper we study how the fidelity of a polymerase is affected when it encounters an obstacle. A simple model which describes both polymerization on a template and interaction with a junction is developed. It combines elements from the models presented by Andrieux and Gaspard <cit.> and by Betterton and Jülicher <cit.>. The model is studied numerically with the help of simulations, and analytically, using a steady-growth ansatz. In particular, differences between the copying fidelity of active and passive interactions are investigated. The structure of the paper is as follows: in section <ref> we present a simple model of a polymerase working against an obstacle. We discuss the possible processes and their transition rates. In section <ref> we present the master equation that describes the dynamics of the model. We also present a steady-growth ansatz that allows us to obtain analytical expressions for observables such as the polymerase mean velocity and copying fidelity. In section <ref> we investigate the case of passive interaction between the polymerase and the obstacle, while section <ref> is dedicated to systems with active interactions.
We find that while the form of the elastic interaction affects the mean copying rate, it has no effect on the copying fidelity. We discuss the implications of this result in section <ref>.§ A MARKOVIAN MODEL OF A POLYMERASE PUSHING AGAINST AN OBSTACLEIn this section we present a simple Markovian model of a polymerase. The model is heuristically depicted in Fig. <ref>. The system is composed of a substrate nucleic acid polymer (DNA or RNA) with a known sequence of monomers. The active site of the polymerase is located at the l-th base of the substrate. The polymerase progresses along the chain while adding monomers to the complementary strand, leaving behind a double-stranded structure. The obstacle we study is a junction with another double-stranded structure. These are known to occur as part of a secondary structure of the substrate, such as a hairpin. In the model, the junction is located at site (l+j). The discreteness of base pairs naturally calls for a model with discrete steps, which are characterized by transition rates.We consider four different processes:* Forward motion of the polymerase from site l to site l+1, accompanied by addition of a monomer m_l+1 to the complementary strand. The rate of this process is denoted by R^+_m_l+1. * Backward motion of the polymerase from site l to site l-1, while removing monomer m_l from the complementary strand. The rate is denoted by R^-_m_l. * Opening of a bond in the junction, with rate Q^+. In this process the junction moves forward along the template. * Closing of a bond in the junction, with rate Q^-, moving the junction backwards. An important distinction between the addition and removal of monomers is that the polymerase can always add any of the monomers in the solution, but can remove only the specific monomer which is at the terminal position on the complementary strand. The kinetic equations that we will use in the next section will take this into account.The polymerase and junction will interact with each other if they are close enough. Physically, this is an elastic interaction that is caused by deformation of the junction when the polymerase pushes into it. Such an interaction will partially destabilize the outer bond in the junction while pushing the polymerase backwards. We assume that the interaction changes the transition rates according toR^+_m(j) = r^+_m Θ^-(j), R^-_m(j) = r^-_m Θ^+(j), Q^+(j) = q^+ Θ^+(j) and Q^-(j) = q^- Θ^-(j). r^±_m and q^± are the rates for the non-interacting (or distant) polymerase and junction. Θ^±(j) describe the polymerase-junction interaction. A more detailed description of the dependence of these rates on system properties is given in the rest of this section.We wish to study the fidelity of copying and how it is affected by the interaction with the junction. Two qualitatively different mechanisms can be employed to improve the fidelity of copying. Energetic discrimination favors the correct monomer based on a lower free energy of binding. In contrast, kinetic discrimination originates from different kinetic rates for binding of monomers. As pointed out by Sartori and Pigolotti <cit.>, the two mechanisms compete with each other, since energetic discrimination works near equilibrium, while its kinetic counterpart reaches best fidelity far from equilibrium.Here we consider a model with pure kinetic discrimination, following a similar choice by Bennett <cit.> and by Andrieux and Gaspard <cit.>.We make several simplifying assumptions about the structure of the model.
These assumptions allow us to reduce the bookkeeping involved in defining the system's state, while keeping the main qualitative features of the dynamics. Specifically, we assume that the substrate strand is built out of two types of monomers and that the solution has the two complementary monomers in equal concentrations and in a spatially homogeneous mixture. With these assumptions, the transition rates only discriminate between addition of a correct or an incorrect monomer to the complementary strand, where the correctness is determined by comparing the monomer to its partner on the substrate. Crucially, there is no need to specify the composition of the substrate.The absence of energetic discrimination means thatr^+_m/r^-_m = [mNTP] r̂^+_m/([PPi] r̂^-_m) = ([mNTP]/[PPi]) ϵ,for any value of m, where ϵ = exp[-Δ G^0], Δ G^0 is the standard free energy of the polymerization reaction in units of k_B T, [mNTP] is the concentration of nucleotide m and [PPi] is the concentration of pyrophosphate, a byproduct of the polymerization reaction. r̂^±_m denotes the transition rates for addition and removal of monomers at standard concentrations. We assume that concentrations do not vary in time. This is a good approximation for many in vitro experiments. Kinetic discrimination can be expressed through higher rates of addition and removal of the correct monomer (c), compared to the wrong monomer (w), namelyr̂^+_c/r̂^+_w = r̂^-_c/r̂^-_w = d,where d parametrizes the preference for inserting correct monomers, and therefore the resulting fidelity of copying. In living cells d ≈ 10^4–10^6 <cit.>, but we will consider smaller values to avoid problems associated with insufficient sampling of errors in our simulations. Kinetic discrimination reduces errors because when the polymer grows rapidly, a larger proportion of the incorporated monomers is of the correct type. These monomers are left behind in the copied strand when the polymerase propagates. They are bound from both sides, and are unlikely to detach unless the motor performs multiple backward steps to return to their location in the chain. Based on these considerations, the transition rates for pure kinetic discrimination take the following form:r^-_w = r̂^-_w [PPi], r^-_c = r̂^-_c [PPi] = r̂^-_w d [PPi], r^+_w = r̂^+_w [wNTP] = r̂^-_w ϵ [wNTP], r^+_c = r̂^+_c [cNTP] = r̂^-_w d ϵ [cNTP]. The opening and closing of the bonds in the junction are governed by thermal fluctuations. The ratio of rates has been determined empirically to be q^-/q^+ ≈ 7 <cit.>. This expresses the fact that in equilibrium the two strands of the junction tend to bind to create a double-stranded structure. The motor must be driven against the junction to reverse this trend.Finally, the elastic interaction affects the rates through the factors Θ^±. Thermodynamic consistency mandates thatΘ^+(j)/Θ^-(j+1) = e^[U(j) - U(j+1)],where U(j) is the potential of the interaction at distance j in units of k_B T. We will assume that the influence of this elastic interaction affects both rates according toΘ^+(j) = e^(g-1)[U(j+1)-U(j)], Θ^-(j+1) = e^g[U(j+1)-U(j)].Here, g is a load-distribution-factor-like parameter. Note that the interaction affects both processes that close the distance between the motor and the junction in the same way, and the same is true for opening. It does not matter if the process involves motion of the motor accompanied by a polymerization reaction, or motion of the junction due to the formation of a new inter-strand bond.
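To make the jump process concrete, the following is a minimal Gillespie-type sketch (in Python) of the model defined by these rates, here for the passive hard-wall interaction discussed later in the paper. This is not the authors' code: the absolute rate scale rhat_wm, the concentrations and the initial distance are illustrative assumptions.

```python
# Minimal Gillespie-type sketch of the model above (not the authors' code).
# Passive hard-wall case: the motor cannot step into the junction
# (the transition j = 1 -> 0 is forbidden). Rate scales are assumptions.
import random

def simulate(steps=200_000, d=70.0, eps=1.0, C=2000.0, PPi=100.0,
             qp=1.0, qm=7.0, rhat_wm=1.0e-3, seed=1):
    rng = random.Random(seed)
    r_wm = rhat_wm * PPi          # r^-_w = rhat^-_w [PPi]
    r_cm = d * r_wm               # r^-_c = d rhat^-_w [PPi]
    r_wp = eps * rhat_wm * C      # r^+_w = rhat^-_w eps [wNTP]
    r_cp = d * r_wp               # r^+_c = d rhat^-_w eps [cNTP]
    j, chain, t = 5, ["c"], 0.0   # motor-junction distance, copied strand, time
    for _ in range(steps):
        th_minus = 0.0 if j <= 1 else 1.0   # passive: Theta^-(1) = 0, else 1
        rates = [
            ("add_c", r_cp * th_minus),                     # motor forward, j -> j-1
            ("add_w", r_wp * th_minus),
            ("rem",   r_cm if chain[-1] == "c" else r_wm),  # motor backward, j -> j+1
            ("open",  qp),                                  # junction opens, j -> j+1
            ("close", qm * th_minus),                       # junction closes, j -> j-1
        ]
        total = sum(r for _, r in rates)
        t += rng.expovariate(total)           # exponential waiting time
        x, acc, name = rng.random() * total, 0.0, rates[-1][0]
        for nm, r in rates:                   # pick a move proportionally to its rate
            acc += r
            if x <= acc:
                name = nm
                break
        if name in ("add_c", "add_w"):
            chain.append(name[-1]); j -= 1
        elif name == "rem":
            if len(chain) > 1:                # keep at least one seed monomer
                chain.pop()
            j += 1
        elif name == "open":
            j += 1
        else:
            j -= 1
    bulk = chain[100:]                        # drop an initial transient, as in the paper
    err = sum(m == "w" for m in bulk) / max(1, len(bulk))
    return len(chain) / t, err                # mean growth velocity, bulk error fraction
```

Under growth conditions (positive mean velocity), the measured error fraction can be compared with the steady-growth predictions derived in the next sections.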
The same form of interaction was also used by Betterton and Jülicher in their model of a helicase <cit.>.§ THE MASTER EQUATION AND THE STEADY-GROWTH ANSATZConsider a system in which the polymerase was placed on the substrate strand and then left to evolve under conditions of mean growth. The state of the system is characterized by: i) the position of the motor l on the substrate, which is assumed to be initially at l=0; l is therefore also the length of the complementary strand being polymerized; ii) the composition of the complementary strand compared to the substrate strand: m_1, m_2, ..., m_l, where m_k = c or w; and iii) the distance of the motor from the junction, j. Given the processes and rates described in Sec. <ref>, the probability distribution of the system evolves according to the master equationdP/dt(m_1...m_l, j, l, t) = R^+_m_l(j+1) P(m_1...m_l-1, l-1, j+1, t) + ∑_m_l+1 R^-_m_l+1(j-1) P(m_1...m_l+1, l+1, j-1, t) + Q^+(j-1) P(m_1...m_l, l, j-1, t) + Q^-(j+1) P(m_1...m_l, l, j+1, t) - [Q^+(j) + Q^-(j) + R^-_m_l(j) + ∑_m_l+1 R^+_m_l+1(j)] P(m_1...m_l, l, j, t). The system evolution can be studied in full detail by solving the master equation with an appropriately chosen initial condition, or alternatively by simulating the underlying jump process. However, both approaches are needlessly complicated if one is interested in simple quantities such as the mean error rate and velocity. A simpler approach for the description of these observables, which may even allow for an analytical solution, was developed by Andrieux and Gaspard <cit.>. The approach is based on the assumption that after a transient, the system reaches a steady-growth regime in which correlations between the length of the chain l, the composition of the chain and the distance j are lost. In this steady-growth regime the probability distribution can be approximated byP(m_1...m_l, l, j, t) ≃ P_t(l) μ(m_1...m_l) Φ(j),where P_t(l) is the probability of length l, μ(m_1...m_l) is the probability of a given sequence when the length is set, and Φ(j) is the probability of distance j. Only the length distribution is explicitly time-dependent.This ansatz cannot be exact, since the distribution μ(m_1...m_l) must include the memory of transient behavior in the distant past. Nevertheless, it is a useful approximation in the steady-growth regime, as long as one focuses on marginal distributions such as μ(m_l), μ(m_l-1,m_l), etc., describing the probability distribution of one or a few monomers near the tip of the growing strand. In the steady-growth regime one expects these distributions to be independent of time and of the precise value of l. Using similar physical intuition, one expects the probability Φ(j) to reach a time-independent steady state in the steady-growth regime.The equations describing these simpler marginal probabilities are derived by substituting the ansatz Eq. (<ref>) into the master equation and summing over all unwanted variables.Summation over the chain composition and over values of j gives an equation for the length distributiondP_t(l)/dt = ∑_m_l r^+_m_l ⟨Θ^-⟩ P_t(l-1) + ∑_m_l+1 r^-_m_l+1 ⟨Θ^+⟩ μ(m_l+1) P_t(l+1) - [∑_m_l+1 r^+_m_l+1 ⟨Θ^-⟩ + ∑_m_l r^-_m_l ⟨Θ^+⟩ μ(m_l)] P_t(l).Here ⟨Θ^±⟩ = ∑_j Θ^±(j) Φ(j), and μ(m_l) is the likelihood that the last monomer in the chain is m_l.We will see shortly that for our model there are no correlations in the composition of the chain, and so μ(m_l) can have the two values μ(w) and μ(c) = 1 - μ(w). μ(w) is also the probability of a copying error in the bulk of the copied strand.Inspection of Eq.
(<ref>) reveals that it includes processes in which the chain grows and shrinks. The mean growth velocity is the difference between the mean rate of polymerization and depolymerization, namelyv = ∑_m r^+_m ⟨Θ^-⟩ - ∑_m r^-_m μ(m) ⟨Θ^+⟩. Summation over the chain composition and the lengths leads to an equation for the distribution of distances between the motor and junction.A short calculation gives(∑_m r^+_m + q^-) Θ^-(j+1) Φ(j+1) + (⟨r^-⟩ + q^+) Θ^+(j-1) Φ(j-1) - [(∑_m r^+_m + q^-) Θ^-(j) + (⟨r^-⟩ + q^+) Θ^+(j)] Φ(j) = 0as the equation determining the distribution Φ(j). Here ⟨r^-⟩ = ∑_m r^-_m μ(m) is the mean rate of removal of the last monomer.To obtain equations for the composition of the monomers in the complementary strand, one sums over all values of l, j and the possible composition of the first k monomers m_1...m_k. This results in a hierarchy of equations for the probability distribution of the last few monomers. The first two equations in this hierarchy are given by r^+_m_l ⟨Θ^-⟩ + ∑_m_l+1 r^-_m_l+1 ⟨Θ^+⟩ μ(m_l m_l+1) - r^-_m_l ⟨Θ^+⟩ μ(m_l) - ∑_m_l+1 r^+_m_l+1 ⟨Θ^-⟩ μ(m_l) = 0, r^+_m_l ⟨Θ^-⟩ μ(m_l-1) + ∑_m_l+1 r^-_m_l+1 ⟨Θ^+⟩ μ(m_l-1 m_l m_l+1) - r^-_m_l ⟨Θ^+⟩ μ(m_l-1 m_l) - ∑_m_l+1 r^+_m_l+1 ⟨Θ^-⟩ μ(m_l-1 m_l) = 0. This hierarchy has a solution in which there are no correlations between consecutive monomers.One can show that under the assumption of correlations only between nearest neighbors, which allows closing the hierarchy using Eqs. (<ref>) and (<ref>), the conditional probability μ(m_l-1|m_l) tends to μ(m_l-1). More importantly, in Secs. <ref> and <ref> we compare the results of the ansatz to simulations which do not assume a lack of correlations. Excellent agreement is found. Both facts strongly suggest that the uncorrelated solution is stable to small perturbations.Since there are no correlations in the steady-growth regime, μ(m_l) also describes the probability to find a monomer anywhere on the chain, and μ(m_1 m_2...m_l) = μ(m_1) μ(m_2)...μ(m_l). We note in passing that models with rates leading to nearest-neighbor correlations were studied by Andrieux and Gaspard <cit.>. In the absence of correlations one obtains the following set of equations for the monomer probabilitiesr^+_m_l ⟨Θ^-⟩ + ∑_m_l+1 r^-_m_l+1 ⟨Θ^+⟩ μ(m_l) μ(m_l+1) - [r^-_m_l ⟨Θ^+⟩ + ∑_m_l+1 r^+_m_l+1 ⟨Θ^-⟩] μ(m_l) = 0.For the model studied here, the monomers in the complementary strand can either match the substrate, m = c, or be a copying error, m = w. By substituting μ(c) = 1 - μ(w) one can reduce Eq. (<ref>) to a single equation for the copying fidelity⟨Θ^+⟩ (r^-_w - r^-_c) μ^2(w) - [(r^-_w - r^-_c) ⟨Θ^+⟩ + (r^+_w + r^+_c) ⟨Θ^-⟩] μ(w) + r^+_w ⟨Θ^-⟩ = 0. Equations (<ref>), (<ref>) and (<ref>) are a set of coupled equations that characterize the properties of the polymerase in the steady-growth regime. Their simple form allows one to calculate quantities such as the motor's mean velocity and its fidelity analytically. The simple form of these equations ultimately emerges from the simple dependence of the transition rates on the chain composition and the motor-junction distance. In the next two sections we will solve these equations explicitly for two cases.
We will also compare their predictions to those of a stochastic simulation which does not assume a steady-growth regime.§ PASSIVE UNWINDINGFollowing Betterton and Jülicher <cit.> we qualitatively characterize the interaction between the polymerase and the junction as either passive or active. In the passive case, the interaction between motor and junction is that of a hard wall. As a result, the motor does not enter the junction, and equivalently, a transition that closes the junction on the motor is impossible.Active unwinding, which will be studied in the next section, allows the motor to enter the junction. As will be seen later, this modifies the transition rates in a way which can result in faster unwinding.The hard wall interaction means that Φ(j) = 0 for j ≤ 0. In addition, the transition from j=1 to j=0 is forbidden, so Θ^-(1) = 0. When the motor is away from the junction, its interaction with the wall can be neglected, and Θ^-(j) = 1 for j > 1, while Θ^+(j) = 1 for all values of j. This interaction is termed passive since unwinding happens when a bond in the junction opens due to a purely thermal fluctuation and the motor steps into the newly available space, thereby preventing the bond from closing. When this rectification process is more likely than its reversed process, the double-stranded DNA/RNA junction will be unwound on average. An inspection of Eq. (<ref>) shows that Φ(j) must in fact satisfy a detailed balance condition[r^+_w + r^+_c + q^-] Θ^-(j+1) Φ(j+1) = [q^+ + ⟨r^-⟩] Θ^+(j) Φ(j).The underlying reason for the appearance of this detailed balance condition is the one-dimensional structure of the states and transition topology in the variable j, which precludes non-trivial closed cycles of transitions.Substitution of Θ^±(j) = 1 for j > 1, and of Θ^+(1) = 1, Θ^-(1) = 0, leads toΦ(j+1) = ρ Φ(j), with ρ = (⟨r^-⟩ + q^+)/(r^+_w + r^+_c + q^-). We are interested in systems in which the driving force is sufficient for polymerization in the absence of a junction, while the junction tends to close in the absence of a polymerase. Under such conditions ρ < 1. This allows us to explicitly solve for Φ(j) as a function of the mean fidelity of copying. We find that Φ(j) = (1-ρ) ρ^{j-1} for j ≥ 1, and Φ(j) = 0 otherwise. This local-equilibrium distribution Φ(j) allows us to calculate the averages⟨Θ^+⟩ = ∑_j Θ^+(j) Φ(j) = 1, ⟨Θ^-⟩ = ∑_j Θ^-(j) Φ(j) = ∑_{j=2}^∞ Φ(j) = 1 - Φ(1) = ρ. Substituting Eqs. (<ref>) and (<ref>) in Eq. (<ref>) gives the following quadratic equation for the probability of making a copying error:μ^2(w) q^- (r^-_c - r^-_w) + μ(w) [q^+ (r^+_w + r^+_c) - q^- (r^-_c - r^-_w) + r^-_w r^+_c + r^-_c r^+_w] - r^+_w (r^-_c + q^+) = 0. Let us first examine the case of a fixed and immobile junction. In this case the polymerase will not be able to propagate at all, and kinetic discrimination is not possible. By substituting q^± = 0 into Eq. (<ref>) we find that the copying fidelity isμ(w) = r^+_w r^-_c/(r^+_c r^-_w + r^-_c r^+_w) = 1/2,which is the equilibrium error rate for our model due to the equal binding free energies of m = w, c (cf. Eq. <ref>).For the general case of q^-, q^+ ≠ 0 we obtainμ(w) = [-z + √(z^2 + 4 q^- (r^-_c - r^-_w) r^+_w (r^-_c + q^+))]/[2 q^- (r^-_c - r^-_w)],with z = q^+ (r^+_w + r^+_c) - q^- (r^-_c - r^-_w) + r^-_w r^+_c + r^-_c r^+_w. The motor's mean velocity can be calculated by substituting the solutions for μ(m) and Θ(j) in Eq. (<ref>).
A short calculation givesv = q^+ - q^- ρ = q^+ - q^- (⟨r^-⟩ + q^+)/(r^+_w + r^+_c + q^-).We see that the bond opening rate q^+ bounds the possible velocity of the polymerase. This bound is achieved in the limit of high concentrations, where r^+_w, r^+_c ≫ ⟨r^-⟩, q^±. In this limit, the motor is very likely to reside near the junction and almost immediately step forward once a bond in the junction has opened, making the bond opening in the junction the rate-limiting process.The analytical calculation leading to Eqs. (<ref>) and (<ref>) for the copying fidelity and velocity is based on the steady-growth assumption. To test the validity of this assumption, we compared the resulting prediction of the theory to a stochastic simulation of the system. The simulation employed the Gillespie algorithm to determine the next step <cit.>. The copying fidelity was calculated by counting the fraction of wrong monomers along the chain, while ignoring an initial transient of length 100 base pairs. To gather enough statistics we repeated each simulation 3000 times. Fig. <ref>(a) depicts results for the mean velocity of polymerization as a function of monomer concentration. In all our simulations [cNTP] = [wNTP] = [C]. The concentration of PPi was taken to be 100 μM. The kinetic discrimination parameter was chosen to be d = 70. This value is much smaller than what one expects to find in biological systems. On the other hand, it results in enough copying errors to allow comparison of simulation and theory on a reasonable time scale. Lines correspond to the prediction of Eqs. (<ref>) and (<ref>), while symbols are the simulation results. It is clear that the agreement is excellent for all positive velocities. Different curves correspond to different values of the kinetic coefficient of the junction q^+, where we always keep q^- = 7 q^+ <cit.>.The results clearly show that the polymerase's mean velocity increases with monomer concentration. For a freely propagating motor, the velocity is asymptotically linear in the concentrations, due to the linear dependence of r^+_w,c. This is no longer true when the junction is present, since the velocity approaches q^+ in this limit. For low concentrations of monomers, the mean velocity is negative and the complementary strand is degraded by the motor. This violates the assumptions made for the steady-growth regime, as the chain composition was mostly determined by the process used to prepare it, rather than by the polymerase dynamics. Andrieux and Gaspard <cit.> discussed the velocity in a depolymerization regime in detail, but such a discussion goes beyond the scope of the current work.The copying fidelity is depicted in Fig. <ref>(b). It increases with the concentration, as expected for a kinetic discrimination mechanism. The presence of the obstacle reduces the copying fidelity. At slow velocities, the copying error probabilities approach 1/2, as predicted. At large concentrations the error probability approaches the valueμ(w) ≃ (r^-_c + q^+)/(q^+(1 + d) + 2 r^-_c).It is easy to see that when q^+ ≫ r^-, the obstacle opens fast enough to allow almost free propagation of the motor, and the error rate goes to 1/(1+d), which is the error rate of a far-from-equilibrium freely propagating polymerase <cit.>. When q^+ ≪ r^-, the obstacle is essentially immobile, and the error rate approaches 1/2, as expected.§ ACTIVE UNWINDING Active unwinding occurs when the polymerase can push into the junction and drive the two strands apart.
The elastic interaction between motor and junction weakens the bond between the strands while also applying a force that pushes the polymerase away from the junction. The precise form of the interaction is not known. We will therefore choose an interaction which exhibits all the expected qualitative features but is easy to use in calculations.Following Betterton and Jülicher <cit.>, we consider a step-like interaction potential U(j). The potential is schematically depicted in Fig. <ref>. This potential vanishes when the polymerase is away from the junction (j > 1). When the polymerase enters the junction (j=0), the potential obtains the value U_0 > 0, expressing a repulsive interaction between motor and junction. The polymerase can push itself further into the junction, where in every step the potential increases by an additional U_0. We assume that the polymerase can at most penetrate a finite number of steps into the junction. This is expressed by placing a hard wall interaction at j=-n.This potential enters the kinetic equation through the factors Θ^±(j) given in Eq. (<ref>). For the step potential, these factors obtain a simple formΘ^+(j) = Y^g for j < 1, and 1 for j ≥ 1,andΘ^-(j) = 0 for j ≤ -n, Y^{g-1} for -n < j ≤ 1, and 1 for j > 1,where Y ≡ exp(U_0).The calculation that allowed for an explicit solution of Eq. (<ref>) for the passive case can also be applied for active interactions with this staircase potential. The solution of Eq. (<ref>) follows from the detailed balance condition (<ref>), but the different form of Θ^± results in a somewhat different recursion relation for the probability distribution Φ(j). For j ≥ 1, one still has Φ(j+1) = ρ Φ(j) as in Sec. <ref>, but now for -n < j < 1 we have Φ(j+1) = Φ(j) ρ Y, and Φ(j) = 0 for j ≤ -n. A straightforward, but somewhat tedious, calculation givesΦ(1) = (1-ρ)(1-Yρ)/[(Yρ)^{-n}(1-ρ) + ρ(1-Y)], Φ(j) = Φ(1) ρ^{j-1} for j > 1, Φ(j) = Φ(1) (ρY)^{j-1} for -n < j < 1, and Φ(j) = 0 for j ≤ -n.This can be used to calculate⟨Θ^+⟩ = ∑_{j=1}^∞ Φ(j) + ∑_{j=1-n}^{0} Φ(j) Y^g = [1 - Yρ + (1-ρ)((Yρ)^{-n} - 1) Y^g]/[ρ(1-Y) + (1-ρ)(Yρ)^{-n}],and⟨Θ^-⟩ = ∑_{j=2}^∞ Φ(j) + ∑_{j=2-n}^{1} Φ(j) Y^{g-1} = [ρ(1-Yρ) + (1-ρ)((Yρ)^{-n+1} - Yρ) Y^{g-1}]/[ρ(1-Y) + (1-ρ)(Yρ)^{-n}].The factors ⟨Θ^+⟩ and ⟨Θ^-⟩ are subsequently used in the calculation of the polymerase velocity and fidelity.Interestingly, we note that ⟨Θ^-⟩/⟨Θ^+⟩ = ρ, exactly as in the passive case. Since the mean error rate depends only on this ratio, we find that the motor's fidelity is independent of the elastic interaction and is given by Eq. (<ref>). The motor's velocity does depend on the elastic interaction and is given byv = [1 - Yρ + (1-ρ)((Yρ)^{-n} - 1) Y^g] [ρ(r^+_w + r^+_c) - ⟨r^-⟩]/[ρ(1-Y) + (1-ρ)(Yρ)^{-n}],which follows from v = (r^+_w + r^+_c)⟨Θ^-⟩ - ⟨r^-⟩⟨Θ^+⟩ together with ⟨Θ^-⟩ = ρ⟨Θ^+⟩.The analytical predictions of Eqs. (<ref>) and (<ref>) were compared to simulations in Fig. <ref>. Figure <ref>(a) shows a comparison of the mean velocity of polymerization as a function of monomer concentration for various values of U_0. The U_0 →∞ results correspond to a passive motor with a hard wall interaction. The concentration of PPi and the kinetic discrimination are as in the passive unwinding. Lines, again, correspond to the prediction of Eq. (<ref>), while symbols are the simulation results. The kinetic coefficients of the junction were set to q^+=1, q^-=7 for all the results depicted in the figure. Excellent agreement is found between Eq. (<ref>) and the simulation results. The polymerization velocity clearly increases with increasing concentration, but its value depends on the interaction. The two active motors depicted in Fig.
<ref>(a) are clearly faster than a passive motor at the same monomer concentration. A comparison of the two motors with n=3 shows that the mean velocity also depends on the step height U_0. The fact that v depends on U_0 is precisely the effect found for helicases by Betterton and Jülicher <cit.>.The dependence of the copying fidelity on the monomer concentration is shown in Fig. <ref>(b). The steady-growth ansatz predicts an interaction-independent result, given by Eq. (<ref>). We performed simulations for several models with different interactions between the polymerase and the junction, including a passive motor and two active models with different three-step potentials. All show the same fidelity as a function of concentration. This fidelity is still affected by the presence of the obstacle through the kinetic coefficients q^±, and is therefore different from that of a freely propagating polymerase, which is also depicted in the figure.§ DISCUSSION Kinetic discrimination is one of the mechanisms employed by DNA polymerases and similar biological motors to increase the fidelity of copying of genetic information. This mechanism exhibits a trade-off between dissipation and fidelity. In the context of copolymerization this trade-off was studied for instance by Andrieux and Gaspard <cit.>. As the system is driven further away from equilibrium, its mean velocity increases, and it attains a lower rate of copying errors.In helicases, the mean propagation velocity can be used to deduce thermodynamic features of the motor's interaction with an obstacle <cit.>. Experimentally, it is easier to count the copying errors in the copied strand than to follow the rate of copying.The existence of polymerases which simultaneously copy and open single- to double-stranded junctions offers an interesting alternative. Is it possible to extract information about polymerase and junction dynamics from the enzyme's fidelity? More specifically, can one deduce whether the polymerase-junction interaction is active or passive?The results presented in this paper help to clarify such questions. The simple model of a polymerase studied here slows down when it encounters a junction. This slowdown is indeed accompanied by an increased rate of copying errors, as can be seen from a comparison of the results for freely propagating and non-freely propagating systems depicted in Figs. <ref>(b) and <ref>(b). This is exactly what one would anticipate in a model employing kinetic discrimination. However, our results show that the suggested correlation between the mean copying velocity and fidelity, where faster copying means better fidelity, is not always present. Upon encountering an obstacle, the model predicts an unexpected partial decoupling between mean velocity and error rate. The copying fidelity, as expressed by Eq. (<ref>), is independent of details of the polymerase-junction interaction. The presence of a junction still affects the probability of copying errors, but only through the kinetic part of the transition rates (given by q^±). In contrast, the mean velocity clearly depends on all model parameters, including the interaction. In fact, we find that active unwinding can result in a higher rate of copying than that of a passive polymerase, in agreement with the results of Betterton and Jülicher <cit.>. The three non-freely propagating models in Fig. <ref> have different velocities, due to the difference between active and passive interactions. At the same time they all exhibit the same fidelity.
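These closed-form predictions are straightforward to evaluate. The sketch below (with illustrative parameter values; only d = 70 and q^- = 7 q^+ follow choices quoted in the text) computes the passive-case error probability and velocity from the equations above, and the active-case velocity for the staircase potential. The error probability is reused unchanged for the active case, reflecting its independence of the interaction.

```python
# Sketch (not the authors' code) of the steady-growth predictions.
# Absolute rate scales are assumptions.
import math

def passive_mu_v(rwp, rcp, rwm, rcm, qp, qm):
    """Error probability mu(w) from the passive-case quadratic,
    then rho and the mean velocity v = q+ - q- * rho."""
    a = qm * (rcm - rwm)
    z = qp * (rwp + rcp) - qm * (rcm - rwm) + rwm * rcp + rcm * rwp
    mu_w = (-z + math.sqrt(z * z + 4.0 * a * rwp * (rcm + qp))) / (2.0 * a)
    r_minus = mu_w * rwm + (1.0 - mu_w) * rcm      # <r-> = sum_m mu(m) r^-_m
    rho = (r_minus + qp) / (rwp + rcp + qm)
    return mu_w, qp - qm * rho

def active_v(rwp, rcp, rwm, rcm, qp, qm, U0, g, n, mu_w):
    """Velocity for the n-step staircase potential of step U0 (in kBT);
    mu(w) is unchanged because <Theta->/<Theta+> = rho here as well."""
    Y = math.exp(U0)
    r_minus = mu_w * rwm + (1.0 - mu_w) * rcm
    rho = (r_minus + qp) / (rwp + rcp + qm)
    denom = rho * (1.0 - Y) + (1.0 - rho) * (Y * rho) ** (-n)
    theta_p = (1.0 - Y * rho
               + (1.0 - rho) * ((Y * rho) ** (-n) - 1.0) * Y ** g) / denom
    return theta_p * (rho * (rwp + rcp) - r_minus)

# illustrative rates: d = 70, q- = 7 q+, absolute scales assumed
rwm, qp = 0.1, 1.0
rcm, qm = 70.0 * rwm, 7.0
rwp, rcp = 2.0, 70.0 * 2.0
mu_w, v_pas = passive_mu_v(rwp, rcp, rwm, rcm, qp, qm)
v_act = active_v(rwp, rcp, rwm, rcm, qp, qm, U0=1.0, g=0.5, n=3, mu_w=mu_w)
print(f"mu(w) = {mu_w:.3f}, v_passive = {v_pas:.3f}, v_active = {v_act:.3f}")
# -> mu(w) ~ 0.125, v_passive ~ 0.66, v_active ~ 1.10: faster, same fidelity
```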
One still expects that when more parameters are varied, such as the monomer concentration or q^±, a larger velocity will typically be accompanied by better fidelity, but it is important to point out that this is not always the case. One may wonder whether the independence of the fidelity from the interaction is a particular property of the model studied here. Maybe the result will break down for an interaction that is not described by a staircase potential? As explained below, the interaction independence of the fidelity found for the model is a result of the topology of the internal state space of the system. Specifically, it emerges from the fact that this state space is one-dimensional, and therefore does not include non-trivial closed loops of transitions. Models with different interactions would exhibit the same fidelity as long as they have a one-dimensional internal state space.In the steady-growth ansatz, the distribution to find the polymerase at different internal states, Φ(j), becomes time independent and furthermore satisfies Eq. (<ref>). This equation can be recast asI_{j,j-1} - I_{j+1,j} = 0,whereI_{j+1,j} ≡ (⟨r^-⟩ + q^+) Θ^+(j) Φ(j) - (∑_m r^+_m + q^-) Θ^-(j+1) Φ(j+1)is the net flux of transitions from j to j+1. The steady-state solution for Φ must therefore satisfy I_{j+1,j} = C, where C is a j-independent constant.For any reasonable model of a polymerase that unwinds a junction one expects that Φ(j) → 0 for j →∞, expressing the fact that the polymerase and junction tend to move closer to each other. One also expects that Φ(j) → 0 for j → -∞, since otherwise the model would allow the polymerase to simply pass through the junction without unwinding it. These considerations mean that the constant C must vanish, demonstrating that the detailed balance condition (<ref>) holds for quite general U(j). One should not take the appearance of the detailed balance condition as evidence that the system is in thermal equilibrium. The model exhibits steady growth with a nonvanishing rate of copying, and is therefore clearly out of equilibrium. Nevertheless, the internal state space does relax to some kind of local equilibrium.The detailed balance condition, Eq. (<ref>), is the underlying reason for the interaction-independent fidelity found in Sec. <ref>. Indeed, summation of Eq. (<ref>) over j results in⟨Θ^-⟩/⟨Θ^+⟩ = ρ,for any U(j) that is consistent with distributions Φ(j) that vanish at infinity. Examination of Eq. (<ref>) shows that the mean error rate depends on ⟨Θ^-⟩ and ⟨Θ^+⟩ only through their ratio, ρ. The fact that this ratio is independent of the elastic interaction U means that the mean error rate is also independent of the form of U(j). The generation of this internal detailed balance, and the resulting independence of the fidelity from the elastic interaction, are an interesting manifestation of the dynamics of copying machines. But how relevant is this phenomenon for the few biological polymerases that remove obstacles on their own, such as reverse transcriptase? Can one deduce that their fidelity does not depend on the elastic interaction with a junction? Such a conclusion would be too hasty. One should be aware that the model we constructed oversimplifies several important aspects of the dynamics. One drastic assumption we made was to view the polymerase as a point particle. The polymerases in our cells are proteins of a finite size. They can be squeezed by the application of an external force.A flexible finite-sized polymerase can be modeled as an elastic spring.
One can generalize the model studied here to include this aspect by considering a system with two internal degrees of freedom, namely the size of the polymerase and the distance from the edge of the polymerase to the junction. One should also include two types of elastic interactions: a quadratic potential for the size of the polymerase, and a more general interaction between the polymerase and junction. The crucial point is that this expanded model has an internal state space which is no longer one-dimensional.As a result, the probability distribution in this space decays to a nonequilibrium steady state in the steady-growth regime. This is expected to lead to some degree of dependence of the fidelity on the elastic potential U. Further research is required to find out whether this effect can be large.Comparison between the model studied here and biological polymerases is further complicated by several additional factors. We assumed a model with purely kinetic discrimination, but, for instance, the fidelity of reverse transcriptase is a result of a mixture of kinetic and energetic discrimination. We have assumed equal, constant and uniform concentrations – a condition that is unlikely to hold in vivo. In addition, one expects the incorporation of monomers to depend on the identity of their neighbors, leading to correlations in the copied strand composition. All these elements must be included in the model before any biologically relevant predictions can be made.The approach taken here, namely studying a simplified version of the dynamics, should be viewed as a way of obtaining qualitative understanding of copying machines, focusing on the interplay between the velocity, the fidelity, and interactions with an obstacle. It has the advantage of resulting in simple analytical expressions for observables, whose study can lead to qualitative insights. It is certainly worthwhile to include the additional aspects needed for a quantitative comparison with biological copying machines. But in our view there is much to gain by first studying models in which the roles of different mechanisms can be investigated separately. Before concluding, we would like to point out an interesting qualitative property of the dynamics.Our results show that the mean velocity and the fidelity contain non-overlapping information regarding the polymerase-junction interaction and kinetics. This is clearly indicated by the results depicted in Fig. <ref>. One should therefore strive to obtain data on both observables, and not be satisfied with measurements of only one of them. We expect this conclusion to be rather robust, and therefore to hold also for biological polymerases.§ ACKNOWLEDGEMENTSWe thank Omri Malik and Ariel Kaplan for illuminating discussions that have initiated our interest in this topic.This work was supported by the U.S.-Israel Binational Science Foundation (Grant No. 2014405), by the Israel Science Foundation (Grant No. 1526/15), and by the Henri Gutwirth Fund for the Promotion of Research at the Technion.
http://arxiv.org/abs/1704.08945v1
{ "authors": [ "Ilana Bogod", "Saar Rahav" ], "categories": [ "q-bio.SC", "cond-mat.soft", "cond-mat.stat-mech", "physics.bio-ph" ], "primary_category": "q-bio.SC", "published": "20170426123204", "title": "Kinetic discrimination of a polymerase in the presence of obstacles" }
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)CERN-EP-2017-052 LHCb-PAPER-2017-005 September 27, 2017 Observation of charmless baryonic decays B^0_(s)→ p p h^+ h^'- The LHCb collaboration[Authors are listed at the end of this article.] Decays of B^0 and B_s^0 mesons to the charmless baryonic final states p p h^+ h^'-, where h and h^' each denote a kaon or a pion, are searched for using the LHCb detector. The analysis is based on a sample of proton-proton collision data collected at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb^-1. Four-body charmless baryonic decays are observed for the first time. The decays B^0_s→ p p K^+ K^-, B^0_s→ p p K^±π^∓, B^0→ p p K^±π^∓ and B^0→ p pπ^+π^- are observed with a significance greater than 5 standard deviations; evidence at 4.1 standard deviations is found for the B^0→ p p K^+ K^- decay and an upper limit is set on the branching fraction for B^0_s→ p pπ^+ π^-. Branching fractions in the kinematic region m(p p)<2850 MeV/c^2 are measured relative to the B^0 → J/ψ (→ p p) K^*(892)^0 channel. Published in Phys. Rev. D 96 (2017) 051103(R)  © CERN on behalf of the LHCb collaboration, licence CC-BY-4.0 (http://creativecommons.org/licenses/by/4.0/). In recent years, studies by the LHCb collaboration have greatly increased the knowledge of the decays of B mesons to final states containing baryons. The first observation of a baryonic B_c^+ decay was reported in 2014 <cit.>, and recently LHCb reported the first observation of a baryonic B_s^0 decay <cit.>, the last of the four B meson species for which a baryonic decay mode had yet to be observed. Primary areas of interest in baryonic B decays include the hierarchy of branching fractions to the various decay modes, the presence of resonances and the existence of a threshold enhancement in the baryon-antibaryon mass spectrum <cit.>. First evidence of CP violation in baryonic B decays has been reported from an analysis of B^±→ p p K^± decays <cit.>. It is of great interest to search for further manifestations of CP violation in baryonic decays, with so-called triple-product correlations (TPCs), see Ref. <cit.> and references therein. For certain decays asymmetries of up to 20% are predicted <cit.>. Four-body decays are particularly suited for this approach since the definitions of the TPCs do not involve the spins of the final-state particles, unlike TPCs in three-body decays <cit.>.This paper presents a search for the decays of B^0 and B^0_s mesons to the four-body charmless baryonic final states p p h^+ h^'-, where h and h^' each denote a kaon or a pion. The inclusion of charge-conjugate processes is implied, unless otherwise indicated. For simplicity, the charges of the h^+ h^'- combinations will be omitted unless necessary.The branching fractions of these baryonic decays are measured relative to the B^0 → J/ψ (→ p p) K^*(892)^0 channel. So far only the resonant decay B^0→ p p K^*(892)^0 has been seen, by the BaBar <cit.> and Belle <cit.> collaborations, which measured its branching fraction to be ℬ(B^0→ p p K^*(892)^0) = (1.24^+0.28_-0.25)× 10^-6 <cit.>. An upper limit ℬ(B^0→ p pπ^+π^-) < 2.5 × 10^-4 at 90% confidence level has been set by the CLEO collaboration <cit.>.The data sample analyzed corresponds to an integrated luminosity of 1 fb^-1 of proton-proton collision data at a center-of-mass energy of 7 TeV and 2 fb^-1 at 8 TeV. The LHCb detector <cit.> is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks.
The detector elements that are particularly relevant to this analysis are as follows: a silicon-strip vertex detector surrounding the proton-proton interaction region that allows c and b hadrons to be identified from their characteristically long flight distance; a tracking system that provides a measurement of the momentum, p, of charged particles; two ring-imaging Cherenkov detectors that are able to discriminate between different species of charged hadrons; and calorimeter and muon systems for the measurement of photons and neutral hadrons, and the detection of penetrating charged particles. Simulated data samples, produced with software described in Refs. <cit.>, are used to evaluate the response of the detector and to investigate possible sources of background. Real-time event selection is performed by a trigger <cit.> that consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which performs a full event reconstruction. The hardware trigger stage requires events to have a muon with high transverse momentum, p_T, or a hadron, photon or electron with high transverse energy in the calorimeters. Signal candidates may come from events where the hardware trigger was caused either by signal particles or by other particles in the event. The software trigger requires a two-, three- or four-track secondary vertex with a significant displacement from any primary proton-proton interaction vertices (PVs). At least one charged particle must have p_T > 1.6 GeV/c and be inconsistent with originating from a PV. A multivariate algorithm <cit.> is used for the identification of secondary vertices consistent with the decay of a b hadron. The final selection of candidates, formed by combining four charged hadron candidates – a proton, an antiproton and an oppositely charged pair of light mesons – is carried out with a filtering stage followed by requirements on the response of a boosted decision tree (BDT) classifier <cit.> and on particle identification (PID). The filtering stage includes requirements on the quality, p_T and χ²_IP of the tracks, loose PID requirements and an upper limit on the p p h^+ h^'- invariant mass; the χ²_IP is defined as the difference between the vertex-fit χ² of a PV reconstructed with and without the track in question. Each candidate must have a good-quality vertex that is displaced from the associated PV (that with which it forms the smallest χ²_IP), must satisfy p_T and χ²_IP requirements, and must have a reconstructed invariant mass close to that of a B meson under the signal mass hypothesis. A requirement is also imposed on the angle θ_dir between the B candidate momentum vector and the line between the associated PV and the candidate decay vertex. There are 15 input quantities to the BDT classifier: the p_T, η, χ²_IP, θ_dir and flight distance of the B candidate; the quality of the B vertex fit; the p_T and χ²_IP of the tracks; and the largest distance of closest approach between any pair of tracks. The BDT is trained using simulated signal candidates, generated with uniform distributions over phase space, and events in a high sideband of the B candidate invariant mass in data (m(p p h^+ h^'-) in the range 5450–5550 MeV/c^2) to represent the background. Tight PID requirements are applied to all final-state particles to reduce the combinatorial background, suppress the cross-feed backgrounds between the different p p h^+ h^'- final states — background from other signal decays where one particle is misidentified — and ensure that the datasets for the three final states are mutually exclusive.
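As an illustration of this kind of selection (not the analysis code), a gradient-boosted classifier can be trained on simulated signal and sideband data with features like those listed above; everything below, including the randomly generated feature placeholders, is hypothetical and assumes scikit-learn is available.

```python
# Illustrative only (not the LHCb analysis code): a BDT-style selection.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
# stand-ins for the 15 inputs (pT, eta, chi2_IP, theta_dir, flight distance,
# vertex quality, track pT/chi2_IP, DOCA); here just random placeholders
X_sig = rng.normal(loc=0.5, size=(n, 15))   # "simulated signal" (placeholder)
X_bkg = rng.normal(loc=-0.5, size=(n, 15))  # "high-sideband data" (placeholder)
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)
response = bdt.decision_function(X)  # a cut on this response is then optimized
```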
For each final state individually, the requirements on the PID and BDT response are optimized for the signal significance using simulation samples for the signal. After all selection requirements are applied, approximately 3% of events with at least one candidate also contain a second candidate; in such cases a candidate is selected at random. The efficiency of the full reconstruction and selection, including the acceptance and the trigger selection, is approximately 0.1%. To reject contributions from intermediate charm states, candidates in which a combination of the final-state particles has an invariant mass consistent with a charm meson or with a charm baryon are removed. The contribution from the charmonium region is removed by requiring the invariant mass of the p p pair to be less than 2850 MeV/c^2, similar to the procedure in Refs. <cit.>. This last requirement is not applied to the normalization mode B^0 → J/ψ (→ p p) K^*(892)^0, where the vector mesons are reconstructed in the J/ψ→ p p and K^*(892)^0→ K^+π^- decay modes. All the other steps of the selection are in common for the signal and the normalization modes. The yields of the signal decays are obtained from a simultaneous unbinned extended maximum likelihood fit to the candidate p p h^+ h^'- invariant mass distributions in the three final states in the range 5165–5525 MeV/c^2. This approach accounts for potential cross-feed from one channel to another due to particle misidentification. Each signal component is modeled with a double-sided Crystal Ball (DSCB) function <cit.>. For each signal the tail parameters of the DSCB functions are determined from simulation. The peak positions of the signals are common to the three final states, while the difference between the peak positions of the B^0 and B^0_s signals is constrained to its known value <cit.>. The width of the B^0 signal is a free parameter in one of the final states and is related to the widths in the other two final states by scale factors determined from simulation; the same applies to the width of the B^0_s signals, which is a free parameter in only one final state. For each final state the dominant cross-feed background, arising from another signal mode with a misidentified kaon or pion, is included. Each cross-feed background is modeled with a DSCB function with all the shape parameters fixed according to simulation; the yield is fixed relative to the yield in the correctly reconstructed final state taking into account the (mis)identification probabilities calibrated using data, as described below. In addition, a combinatorial background component modeled by an exponential function, with both parameters free to vary, is present for each final state. The yield of the normalization decay is determined from a separate simultaneous fit to the p p K^+π^-, p p and K^+π^- invariant mass distributions in the ranges 5180–5380, 3047–3147 and 642–1092 MeV/c^2, respectively. The K^*(892)^0 component is parameterized in the K^+π^- invariant mass distribution by a relativistic spin-1 Breit-Wigner function and in the p p K^+π^- and p p invariant mass distributions by DSCB functions with the tail parameters fixed from simulation. The K^+π^- S-wave component is modeled in the K^+π^- invariant mass distribution by the LASS parametrization <cit.> that describes nonresonant and K^*_0(1430)^0 S-wave contributions; this component is modeled in the p p K^+π^- and p p invariant mass distributions with the same shape as the K^*(892)^0 component. A combinatorial background component modeled by a freely varying exponential function is also present in each spectrum. The p p h^+ h^'- invariant mass distributions with the results of the fit overlaid are shown in Fig. <ref>, while the signal yields and significances are collected in Table <ref>.
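For reference, the sketch below implements one common double-sided Crystal Ball parameterization (a Gaussian core with independent power-law tails on each side); the tail constants enforce continuity of the function and its first derivative at the junctions, and all numerical values are illustrative, not the fitted parameters.

```python
# Sketch of a double-sided Crystal Ball shape (not the LHCb fit code).
import numpy as np

def dscb(x, mu, sigma, a_l, n_l, a_r, n_r):
    """Unnormalized DSCB: Gaussian core for -a_l <= t <= a_r,
    power-law tails outside; a_l, a_r > 0 set the hand-over points."""
    t = (np.asarray(x, dtype=float) - mu) / sigma
    out = np.exp(-0.5 * np.clip(t, -a_l, a_r) ** 2)   # core value
    left, right = t < -a_l, t > a_r
    # tail constants chosen to match value and first derivative
    A_l = (n_l / a_l) ** n_l * np.exp(-0.5 * a_l ** 2)
    B_l = n_l / a_l - a_l
    A_r = (n_r / a_r) ** n_r * np.exp(-0.5 * a_r ** 2)
    B_r = n_r / a_r - a_r
    out[left] = A_l * (B_l - t[left]) ** (-n_l)
    out[right] = A_r * (B_r + t[right]) ** (-n_r)
    return out

# illustrative B0-like peak over the fit range quoted above (MeV/c^2)
m = np.linspace(5165.0, 5525.0, 361)
y = dscb(m, mu=5279.0, sigma=17.0, a_l=1.5, n_l=3.0, a_r=2.0, n_r=5.0)
```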
The significance of each of the signal modes is determined from the change in likelihood when the corresponding yield is fixed to zero, with systematic uncertainties taken into account <cit.>. The B^0_s→ p p K^+K^-, B^0→ p p K^+K^- and B^0_s→ p pπ^+π^- modes are found to have significances of 6.5 standard deviations (σ), 4.1 σ and 2.6 σ, respectively, while the other signal modes have significances greater than 25 σ.The branching fractions of the B→ p p h^+ h^'- decays are determined relative to the visible branching fraction of the B^0→ J/ψ (→ p p) K^*(892)^0 decay usingℬ(B→ p p h^+ h^'-)/ℬ_vis(B^0→ J/ψ (→ p p) K^*(892)^0) = [𝒩^corr(B→ p p h^+ h^'-)/𝒩^corr(B^0→ J/ψ (→ p p) K^*(892)^0)] (× f_d/f_s),where f_s/f_d = 0.259 ± 0.015 (the factor f_d/f_s is included only for the B^0_s modes) is the ratio of the hadronization probabilities, f_q, of a b quark into the hadron B_q <cit.>, and 𝒩^corr denote efficiency-corrected fitted signal yields. The yields are obtained from the mass fits, while simulation is used to evaluate the contribution to the efficiency from each stage of the selection except for the effect of the PID criteria. The latter is determined from calibration data samples of kinematically identified pions, kaons and protons originating from dedicated calibration channels, weighted according to the kinematics of the signal particles <cit.>. For each final state the efficiencies are determined as a function of the position in phase space, and efficiency corrections for each candidate are applied using the method of Ref. <cit.> to take the variation over the phase space into account. Explicitly, 𝒩^corr = ∑_i 𝒲_i/ϵ_i, where the sum runs over the candidates in the fit, 𝒲_i is the sWeight for candidate i determined with the sPlot method <cit.>, and ϵ_i is the efficiency for candidate i, which depends only on its position in the five-dimensional phase space. The visible branching fraction of the normalization mode, defined as ℬ(B^0→ J/ψ K^*(892)^0) × ℬ(J/ψ→ p p) × ℬ(K^*(892)^0→ K^+π^-), is ℬ_vis = (1.68 ± 0.12) × 10^-6, where the B^0→ J/ψ K^*(892)^0 branching fraction is taken from Ref. <cit.> and the others from Ref. <cit.>. The branching fraction of each signal mode is reported in Table <ref>. The significance for the B^0_s→ p pπ^+π^- mode is less than 3 σ; an upper limit on its branching fraction is found to beℬ(B^0_s→ p pπ^+π^-) < 6.6 × 10^-7,by integrating the likelihood after multiplying by a prior probability distribution that is uniform in the region of positive branching fraction. The values of the ratios of branching fractions between the different B→ p p h^+ h^'- decay modes are reported in Table <ref>. The signal distributions in m(h^+ h^'-) and m(p p) are obtained by subtracting the background using the sPlot technique <cit.>, with the B candidate invariant mass as the discriminating variable. Per-candidate weights are applied to correct for the variation of the selection efficiency over the phase space. Figure <ref> shows the h^+ h^'- invariant mass distributions of the B^0→ p p K^±π^∓, B^0_s→ p p K^+K^- and B^0→ p pπ^+π^- decay modes. A peak from a vector meson is identifiable in each mass spectrum, corresponding to a K^*(892)^0, a φ(1020) and a ρ(770)^0 meson, respectively. The p p invariant mass distributions are also shown for the same decay modes. An enhancement near threshold, typical in baryonic B decays <cit.>, is clearly visible in each case. Detailed amplitude analyses of the B→ p p h^+ h^'- decays will be of interest with larger samples.The sources of systematic uncertainty on the absolute branching fractions and on the ratios of branching fractions arise from the fit model, the knowledge of the efficiencies and, where appropriate, from the uncertainties on the branching fraction of the normalization mode and on the ratio of b-quark hadronization probabilities. Pseudoexperiments are used to estimate the effect of using alternative shapes for the fit components, or of including additional components in the fit.
In particular, the effect of adding other cross-feed backgrounds, partially reconstructed backgrounds and components coming from b-baryon decays has been investigated. These are the dominant sources of systematic uncertainty for two of the modes. The effect of fixing the yields of the cross-feed backgrounds based on the (mis)identification probabilities is also assessed by varying these probabilities within their uncertainties. Intrinsic biases in the fitted yields are investigated with pseudoexperiments and are found to be negligible. Uncertainties on the efficiencies arise due to the limited size of the simulation samples, the uncertainty on their evaluated distributions across the phase space of the decays, and from possible residual differences between data and simulation. The unknown decay kinematics are the principal source of systematic uncertainty for one mode, while for the other modes the dominant source of systematic uncertainty comes from the uncertainty on the efficiency of the hardware stage of the trigger. As the efficiencies depend on the signal decay-time distribution, the effect coming from the different lifetimes of the B^0_s mass eigenstates has been evaluated. The systematic uncertainties due to the vetoes of charm hadrons are also included.

In summary, a search for the four-body charmless baryonic decays B^0_(s) → p p̄ h^+ h'^- has been carried out by the LHCb collaboration with a sample of proton-proton collision data corresponding to an integrated luminosity of 3 fb^-1. First observations are obtained for four of the decay modes, including a nonresonant component, while first evidence is reported for a fifth mode and an upper limit is set on the remaining branching fraction. In particular, four-body charmless baryonic decays are observed for the first time, and a threshold enhancement in the baryon-antibaryon mass spectra is confirmed for baryonic B decays <cit.>. The LHCb collaboration has recently published studies of CP violation with four-body b-baryon decays using triple-product correlations, and presented first evidence for CP violation in baryons <cit.>. The decays of B^0 and B^0_s mesons to p p̄ h^+ h'^- final states reported in this paper may be used in the future for similar studies of CP violation in baryonic decays.

§ ACKNOWLEDGEMENTS

We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open source software packages on which we depend.
Individual groups or members have received support from AvH Foundation (Germany), EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union), Conseil Général de Haute-Savoie, Labex ENIGMASS and OCEVU, Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851 and the Leverhulme Trust (United Kingdom).

§ REFERENCES

[LHCb-PAPER-2014-039] LHCb collaboration, R. Aaij et al., First observation of a baryonic B_c^+ decay, Phys. Rev. Lett. 113 (2014) 152003, arXiv:1408.0971.
[LHCb-PAPER-2017-012] LHCb collaboration, R. Aaij et al., First observation of a baryonic B^0_s decay, arXiv:1704.07908, submitted to Phys. Rev. Lett.
[Hou:2000bz] W.-S. Hou and A. Soni, Pathways to rare baryonic B decays, Phys. Rev. Lett. 86 (2001) 4247, arXiv:hep-ph/0008079.
[Bevan:2014iga] BaBar and Belle collaborations, A. J. Bevan et al., The physics of the B factories, Eur. Phys. J. C74 (2014) 3026, arXiv:1406.6311.
[LHCb-PAPER-2014-034] LHCb collaboration, R. Aaij et al., Evidence for CP violation in B^+ → p p̄ K^+ decays, Phys. Rev. Lett. 113 (2014) 141801, arXiv:1407.5907.
[Geng:2008ps] C. Q. Geng and Y. K. Hsiao, Direct CP and T violation in baryonic B decays, Int. J. Mod. Phys. A23 (2008) 3290, arXiv:0801.0022.
[Geng:2006jt] C. Q. Geng, Y. K. Hsiao, and J. N. Ng, Direct CP violation in B^± → p p̄ K^(*)±, Phys. Rev. Lett. 98 (2007) 011801, arXiv:hep-ph/0608328.
[Gronau:2011cf] M. Gronau and J. L. Rosner, Triple product asymmetries in K, D_(s) and B_(s) decays, Phys. Rev. D84 (2011) 096013, arXiv:1107.1232.
[Aubert:2007qea] BaBar collaboration, B. Aubert et al., Evidence for the B^0 → p p̄ K^*0 and B^+ → η_c K^*+ decays and study of the decay dynamics of B meson decays into p p̄ h final states, Phys. Rev. D76 (2007) 092004, arXiv:0707.1648.
[Chen:2008jy] Belle collaboration, J. H. Chen et al., Observation of B^0 → p p̄ K^*0 with a large K^*0 polarization, Phys. Rev. Lett. 100 (2008) 251801, arXiv:0802.0336.
[PDG2016] Particle Data Group, C. Patrignani et al., Review of particle physics, Chin. Phys. C40 (2016) 100001.
[Bebek:1988jy] CLEO collaboration, C. Bebek et al., Search for the charmless decays B → p p̄ π and p p̄ π π, Phys. Rev. Lett. 62 (1989) 8.
[Alves:2008zz] LHCb collaboration, A. A. Alves Jr. et al., The LHCb detector at the LHC, JINST 3 (2008) S08005.
[LHCb-DP-2014-002] LHCb collaboration, R. Aaij et al., LHCb detector performance, Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352.
[Sjostrand:2006za] T. Sjöstrand, S. Mrenna, and P. Skands, PYTHIA 6.4 physics and manual, JHEP 05 (2006) 026, arXiv:hep-ph/0603175.
[Sjostrand:2007gs] T. Sjöstrand, S. Mrenna, and P. Skands, A brief introduction to PYTHIA 8.1, Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820.
[LHCb-PROC-2010-056] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, J. Phys. Conf. Ser. 331 (2011) 032047.
[Lange:2001uf] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152.
[Golonka:2005pn] P. Golonka and Z. Was, PHOTOS Monte Carlo: A precision tool for QED corrections in Z and W decays, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026.
[Allison:2006ve] Geant4 collaboration, J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270.
[Agostinelli:2002hh] Geant4 collaboration, S. Agostinelli et al., Geant4: A simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250.
[LHCb-PROC-2011-006] M. Clemencic et al., The LHCb simulation application, Gauss: Design, evolution and experience, J. Phys. Conf. Ser. 331 (2011) 032023.
[LHCb-DP-2012-004] R. Aaij et al., The LHCb trigger and its performance in 2011, JINST 8 (2013) P04022, arXiv:1211.3055.
[BBDT] V. V. Gligorov and M. Williams, Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree, JINST 8 (2013) P02013, arXiv:1210.6861.
[Breiman] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and regression trees, Wadsworth International Group, Belmont, California, USA, 1984.
[AdaBoost] Y. Freund and R. E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst. Sci. 55 (1997) 119.
[LHCb-PAPER-2016-001] LHCb collaboration, R. Aaij et al., Search for B_c^+ decays to the p p̄ π^+ final state, Phys. Lett. B759 (2016) 313, arXiv:1603.07037.
[Skwarnicki:1986xj] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02.
[Aston:1987ir] D. Aston et al., A study of K^- π^+ scattering in the reaction K^- p → K^- π^+ n at 11 GeV/c, Nucl. Phys. B296 (1988) 493.
[Aubert:2008zza] BaBar collaboration, B. Aubert et al., Time-dependent and time-integrated angular analysis of B → φ K^0_S π^0 and φ K^± π^∓, Phys. Rev. D78 (2008) 092008, arXiv:0808.3586.
[Wilks:1938dza] S. S. Wilks, The large-sample distribution of the likelihood ratio for testing composite hypotheses, Ann. Math. Stat. 9 (1938) 60.
[fsfd] LHCb collaboration, R. Aaij et al., Measurement of the fragmentation fraction ratio f_s/f_d and its dependence on B meson kinematics, JHEP 04 (2013) 001, arXiv:1301.5286; f_s/f_d value updated in LHCb-CONF-2013-011.
[LHCb-DP-2012-003] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, Eur. Phys. J. C73 (2013) 2431, arXiv:1211.6759.
[Anderlini:2202412] L. Anderlini et al., The PIDCalib package, LHCb-PUB-2016-021.
[LHCb-PAPER-2012-018] LHCb collaboration, R. Aaij et al., Observation of B^0 → D̄^0 K^+ K^- and evidence for B^0_s → D̄^0 K^+ K^-, Phys. Rev. Lett. 109 (2012) 131801, arXiv:1207.5991.
[Pivk:2004ty] M. Pivk and F. R. Le Diberder, sPlot: A statistical tool to unfold data distributions, Nucl. Instrum. Meth. A555 (2005) 356, arXiv:physics/0402083.
[Chilikin:2014bkk] Belle collaboration, K. Chilikin et al., Observation of a new charged charmoniumlike state in B̄^0 → J/ψ K^- π^+ decays, Phys. Rev. D90 (2014) 112009, arXiv:1408.6457.
[LHCb-PAPER-2016-030] LHCb collaboration, R. Aaij et al., Measurement of matter-antimatter differences in beauty baryon decays, Nature Physics 13 (2017) 391, arXiv:1609.05216.

LHCb collaboration

R. Aaij^40, B. Adeva^39, M. Adinolfi^48, Z. Ajaltouni^5, S. Akar^59, J. Albrecht^10, F. Alessio^40, M. Alexander^53, S. Ali^43, G. Alkhazov^31, P. Alvarez Cartelle^55, A.A. Alves Jr^59, S. Amato^2, S. Amerio^23, Y. Amhis^7, L. An^3, L. Anderlini^18, G. Andreassi^41, M. Andreotti^17,g, J.E. Andrews^60, R.B. Appleby^56, F. Archilli^43, P. d'Argent^12, J. Arnau Romeu^6, A. Artamonov^37, M. Artuso^61, E. Aslanides^6, G. Auriemma^26, M. Baalouch^5, I. Babuschkin^56, S. Bachmann^12, J.J. Back^50, A. Badalov^38, C. Baesso^62, S. Baker^55, V. Balagura^7,c, W. Baldini^17, A. Baranov^35, R.J. Barlow^56, C. Barschel^40, S. Barsuk^7, W. Barter^56, F. Baryshnikov^32, M. Baszczyk^27,l, V. Batozskaya^29, B. Batsukh^61, V. Battista^41, A. Bay^41, L. Beaucourt^4, J. Beddow^53, F. Bedeschi^24, I. Bediaga^1, A. Beiter^61, L.J. Bel^43, V. Bellee^41, N. Belloli^21,i, K. Belous^37, I. Belyaev^32, E. Ben-Haim^8, G. Bencivenni^19, S. Benson^43, S. Beranek^9, A. Berezhnoy^33, R. Bernet^42, A. Bertolin^23, C. Betancourt^42, F. Betti^15, M.-O. Bettler^40, M. van Beuzekom^43, Ia. Bezshyiko^42, S. Bifani^47, P. Billoir^8, A. Birnkraut^10, A. Bitadze^56, A. Bizzeti^18,u, T. Blake^50, F. Blanc^41, J. Blouw^11,†, S. Blusk^61, V. Bocci^26, T. Boettcher^58, A. Bondar^36,w, N. Bondar^31, W. Bonivento^16, I. Bordyuzhin^32, A. Borgheresi^21,i, S. Borghi^56, M. Borisyak^35, M. Borsato^39, F. Bossu^7, M. Boubdir^9, T.J.V. Bowcock^54, E. Bowen^42, C. Bozzi^17,40, S. Braun^12, T. Britton^61, J. Brodzicka^56, E. Buchanan^48, C. Burr^56, A.
Bursche^16, J. Buytaert^40, S. Cadeddu^16, R. Calabrese^17,g, M. Calvi^21,i, M. Calvo Gomez^38,m, A. Camboni^38, P. Campana^19, D.H. Campora Perez^40, L. Capriotti^56, A. Carbone^15,e, G. Carboni^25,j, R. Cardinale^20,h, A. Cardini^16, P. Carniti^21,i, L. Carson^52, K. Carvalho Akiba^2, G. Casse^54, L. Cassina^21,i, L. Castillo Garcia^41, M. Cattaneo^40, G. Cavallero^20,40,h, R. Cenci^24,t, D. Chamont^7, M. Charles^8, Ph. Charpentier^40, G. Chatzikonstantinidis^47, M. Chefdeville^4, S. Chen^56, S.F. Cheung^57, V. Chobanova^39, M. Chrzaszcz^42,27, A. Chubykin^31, X. Cid Vidal^39, G. Ciezarek^43, P.E.L. Clarke^52, M. Clemencic^40, H.V. Cliff^49, J. Closier^40, V. Coco^59, J. Cogan^6, E. Cogneras^5, V. Cogoni^16,f, L. Cojocariu^30, P. Collins^40, A. Comerma-Montells^12, A. Contu^40, A. Cook^48, G. Coombs^40, S. Coquereau^38, G. Corti^40, M. Corvo^17,g, C.M. Costa Sobral^50, B. Couturier^40, G.A. Cowan^52, D.C. Craik^52, A. Crocombe^50, M. Cruz Torres^62, S. Cunliffe^55, R. Currie^52, C. D'Ambrosio^40, F. Da Cunha Marinho^2, E. Dall'Occo^43, J. Dalseno^48, A. Davis^3, K. De Bruyn^6, S. De Capua^56, M. De Cian^12, J.M. De Miranda^1, L. De Paula^2, M. De Serio^14,d, P. De Simone^19, C.T. Dean^53, D. Decamp^4, M. Deckenhoff^10, L. Del Buono^8, H.-P. Dembinski^11, M. Demmer^10, A. Dendek^28, D. Derkach^35, O. Deschamps^5, F. Dettori^54, B. Dey^22, A. Di Canto^40, P. Di Nezza^19, H. Dijkstra^40, F. Dordei^40, M. Dorigo^41, A. Dosil Suárez^39, A. Dovbnya^45, K. Dreimanis^54, L. Dufour^43, G. Dujany^56, K. Dungs^40, P. Durante^40, R. Dzhelyadin^37, M. Dziewiecki^12, A. Dziurda^40, A. Dzyuba^31, N. Déléage^4, S. Easo^51, M. Ebert^52, U. Egede^55, V. Egorychev^32, S. Eidelman^36,w, S. Eisenhardt^52, U. Eitschberger^10, R. Ekelhof^10, L. Eklund^53, S. Ely^61, S. Esen^12, H.M. Evans^49, T. Evans^57, A. Falabella^15, N. Farley^47, S. Farry^54, R. Fay^54, D. Fazzini^21,i, D. Ferguson^52, G. Fernandez^38, A. Fernandez Prieto^39, F. Ferrari^15, F. Ferreira Rodrigues^2, M. Ferro-Luzzi^40, S. Filippov^34, R.A. Fini^14, M. Fiore^17,g, M. Fiorini^17,g, M. Firlej^28, C. Fitzpatrick^41, T. Fiutowski^28, F. Fleuret^7,b, K. Fohl^40, M. Fontana^16,40, F. Fontanelli^20,h, D.C. Forshaw^61, R. Forty^40, V. Franco Lima^54, M. Frank^40, C. Frei^40, J. Fu^22,q, W. Funk^40, E. Furfaro^25,j, C. Färber^40, A. Gallas Torreira^39, D. Galli^15,e, S. Gallorini^23, S. Gambetta^52, M. Gandelman^2, P. Gandini^57, Y. Gao^3, L.M. Garcia Martin^69, J. García Pardiñas^39, J. Garra Tico^49, L. Garrido^38, P.J. Garsed^49, D. Gascon^38, C. Gaspar^40, L. Gavardi^10, G. Gazzoni^5, D. Gerick^12, E. Gersabeck^12, M. Gersabeck^56, T. Gershon^50, Ph. Ghez^4, S. Gianì^41, V. Gibson^49, O.G. Girard^41, L. Giubega^30, K. Gizdov^52, V.V. Gligorov^8, D. Golubkov^32, A. Golutvin^55,40, A. Gomes^1,a, I.V. Gorelov^33, C. Gotti^21,i, E. Govorkova^43, R. Graciani Diaz^38, L.A. Granado Cardoso^40, E. Graugés^38, E. Graverini^42, G. Graziani^18, A. Grecu^30, R. Greim^9, P. Griffith^16, L. Grillo^21,40,i, B.R. Gruberg Cazon^57, O. Grünberg^67, E. Gushchin^34, Yu. Guz^37, T. Gys^40, C. Göbel^62, T. Hadavizadeh^57, C. Hadjivasiliou^5, G. Haefeli^41, C. Haen^40, S.C. Haines^49, B. Hamilton^60, X. Han^12, S. Hansmann-Menzemer^12, N. Harnew^57, S.T. Harnew^48, J. Harrison^56, M. Hatch^40, J. He^63, T. Head^41, A. Heister^9, K. Hennessy^54, P. Henrard^5, L. Henry^69, E. van Herwijnen^40, M. Heß^67, A. Hicheur^2, D. Hill^57, C. Hombach^56, P.H. Hopchev^41, Z.-C. Huard^59, W. Hulsbergen^43, T. Humair^55, M. Hushchyn^35, D. Hutchcroft^54, M. Idzik^28, P. Ilten^58, R. 
Jacobsson^40, J. Jalocha^57, E. Jans^43, A. Jawahery^60, F. Jiang^3, M. John^57, D. Johnson^40, C.R. Jones^49, C. Joram^40, B. Jost^40, N. Jurik^57, S. Kandybei^45, M. Karacson^40, J.M. Kariuki^48, S. Karodia^53, M. Kecke^12, M. Kelsey^61, M. Kenzie^49, T. Ketel^44, E. Khairullin^35, B. Khanji^12, C. Khurewathanakul^41, T. Kirn^9, S. Klaver^56, K. Klimaszewski^29, T. Klimkovich^11, S. Koliiev^46, M. Kolpin^12, I. Komarov^41, R. Kopecna^12, P. Koppenburg^43, A. Kosmyntseva^32, S. Kotriakhova^31, M. Kozeiha^5, L. Kravchuk^34, M. Kreps^50, P. Krokovny^36,w, F. Kruse^10, W. Krzemien^29, W. Kucewicz^27,l, M. Kucharczyk^27, V. Kudryavtsev^36,w, A.K. Kuonen^41, K. Kurek^29, T. Kvaratskheliya^32,40, D. Lacarrere^40, G. Lafferty^56, A. Lai^16, G. Lanfranchi^19, C. Langenbruch^9, T. Latham^50, C. Lazzeroni^47, R. Le Gac^6, J. van Leerdam^43, A. Leflat^33,40, J. Lefrançois^7, R. Lefèvre^5, F. Lemaitre^40, E. Lemos Cid^39, O. Leroy^6, T. Lesiak^27, B. Leverington^12, T. Li^3, Y. Li^7, Z. Li^61, T. Likhomanenko^35,68, R. Lindner^40, F. Lionetto^42, X. Liu^3, D. Loh^50, I. Longstaff^53, J.H. Lopes^2, D. Lucchesi^23,o, M. Lucio Martinez^39, H. Luo^52, A. Lupato^23, E. Luppi^17,g, O. Lupton^40, A. Lusiani^24, X. Lyu^63, F. Machefert^7, F. Maciuc^30, O. Maev^31, K. Maguire^56, S. Malde^57, A. Malinin^68, T. Maltsev^36, G. Manca^16,f, G. Mancinelli^6, P. Manning^61, J. Maratas^5,v, J.F. Marchand^4, U. Marconi^15, C. Marin Benito^38, M. Marinangeli^41, P. Marino^24,t, J. Marks^12, G. Martellotti^26, M. Martin^6, M. Martinelli^41, D. Martinez Santos^39, F. Martinez Vidal^69, D. Martins Tostes^2, L.M. Massacrier^7, A. Massafferri^1, R. Matev^40, A. Mathad^50, Z. Mathe^40, C. Matteuzzi^21, A. Mauri^42, E. Maurice^7,b, B. Maurin^41, A. Mazurov^47, M. McCann^55,40, A. McNab^56, R. McNulty^13, B. Meadows^59, F. Meier^10, D. Melnychuk^29, M. Merk^43, A. Merli^22,40,q, E. Michielin^23, D.A. Milanes^66, M.-N. Minard^4, D.S. Mitzel^12, A. Mogini^8, J. Molina Rodriguez^1, I.A. Monroy^66, S. Monteil^5, M. Morandin^23, M.J. Morello^24,t, O. Morgunova^68, J. Moron^28, A.B. Morris^52, R. Mountain^61, F. Muheim^52, M. Mulder^43, M. Mussini^15, D. Müller^56, J. Müller^10, K. Müller^42, V. Müller^10, P. Naik^48, T. Nakada^41, R. Nandakumar^51, A. Nandi^57, I. Nasteva^2, M. Needham^52, N. Neri^22,40, S. Neubert^12, N. Neufeld^40, M. Neuner^12, T.D. Nguyen^41, C. Nguyen-Mau^41,n, S. Nieswand^9, R. Niet^10, N. Nikitin^33, T. Nikodem^12, A. Nogay^68, A. Novoselov^37, D.P. O'Hanlon^50, A. Oblakowska-Mucha^28, V. Obraztsov^37, S. Ogilvy^19, R. Oldeman^16,f, C.J.G. Onderwater^70, A. Ossowska^27, J.M. Otalora Goicochea^2, P. Owen^42, A. Oyanguren^69, P.R. Pais^41, A. Palano^14,d, M. Palutan^19,40, A. Papanestis^51, M. Pappagallo^14,d, L.L. Pappalardo^17,g, C. Pappenheimer^59, W. Parker^60, C. Parkes^56, G. Passaleva^18, A. Pastore^14,d, M. Patel^55, C. Patrignani^15,e, A. Pearce^40, A. Pellegrino^43, G. Penso^26, M. Pepe Altarelli^40, S. Perazzini^40, P. Perret^5, L. Pescatore^41, K. Petridis^48, A. Petrolini^20,h, A. Petrov^68, M. Petruzzo^22,q, E. Picatoste Olloqui^38, B. Pietrzyk^4, M. Pikies^27, D. Pinci^26, A. Pistone^20,h, A. Piucci^12, V. Placinta^30, S. Playfer^52, M. Plo Casasus^39, T. Poikela^40, F. Polci^8, M. Poli Lener^19, A. Poluektov^50,36, I. Polyakov^61, E. Polycarpo^2, G.J. Pomery^48, S. Ponce^40, A. Popov^37, D. Popov^11,40, B. Popovici^30, S. Poslavskii^37, C. Potterat^2, E. Price^48, J. Prisciandaro^39, C. Prouve^48, V. Pugatch^46, A. Puig Navarro^42, G. Punzi^24,p, C. Qian^63, W. Qian^50, R. Quagliani^7,48, B. 
Rachwal^28, J.H. Rademacker^48, M. Rama^24, M. Ramos Pernas^39, M.S. Rangel^2, I. Raniuk^45,†, F. Ratnikov^35, G. Raven^44, F. Redi^55, S. Reichert^10, A.C. dos Reis^1, C. Remon Alepuz^69, V. Renaudin^7, S. Ricciardi^51, S. Richards^48, M. Rihl^40, K. Rinnert^54, V. Rives Molina^38, P. Robbe^7, A.B. Rodrigues^1, E. Rodrigues^59, J.A. Rodriguez Lopez^66, P. Rodriguez Perez^56,†, A. Rogozhnikov^35, S. Roiser^40, A. Rollings^57, V. Romanovskiy^37, A. Romero Vidal^39, J.W. Ronayne^13, M. Rotondo^19, M.S. Rudolph^61, T. Ruf^40, P. Ruiz Valls^69, J.J. Saborido Silva^39, E. Sadykhov^32, N. Sagidova^31, B. Saitta^16,f, V. Salustino Guimaraes^1, D. Sanchez Gonzalo^38, C. Sanchez Mayordomo^69, B. Sanmartin Sedes^39, R. Santacesaria^26, C. Santamarina Rios^39, M. Santimaria^19, E. Santovetti^25,j, A. Sarti^19,k, C. Satriano^26,s, A. Satta^25, D.M. Saunders^48, D. Savrina^32,33, S. Schael^9, M. Schellenberg^10, M. Schiller^53, H. Schindler^40, M. Schlupp^10, M. Schmelling^11, T. Schmelzer^10, B. Schmidt^40, O. Schneider^41, A. Schopper^40, H.F. Schreiner^59, K. Schubert^10, M. Schubiger^41, M.-H. Schune^7, R. Schwemmer^40, B. Sciascia^19, A. Sciubba^26,k, A. Semennikov^32, A. Sergi^47, N. Serra^42, J. Serrano^6, L. Sestini^23, P. Seyfert^21, M. Shapkin^37, I. Shapoval^45, Y. Shcheglov^31, T. Shears^54, L. Shekhtman^36,w, V. Shevchenko^68, B.G. Siddi^17,40, R. Silva Coutinho^42, L. Silva de Oliveira^2, G. Simi^23,o, S. Simone^14,d, M. Sirendi^49, N. Skidmore^48, T. Skwarnicki^61, E. Smith^55, I.T. Smith^52, J. Smith^49, M. Smith^55, l. Soares Lavra^1, M.D. Sokoloff^59, F.J.P. Soler^53, B. Souza De Paula^2, B. Spaan^10, P. Spradlin^53, S. Sridharan^40, F. Stagni^40, M. Stahl^12, S. Stahl^40, P. Stefko^41, S. Stefkova^55, O. Steinkamp^42, S. Stemmle^12, O. Stenyakin^37, H. Stevens^10, S. Stoica^30, S. Stone^61, B. Storaci^42, S. Stracka^24,p, M.E. Stramaglia^41, M. Straticiuc^30, U. Straumann^42, L. Sun^64, W. Sutcliffe^55, K. Swientek^28, V. Syropoulos^44, M. Szczekowski^29, T. Szumlak^28, S. T'Jampens^4, A. Tayduganov^6, T. Tekampe^10, G. Tellarini^17,g, F. Teubert^40, E. Thomas^40, J. van Tilburg^43, M.J. Tilley^55, V. Tisserand^4, M. Tobin^41, S. Tolk^49, L. Tomassetti^17,g, D. Tonelli^24, S. Topp-Joergensen^57, F. Toriello^61, R. Tourinho Jadallah Aoude^1, E. Tournefier^4, S. Tourneur^41, K. Trabelsi^41, M. Traill^53, M.T. Tran^41, M. Tresch^42, A. Trisovic^40, A. Tsaregorodtsev^6, P. Tsopelas^43, A. Tully^49, N. Tuning^43, A. Ukleja^29, A. Ustyuzhanin^35, U. Uwer^12, C. Vacca^16,f, V. Vagnoni^15,40, A. Valassi^40, S. Valat^40, G. Valenti^15, R. Vazquez Gomez^19, P. Vazquez Regueiro^39, S. Vecchi^17, M. van Veghel^43, J.J. Velthuis^48, M. Veltri^18,r, G. Veneziano^57, A. Venkateswaran^61, T.A. Verlage^9, M. Vernet^5, M. Vesterinen^12, J.V. Viana Barbosa^40, B. Viaud^7, D.  Vieira^63, M. Vieites Diaz^39, H. Viemann^67, X. Vilasis-Cardona^38,m, M. Vitti^49, V. Volkov^33, A. Vollhardt^42, B. Voneki^40, A. Vorobyev^31, V. Vorobyev^36,w, C. Voß^9, J.A. de Vries^43, C. Vázquez Sierra^39, R. Waldi^67, C. Wallace^50, R. Wallace^13, J. Walsh^24, J. Wang^61, D.R. Ward^49, H.M. Wark^54, N.K. Watson^47, D. Websdale^55, A. Weiden^42, M. Whitehead^40, J. Wicht^50, G. Wilkinson^57,40, M. Wilkinson^61, M. Williams^40, M.P. Williams^47, M. Williams^58, T. Williams^47, F.F. Wilson^51, J. Wimberley^60, M.A. Winn^7, J. Wishahi^10, W. Wislicki^29, M. Witek^27, G. Wormser^7, S.A. Wotton^49, K. Wraight^53, K. Wyllie^40, Y. Xie^65, Z. Xu^4, Z. Yang^3, Z. Yang^60, Y. Yao^61, H. Yin^65, J. Yu^65, X. Yuan^36,w, O. 
Yushchenko^37, K.A. Zarebski^47, M. Zavertyaev^11,c, L. Zhang^3, Y. Zhang^7, A. Zhelezov^12, Y. Zheng^63, X. Zhu^3, V. Zhukov^33, S. Zucchelli^15.^1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil^2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil^3Center for High Energy Physics, Tsinghua University, Beijing, China^4LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France^5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France^6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France^7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France^8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France^9I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany^10Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany^11Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany^12Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany^13School of Physics, University College Dublin, Dublin, Ireland^14Sezione INFN di Bari, Bari, Italy^15Sezione INFN di Bologna, Bologna, Italy^16Sezione INFN di Cagliari, Cagliari, Italy^17Universita e INFN, Ferrara, Ferrara, Italy^18Sezione INFN di Firenze, Firenze, Italy^19Laboratori Nazionali dell'INFN di Frascati, Frascati, Italy^20Sezione INFN di Genova, Genova, Italy^21Universita & INFN, Milano-Bicocca, Milano, Italy^22Sezione di Milano, Milano, Italy^23Sezione INFN di Padova, Padova, Italy^24Sezione INFN di Pisa, Pisa, Italy^25Sezione INFN di Roma Tor Vergata, Roma, Italy^26Sezione INFN di Roma La Sapienza, Roma, Italy^27Henryk Niewodniczanski Institute of Nuclear PhysicsPolish Academy of Sciences, Kraków, Poland^28AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland^29National Center for Nuclear Research (NCBJ), Warsaw, Poland^30Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania^31Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia^32Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia^33Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia^34Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia^35Yandex School of Data Analysis, Moscow, Russia^36Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia^37Institute for High Energy Physics (IHEP), Protvino, Russia^38ICCUB, Universitat de Barcelona, Barcelona, Spain^39Universidad de Santiago de Compostela, Santiago de Compostela, Spain^40European Organization for Nuclear Research (CERN), Geneva, Switzerland^41Institute of Physics, Ecole PolytechniqueFédérale de Lausanne (EPFL), Lausanne, Switzerland^42Physik-Institut, Universität Zürich, Zürich, Switzerland^43Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands^44Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands^45NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine^46Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine^47University of Birmingham, Birmingham, United Kingdom^48H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom^49Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom^50Department of Physics, University of Warwick, Coventry, United Kingdom^51STFC Rutherford Appleton Laboratory, Didcot, United Kingdom^52School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom^53School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom^54Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom^55Imperial College London, London, United Kingdom^56School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom^57Department of Physics, University of Oxford, Oxford, United Kingdom^58Massachusetts Institute of Technology, Cambridge, MA, United States^59University of Cincinnati, Cincinnati, OH, United States^60University of Maryland, College Park, MD, United States^61Syracuse University, Syracuse, NY, United States^62Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^2^63University of Chinese Academy of Sciences, Beijing, China, associated to ^3^64School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^3^65Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to ^3^66Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^8^67Institut für Physik, Universität Rostock, Rostock, Germany, associated to ^12^68National Research Centre Kurchatov Institute, Moscow, Russia, associated to ^32^69Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to ^38^70Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to ^43^aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil^bLaboratoire Leprince-Ringuet, Palaiseau, France^cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia^dUniversità di Bari, Bari, Italy^eUniversità di Bologna, Bologna, Italy^fUniversità di Cagliari, Cagliari, Italy^gUniversità di Ferrara, Ferrara, Italy^hUniversità di Genova, Genova, Italy^iUniversità di Milano Bicocca, Milano, Italy^jUniversità di Roma Tor Vergata, Roma, Italy^kUniversità di Roma La Sapienza, Roma, Italy^lAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland^mLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain^nHanoi University of Science, Hanoi, Viet Nam^oUniversità di Padova, Padova, Italy^pUniversità di Pisa, Pisa, Italy^qUniversità degli Studi di Milano, Milano, Italy^rUniversità di Urbino, Urbino, Italy^sUniversità della Basilicata, Potenza, Italy^tScuola Normale Superiore, Pisa, Italy^uUniversità di Modena e Reggio Emilia, Modena, Italy^vIligan Institute of Technology (IIT), Iligan, Philippines^wNovosibirsk State University, Novosibirsk, Russia^†Deceased
http://arxiv.org/abs/1704.08497v2
{ "authors": [ "LHCb collaboration", "R. Aaij", "B. Adeva", "M. Adinolfi", "Z. Ajaltouni", "S. Akar", "J. Albrecht", "F. Alessio", "M. Alexander", "S. Ali", "G. Alkhazov", "P. Alvarez Cartelle", "A. A. Alves Jr", "S. Amato", "S. Amerio", "Y. Amhis", "L. An", "L. Anderlini", "G. Andreassi", "M. Andreotti", "J. E. Andrews", "R. B. Appleby", "F. Archilli", "P. d'Argent", "J. Arnau Romeu", "A. Artamonov", "M. Artuso", "E. Aslanides", "G. Auriemma", "M. Baalouch", "I. Babuschkin", "S. Bachmann", "J. J. Back", "A. Badalov", "C. Baesso", "S. Baker", "V. Balagura", "W. Baldini", "A. Baranov", "R. J. Barlow", "C. Barschel", "S. Barsuk", "W. Barter", "F. Baryshnikov", "M. Baszczyk", "V. Batozskaya", "B. Batsukh", "V. Battista", "A. Bay", "L. Beaucourt", "J. Beddow", "F. Bedeschi", "I. Bediaga", "A. Beiter", "L. J. Bel", "V. Bellee", "N. Belloli", "K. Belous", "I. Belyaev", "E. Ben-Haim", "G. Bencivenni", "S. Benson", "S. Beranek", "A. Berezhnoy", "R. Bernet", "A. Bertolin", "C. Betancourt", "F. Betti", "M. -O. Bettler", "M. van Beuzekom", "Ia. Bezshyiko", "S. Bifani", "P. Billoir", "A. Birnkraut", "A. Bitadze", "A. Bizzeti", "T. Blake", "F. Blanc", "J. Blouw", "S. Blusk", "V. Bocci", "T. Boettcher", "A. Bondar", "N. Bondar", "W. Bonivento", "I. Bordyuzhin", "A. Borgheresi", "S. Borghi", "M. Borisyak", "M. Borsato", "F. Bossu", "M. Boubdir", "T. J. V. Bowcock", "E. Bowen", "C. Bozzi", "S. Braun", "T. Britton", "J. Brodzicka", "E. Buchanan", "C. Burr", "A. Bursche", "J. Buytaert", "S. Cadeddu", "R. Calabrese", "M. Calvi", "M. Calvo Gomez", "A. Camboni", "P. Campana", "D. H. Campora Perez", "L. Capriotti", "A. Carbone", "G. Carboni", "R. Cardinale", "A. Cardini", "P. Carniti", "L. Carson", "K. Carvalho Akiba", "G. Casse", "L. Cassina", "L. Castillo Garcia", "M. Cattaneo", "G. Cavallero", "R. Cenci", "D. Chamont", "M. Charles", "Ph. Charpentier", "G. Chatzikonstantinidis", "M. Chefdeville", "S. Chen", "S. F. Cheung", "V. Chobanova", "M. Chrzaszcz", "A. Chubykin", "X. Cid Vidal", "G. Ciezarek", "P. E. L. Clarke", "M. Clemencic", "H. V. Cliff", "J. Closier", "V. Coco", "J. Cogan", "E. Cogneras", "V. Cogoni", "L. Cojocariu", "P. Collins", "A. Comerma-Montells", "A. Contu", "A. Cook", "G. Coombs", "S. Coquereau", "G. Corti", "M. Corvo", "C. M. Costa Sobral", "B. Couturier", "G. A. Cowan", "D. C. Craik", "A. Crocombe", "M. Cruz Torres", "S. Cunliffe", "R. Currie", "C. D'Ambrosio", "F. Da Cunha Marinho", "E. Dall'Occo", "J. Dalseno", "A. Davis", "K. De Bruyn", "S. De Capua", "M. De Cian", "J. M. De Miranda", "L. De Paula", "M. De Serio", "P. De Simone", "C. T. Dean", "D. Decamp", "M. Deckenhoff", "L. Del Buono", "H. -P. Dembinski", "M. Demmer", "A. Dendek", "D. Derkach", "O. Deschamps", "F. Dettori", "B. Dey", "A. Di Canto", "P. Di Nezza", "H. Dijkstra", "F. Dordei", "M. Dorigo", "A. Dosil Suárez", "A. Dovbnya", "K. Dreimanis", "L. Dufour", "G. Dujany", "K. Dungs", "P. Durante", "R. Dzhelyadin", "M. Dziewiecki", "A. Dziurda", "A. Dzyuba", "N. Déléage", "S. Easo", "M. Ebert", "U. Egede", "V. Egorychev", "S. Eidelman", "S. Eisenhardt", "U. Eitschberger", "R. Ekelhof", "L. Eklund", "S. Ely", "S. Esen", "H. M. Evans", "T. Evans", "A. Falabella", "N. Farley", "S. Farry", "R. Fay", "D. Fazzini", "D. Ferguson", "G. Fernandez", "A. Fernandez Prieto", "F. Ferrari", "F. Ferreira Rodrigues", "M. Ferro-Luzzi", "S. Filippov", "R. A. Fini", "M. Fiore", "M. Fiorini", "M. Firlej", "C. Fitzpatrick", "T. Fiutowski", "F. Fleuret", "K. Fohl", "M. Fontana", "F. Fontanelli", "D. C. Forshaw", "R. Forty", "V. Franco Lima", "M. 
Frank", "C. Frei", "J. Fu", "W. Funk", "E. Furfaro", "C. Färber", "A. Gallas Torreira", "D. Galli", "S. Gallorini", "S. Gambetta", "M. Gandelman", "P. Gandini", "Y. Gao", "L. M. Garcia Martin", "J. García Pardiñas", "J. Garra Tico", "L. Garrido", "P. J. Garsed", "D. Gascon", "C. Gaspar", "L. Gavardi", "G. Gazzoni", "D. Gerick", "E. Gersabeck", "M. Gersabeck", "T. Gershon", "Ph. Ghez", "S. Gianì", "V. Gibson", "O. G. Girard", "L. Giubega", "K. Gizdov", "V. V. Gligorov", "D. Golubkov", "A. Golutvin", "A. Gomes", "I. V. Gorelov", "C. Gotti", "E. Govorkova", "R. Graciani Diaz", "L. A. Granado Cardoso", "E. Graugés", "E. Graverini", "G. Graziani", "A. Grecu", "R. Greim", "P. Griffith", "L. Grillo", "B. R. Gruberg Cazon", "O. Grünberg", "E. Gushchin", "Yu. Guz", "T. Gys", "C. Göbel", "T. Hadavizadeh", "C. Hadjivasiliou", "G. Haefeli", "C. Haen", "S. C. Haines", "B. Hamilton", "X. Han", "S. Hansmann-Menzemer", "N. Harnew", "S. T. Harnew", "J. Harrison", "M. Hatch", "J. He", "T. Head", "A. Heister", "K. Hennessy", "P. Henrard", "L. Henry", "E. van Herwijnen", "M. Heß", "A. Hicheur", "D. Hill", "C. Hombach", "P. H. Hopchev", "Z. -C. Huard", "W. Hulsbergen", "T. Humair", "M. Hushchyn", "D. Hutchcroft", "M. Idzik", "P. Ilten", "R. Jacobsson", "J. Jalocha", "E. Jans", "A. Jawahery", "F. Jiang", "M. John", "D. Johnson", "C. R. Jones", "C. Joram", "B. Jost", "N. Jurik", "S. Kandybei", "M. Karacson", "J. M. Kariuki", "S. Karodia", "M. Kecke", "M. Kelsey", "M. Kenzie", "T. Ketel", "E. Khairullin", "B. Khanji", "C. Khurewathanakul", "T. Kirn", "S. Klaver", "K. Klimaszewski", "T. Klimkovich", "S. Koliiev", "M. Kolpin", "I. Komarov", "R. Kopecna", "P. Koppenburg", "A. Kosmyntseva", "S. Kotriakhova", "M. Kozeiha", "L. Kravchuk", "M. Kreps", "P. Krokovny", "F. Kruse", "W. Krzemien", "W. Kucewicz", "M. Kucharczyk", "V. Kudryavtsev", "A. K. Kuonen", "K. Kurek", "T. Kvaratskheliya", "D. Lacarrere", "G. Lafferty", "A. Lai", "G. Lanfranchi", "C. Langenbruch", "T. Latham", "C. Lazzeroni", "R. Le Gac", "J. van Leerdam", "A. Leflat", "J. Lefrançois", "R. Lefèvre", "F. Lemaitre", "E. Lemos Cid", "O. Leroy", "T. Lesiak", "B. Leverington", "T. Li", "Y. Li", "Z. Li", "T. Likhomanenko", "R. Lindner", "F. Lionetto", "X. Liu", "D. Loh", "I. Longstaff", "J. H. Lopes", "D. Lucchesi", "M. Lucio Martinez", "H. Luo", "A. Lupato", "E. Luppi", "O. Lupton", "A. Lusiani", "X. Lyu", "F. Machefert", "F. Maciuc", "O. Maev", "K. Maguire", "S. Malde", "A. Malinin", "T. Maltsev", "G. Manca", "G. Mancinelli", "P. Manning", "J. Maratas", "J. F. Marchand", "U. Marconi", "C. Marin Benito", "M. Marinangeli", "P. Marino", "J. Marks", "G. Martellotti", "M. Martin", "M. Martinelli", "D. Martinez Santos", "F. Martinez Vidal", "D. Martins Tostes", "L. M. Massacrier", "A. Massafferri", "R. Matev", "A. Mathad", "Z. Mathe", "C. Matteuzzi", "A. Mauri", "E. Maurice", "B. Maurin", "A. Mazurov", "M. McCann", "A. McNab", "R. McNulty", "B. Meadows", "F. Meier", "D. Melnychuk", "M. Merk", "A. Merli", "E. Michielin", "D. A. Milanes", "M. -N. Minard", "D. S. Mitzel", "A. Mogini", "J. Molina Rodriguez", "I. A. Monroy", "S. Monteil", "M. Morandin", "M. J. Morello", "O. Morgunova", "J. Moron", "A. B. Morris", "R. Mountain", "F. Muheim", "M. Mulder", "M. Mussini", "D. Müller", "J. Müller", "K. Müller", "V. Müller", "P. Naik", "T. Nakada", "R. Nandakumar", "A. Nandi", "I. Nasteva", "M. Needham", "N. Neri", "S. Neubert", "N. Neufeld", "M. Neuner", "T. D. Nguyen", "C. Nguyen-Mau", "S. Nieswand", "R. Niet", "N. Nikitin", "T. Nikodem", "A. Nogay", "A. Novoselov", "D. P. 
O'Hanlon", "A. Oblakowska-Mucha", "V. Obraztsov", "S. Ogilvy", "R. Oldeman", "C. J. G. Onderwater", "A. Ossowska", "J. M. Otalora Goicochea", "P. Owen", "A. Oyanguren", "P. R. Pais", "A. Palano", "M. Palutan", "A. Papanestis", "M. Pappagallo", "L. L. Pappalardo", "C. Pappenheimer", "W. Parker", "C. Parkes", "G. Passaleva", "A. Pastore", "M. Patel", "C. Patrignani", "A. Pearce", "A. Pellegrino", "G. Penso", "M. Pepe Altarelli", "S. Perazzini", "P. Perret", "L. Pescatore", "K. Petridis", "A. Petrolini", "A. Petrov", "M. Petruzzo", "E. Picatoste Olloqui", "B. Pietrzyk", "M. Pikies", "D. Pinci", "A. Pistone", "A. Piucci", "V. Placinta", "S. Playfer", "M. Plo Casasus", "T. Poikela", "F. Polci", "M. Poli Lener", "A. Poluektov", "I. Polyakov", "E. Polycarpo", "G. J. Pomery", "S. Ponce", "A. Popov", "D. Popov", "B. Popovici", "S. Poslavskii", "C. Potterat", "E. Price", "J. Prisciandaro", "C. Prouve", "V. Pugatch", "A. Puig Navarro", "G. Punzi", "C. Qian", "W. Qian", "R. Quagliani", "B. Rachwal", "J. H. Rademacker", "M. Rama", "M. Ramos Pernas", "M. S. Rangel", "I. Raniuk", "F. Ratnikov", "G. Raven", "F. Redi", "S. Reichert", "A. C. dos Reis", "C. Remon Alepuz", "V. Renaudin", "S. Ricciardi", "S. Richards", "M. Rihl", "K. Rinnert", "V. Rives Molina", "P. Robbe", "A. B. Rodrigues", "E. Rodrigues", "J. A. Rodriguez Lopez", "P. Rodriguez Perez", "A. Rogozhnikov", "S. Roiser", "A. Rollings", "V. Romanovskiy", "A. Romero Vidal", "J. W. Ronayne", "M. Rotondo", "M. S. Rudolph", "T. Ruf", "P. Ruiz Valls", "J. J. Saborido Silva", "E. Sadykhov", "N. Sagidova", "B. Saitta", "V. Salustino Guimaraes", "D. Sanchez Gonzalo", "C. Sanchez Mayordomo", "B. Sanmartin Sedes", "R. Santacesaria", "C. Santamarina Rios", "M. Santimaria", "E. Santovetti", "A. Sarti", "C. Satriano", "A. Satta", "D. M. Saunders", "D. Savrina", "S. Schael", "M. Schellenberg", "M. Schiller", "H. Schindler", "M. Schlupp", "M. Schmelling", "T. Schmelzer", "B. Schmidt", "O. Schneider", "A. Schopper", "H. F. Schreiner", "K. Schubert", "M. Schubiger", "M. -H. Schune", "R. Schwemmer", "B. Sciascia", "A. Sciubba", "A. Semennikov", "A. Sergi", "N. Serra", "J. Serrano", "L. Sestini", "P. Seyfert", "M. Shapkin", "I. Shapoval", "Y. Shcheglov", "T. Shears", "L. Shekhtman", "V. Shevchenko", "B. G. Siddi", "R. Silva Coutinho", "L. Silva de Oliveira", "G. Simi", "S. Simone", "M. Sirendi", "N. Skidmore", "T. Skwarnicki", "E. Smith", "I. T. Smith", "J. Smith", "M. Smith", "l. Soares Lavra", "M. D. Sokoloff", "F. J. P. Soler", "B. Souza De Paula", "B. Spaan", "P. Spradlin", "S. Sridharan", "F. Stagni", "M. Stahl", "S. Stahl", "P. Stefko", "S. Stefkova", "O. Steinkamp", "S. Stemmle", "O. Stenyakin", "H. Stevens", "S. Stoica", "S. Stone", "B. Storaci", "S. Stracka", "M. E. Stramaglia", "M. Straticiuc", "U. Straumann", "L. Sun", "W. Sutcliffe", "K. Swientek", "V. Syropoulos", "M. Szczekowski", "T. Szumlak", "S. T'Jampens", "A. Tayduganov", "T. Tekampe", "G. Tellarini", "F. Teubert", "E. Thomas", "J. van Tilburg", "M. J. Tilley", "V. Tisserand", "M. Tobin", "S. Tolk", "L. Tomassetti", "D. Tonelli", "S. Topp-Joergensen", "F. Toriello", "R. Tourinho Jadallah Aoude", "E. Tournefier", "S. Tourneur", "K. Trabelsi", "M. Traill", "M. T. Tran", "M. Tresch", "A. Trisovic", "A. Tsaregorodtsev", "P. Tsopelas", "A. Tully", "N. Tuning", "A. Ukleja", "A. Ustyuzhanin", "U. Uwer", "C. Vacca", "V. Vagnoni", "A. Valassi", "S. Valat", "G. Valenti", "R. Vazquez Gomez", "P. Vazquez Regueiro", "S. Vecchi", "M. van Veghel", "J. J. Velthuis", "M. Veltri", "G. Veneziano", "A. 
Venkateswaran", "T. A. Verlage", "M. Vernet", "M. Vesterinen", "J. V. Viana Barbosa", "B. Viaud", "D. Vieira", "M. Vieites Diaz", "H. Viemann", "X. Vilasis-Cardona", "M. Vitti", "V. Volkov", "A. Vollhardt", "B. Voneki", "A. Vorobyev", "V. Vorobyev", "C. Voß", "J. A. de Vries", "C. Vázquez Sierra", "R. Waldi", "C. Wallace", "R. Wallace", "J. Walsh", "J. Wang", "D. R. Ward", "H. M. Wark", "N. K. Watson", "D. Websdale", "A. Weiden", "M. Whitehead", "J. Wicht", "G. Wilkinson", "M. Wilkinson", "M. Williams", "M. P. Williams", "M. Williams", "T. Williams", "F. F. Wilson", "J. Wimberley", "M. A. Winn", "J. Wishahi", "W. Wislicki", "M. Witek", "G. Wormser", "S. A. Wotton", "K. Wraight", "K. Wyllie", "Y. Xie", "Z. Xu", "Z. Yang", "Z. Yang", "Y. Yao", "H. Yin", "J. Yu", "X. Yuan", "O. Yushchenko", "K. A. Zarebski", "M. Zavertyaev", "L. Zhang", "Y. Zhang", "A. Zhelezov", "Y. Zheng", "X. Zhu", "V. Zhukov", "S. Zucchelli" ], "categories": [ "hep-ex" ], "primary_category": "hep-ex", "published": "20170427102600", "title": "Observation of charmless baryonic decays $B^0_{(s)} \\to p \\overline{p} h^+ h^{\\prime-}$" }
Michael Elkin^1 (this research was supported by the ISF grant No. 724/15) and Ofer Neiman^1 (supported in part by ISF grant No. 523/12 and by BSF grant No. 2015813)

^1 Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel.

Linear-Size Hopsets with Small Hopbound, and Distributed Routing with Low Memory

For a positive parameter β, the β-bounded distance between a pair of vertices u,v in a weighted undirected graph G = (V,E,ω) is the length of the shortest u-v path in G with at most β edges, aka hops. For β as above and ε > 0, a (β,ε)-hopset of G = (V,E,ω) is a graph G' = (V,H,ω_H) on the same vertex set, such that all distances in G are (1+ε)-approximated by β-bounded distances in G ∪ G'.

Hopsets are a fundamental graph-theoretic and graph-algorithmic construct, and they are widely used for distance-related problems in a variety of computational settings. Currently existing constructions of hopsets produce hopsets either with Ω(n log n) edges, or with a hopbound of n^Ω(1). In this paper we devise a construction of linear-size hopsets with hopbound (ignoring the dependence on ε) (log n)^{log^{(3)} n + O(1)}, where log^{(3)} denotes the three-times-iterated logarithm. This improves the previous bound almost exponentially.

We also devise efficient implementations of our construction in PRAM and distributed settings. The only existing PRAM algorithm <cit.> for computing hopsets with a constant (i.e., independent of n) hopbound requires n^Ω(1) time. We devise a PRAM algorithm with polylogarithmic running time for computing hopsets with a constant hopbound, i.e., our running time is exponentially better than the previous one. Moreover, these hopsets are also significantly sparser than their counterparts from <cit.>.

We use our hopsets to devise a distributed routing scheme that exhibits a near-optimal tradeoff between the individual memory requirement Õ(n^{1/k}) of vertices throughout the preprocessing and routing phases of the algorithm, and stretch O(k), along with a near-optimal construction time ≈ D + n^{1/2 + 1/k}, where D is the hop-diameter of the input graph. Previous distributed routing algorithms either suffered from a prohibitively large memory requirement of Ω(√n), or had a near-linear construction time, even on graphs with small hop-diameter D.

§ INTRODUCTION

§.§ Hopsets

Consider a weighted undirected graph G = (V,E,ω). Consider another graph G_H = (V,H,ω_H) on the same vertex set, that satisfies that for every (u,v) ∈ H, ω_H(u,v) ≥ d_G(u,v), where d_G(u,v) stands for the distance between u and v in G. For a positive integer parameter β and a positive parameter ε > 0, the graph G_H is called a (β,ε)-hopset of G if for every pair of vertices u,v ∈ V, the β-bounded distance d^{(β)}_{G ∪ G_H}(u,v) between u and v in G ∪ G_H is within a factor 1+ε of the distance d_G(u,v) between these vertices in G, i.e.,

d_G(u,v) ≤ d^{(β)}_{G ∪ G_H}(u,v) ≤ (1+ε) · d_G(u,v).

The β-bounded distance between u and v in G is the length of the shortest β-bounded u-v path Π_{u,v} in G, i.e., of a shortest path Π_{u,v} with at most β hops/edges.

Hopsets constitute an important algorithmic and combinatorial object, and they are extremely useful for approximate distance computations in a large variety of computational settings, including the distributed model <cit.>, parallel (PRAM) model <cit.>, streaming model <cit.>, dynamic setting <cit.>, and for routing <cit.>. The notion of hopsets was coined in a seminal paper of Cohen <cit.>.
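Before surveying prior work, the definition can be made concrete with a short sketch (purely illustrative; the function name and representation are not from the paper): β rounds of Bellman-Ford relaxation compute d^{(β)} exactly, and running the same routine on the edges of G ∪ G_H instead of G is what the hopset guarantee refers to.

```python
def beta_bounded_distances(n, edges, source, beta):
    """Exact beta-bounded distances d^(beta)(source, v) for all v.
    `edges` lists undirected weighted edges (u, v, w); round i of the
    relaxation extends the allowed hop budget from i-1 to i edges."""
    INF = float("inf")
    d = [INF] * n
    d[source] = 0.0
    for _ in range(beta):
        nd = list(d)
        for u, v, w in edges:
            if d[u] + w < nd[v]:
                nd[v] = d[u] + w
            if d[v] + w < nd[u]:
                nd[u] = d[v] + w
        d = nd
    return d

# A (beta, eps)-hopset H promises, for every pair u, v: running this
# routine on edges(G) + edges(H) returns at most (1 + eps) * d_G(u, v),
# while never undershooting the true distance d_G(u, v).
```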
(Though some first implicit constructions appeared a little bit earlier <cit.>.) In <cit.>, Cohen also devised landmark constructions of hopsets. Specifically, she showed that for any parameters κ = 1,2,…, and ε > 0, and any n-vertex graph G, there exists a (β,ε)-hopset with O(n^{1+1/κ} · log n) edges, with β = O(log n)^{O(log κ)}. Moreover, she showed that hopsets with comparable attributes can be efficiently constructed in the centralized and parallel settings. Specifically, in the centralized setting her algorithm takes an additional parameter ρ > 0, and constructs a hopset with O(n^{1+1/κ} · log n) edges in O(|E| n^ρ) time, with hopbound β = O(log n)^{O(log κ)}/ρ. In the PRAM model, her hopset has size O(n^{1+1/κ}) · O(log n)^{O(log κ/ρ)}, and β is as above, and it is constructed in roughly O(β) (i.e., polylogarithmic) time, with O(|E| · n^ρ) work. Cohen <cit.> also raised the open question of the existence and efficient constructability of hopsets with better attributes; she called it an "intriguing research problem".

In the two decades that have passed since Cohen's work <cit.>, numerous algorithms for constructing hopsets in various settings were devised <cit.>. The hopsets of <cit.> are no better than those of <cit.> in terms of their attributes. (But they are constructed in settings to which Cohen's algorithm is not known to apply.) The algorithm of <cit.> builds hopsets of size O(n), but with a large hopbound β = n^Ω(1). In <cit.>, the current authors showed that there always exist (β,ε)-hopsets of size O(n^{1+1/κ} · log n), with constant (i.e., independent of n) hopbound β = O((log κ)/ε)^{log κ + O(1)}. Abboud et al. <cit.> showed a lower bound of β = Ω(1/(ε log κ))^{log κ} on the hopbound of hopsets of size O(n^{1+1/κ}).

In the PRAM model, <cit.> showed two results. First, that for parameters ε, ρ, ζ > 0 and κ = 1,2,…, hopsets with constant hopbound β = O(log κ + 1/(ρζ))^{log κ + 2/ρ + O(1)} can be constructed in time O(n^ζ · β), using O(|E| · n^{ρ + ζ}) work. Second, <cit.> devised a PRAM algorithm with polylogarithmic time O(log n)^{log κ + 1/ρ + O(1)}, albeit with a polylogarithmic β roughly equal to the running time. The hopset's size is O(n^{1+1/κ} · log n) in the second result as well.

These results of <cit.> strictly outperformed the longstanding tradeoff of <cit.> in all regimes, and proved the existence of hopsets with constant hopbound. However, they left significant room for improvement. First, the hopsets of <cit.> have Ω(n log n) edges in all regimes. As a result, the only currently existing sparser hopsets <cit.> have a hopbound of n^Ω(1). Hence it was left open in <cit.> whether hopsets with size o(n log n) and hopbound n^{o(1)} exist. Second, the question whether hopsets with constant hopbound can be constructed in polylogarithmic PRAM time was left open in <cit.>. Indeed, the algorithm of <cit.> for constructing such hopsets requires n^Ω(1) PRAM time.

In this paper we answer both these questions in the affirmative. Specifically, we show that for any κ = 1,2,…, there exists a (β(κ,ε),ε)-hopset, for all ε > 0 simultaneously, with size O(n^{1+1/κ}) and β = β(κ,ε) = O((log κ)/ε)^{log κ + O(1)}. In particular, by setting κ = log n, we obtain a linear-size hopset, and its hopbound is β = O((loglog n)/ε)^{loglog n + O(1)}.
This is an almost exponential improvement of the previously best known upper bound (due to <cit.>) on the hopbound of linear-size hopsets.

Second, in the PRAM setting, for any κ = 1,2,…, ε > 0, and ρ > 0, our algorithm constructs hopsets of size O(n^{1+1/κ} · log^* n), with constant hopbound β = O((log κ + 1/ρ)^2)^{log κ + 1/ρ + O(1)}, in polylogarithmic time O((log n)/ε)^{log κ + 1/ρ + O(1)}, using work O(|E| · n^ρ). This is an exponential improvement of the parallel running time of the previously best-known algorithm for constructing hopsets with constant hopbound, due to <cit.>. We can also shave the log^* n factor in the size, which allows for a linear-size hopset, but then β grows to be roughly the running time. See Table <ref> for a concise comparison between existing and new results concerning hopsets in the PRAM model.

Our algorithm also provides improved results for constructing hopsets in the distributed CONGEST and Congested Clique models (see Section <ref> for the definitions of these models). In all these models, our algorithm constructs linear-size hopsets. Also, the running time of our algorithms in all these models is purely combinatorial, i.e., it does not depend on the aspect ratio Λ of the graph. [The aspect ratio Λ of a graph G is given by Λ = (max_{u,v ∈ V} d_G(u,v)) / (min_{u,v ∈ V, u ≠ v} d_G(u,v)).] In contrast, previous algorithms <cit.> for constructing hopsets in the CONGEST model all have running time proportional to log Λ.

§.§ Distributed Routing with Small Memory

The main application of our novel hopsets' construction is to the problem of distributed construction of compact routing schemes. A routing scheme has two main phases: the preprocessing phase and the routing phase. In the preprocessing phase, each vertex is assigned a routing table and a routing label. [In this paper we only consider labeled, or name-dependent, routing, in which vertices are assigned labels by the scheme. There is also a large body of literature on name-independent routing schemes; cf. <cit.> and the references therein. However, a lower bound <cit.> shows that constructing a name-independent routing scheme with stretch ρ requires Ω̃(n/ρ^2) time in the CONGEST model. See also <cit.> for lower bounds on the communication complexity of the preprocessing phase of distributed routing.] In the routing phase, a vertex u gets a message M with a short header Header(M) and with a destination label Label(v) of a vertex v, and based on its routing table Table(u), on Label(v), and on Header(M), the vertex u decides to which neighbor x ∈ Γ(u) to forward the message M, and which header to attach to the message. The stretch of a routing scheme is the worst-case ratio between the length of a path on which a message M travels, and the graph distance between the message's origin and destination.

Due to both its theoretical and practical appeal, routing is a central problem in distributed graph algorithms <cit.>. A landmark routing scheme was devised in <cit.>. For an integer k ≥ 1, the stretch of their scheme is 4k-5, the tables are of size O(n^{1/k} log^{1-1/k} n), the labels are of size O(k log n), and the headers are of size O(log n). Chechik <cit.> improved this result, and devised a scheme with stretch 3.68k, with the other parameters as in <cit.>.

An active thread of research <cit.> focuses on efficient implementation of the preprocessing phase of routing in the distributed CONGEST model, i.e., on computing compact tables and short labels that enable future low-stretch routing; a toy illustration of such tables and labels on a tree is sketched below.
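The following sketch (a simplified interval-routing variant, not the Thorup-Zwick scheme itself, whose tables are compressed to O(1) words; all names are illustrative) shows how DFS-interval labels let each vertex of a tree forward a message using only its own table and the destination's label.

```python
def build_tree_routing(adj, root):
    """Assign DFS-interval labels and per-vertex tables on a tree.
    adj: adjacency dict of an undirected tree."""
    label, table, clock = {}, {}, [0]

    def dfs(u, parent):
        enter = clock[0]; clock[0] += 1
        table[u] = {"parent": parent, "children": []}
        for v in adj[u]:
            if v != parent:
                dfs(v, u)
                table[u]["children"].append((label[v][0], label[v][1], v))
        label[u] = (enter, clock[0])   # subtree of u owns [enter, clock)

    dfs(root, None)
    return label, table

def next_hop(table_u, label_u, label_v):
    """Forwarding decision of u toward the destination labeled label_v."""
    if label_v == label_u:
        return None                    # message has arrived
    for lo, hi, child in table_u["children"]:
        if lo <= label_v[0] < hi:
            return child               # destination lies in this subtree
    return table_u["parent"]           # otherwise route toward the root
```

Here the tables hold one interval per child; the point of the tree-routing schemes discussed below is to compress this information down to O(1) words per vertex.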
This preprocessing problem was raised in a seminal paper by Awerbuch, Bar-Noy, Linial and Peleg <cit.>, who devised a routing scheme with stretch 2^{O(k)}, overall memory requirement Õ(n^{1+1/k}), [The Õ(f(n))-notation hides factors polylogarithmic in f(n).] individual memory requirement Õ(deg(v) + n^{1/k}) for a vertex v, and construction time Õ(n^{1+1/k}) (in the CONGEST model). The "individual memory requirement" parameter encapsulates the routing tables and labels, and the memory used while computing the tables and labels.

Lenzen and Patt-Shamir <cit.> devised a distributed routing scheme (based on <cit.>) with stretch 4k-3+o(1), tables of size O(n^{1/k} log n), labels of size O(k log n), individual memory requirement Õ(n^{1/k}), and construction time Õ(S + n^{1/k}), where S is the shortest-path diameter of the input graph G, i.e., the maximum number of hops in a shortest path between a pair of vertices in G. Though S is often much smaller than n, it is desirable to evaluate complexity measures of distributed algorithms in terms of n and D, where D is the hop-diameter of G, defined as the maximum distance between a pair of vertices u,v in the underlying unweighted graph of G. Typically, we have D ≪ S ≪ n, and it is always the case that D ≤ S ≤ n. (See Peleg's book <cit.> for a comprehensive discussion.)

Lenzen and Patt-Shamir <cit.> also devised a routing scheme with tables of size Õ(n^{1/2+1/k}), labels of size O(log n · log k), stretch at most O(k log k), and running time Õ(n^{1/2+1/k} + D) rounds. They (based on <cit.>) also showed a lower bound of Ω̃(D + √n) on the time required to construct a routing scheme. In a follow-up paper, <cit.> showed how to improve the stretch of the above scheme to O(k). The main drawback of this result is the prohibitively large size of the routing tables. (The individual memory requirement is consequently prohibitively large as well.) They also exhibited a different tradeoff that overcame the issue of large routing tables. They devised an algorithm that produces routing tables of size O(n^{1/k} · log^2 n), labels of size O(k log^2 n) and stretch 4k-3+o(1), albeit with a sub-optimal running time of Õ(min{(nD)^{1/2} · n^{1/k}, n^{2/3} + D}) · log Λ, and no guarantee on the individual memory requirement during the preprocessing phase. In <cit.>, the current authors improved the bounds of <cit.>. In the current state-of-the-art scheme <cit.>, the stretch is 4k-5+o(1), the tables and labels are of the same size as in <cit.> (i.e., O(n^{1/k} log^2 n) and O(k log^2 n), respectively), and the construction time is O((n^{1/2+1/k} + D) · min{(log n)^{O(k)}, 2^{Õ(√(log n))}} · log Λ). (A similar, though slightly weaker, result was achieved by <cit.>.) Still, there is no meaningful guarantee on the individual memory requirement in the preprocessing phase. See Table <ref> for a concise summary of existing bounds, and a comparison with our new results.

To summarize, all currently existing distributed routing algorithms with nearly-optimal running time ≈ D + n^{1/2+1/k} suffer from three issues. First, they provide no meaningful guarantee on the individual memory requirement of vertices in the preprocessing phase; second, their preprocessing time is not purely combinatorial, but rather depends linearly on log Λ; and third, their table and label sizes are roughly O(log n) off from the respective table and label sizes of Thorup-Zwick's sequential construction <cit.>. The issue of individual memory requirement was indeed explicitly raised by Lenzen <cit.> in a private communication with the authors.
He wrote (the stress on "during" is in the original): "One annoying thing about this is that there is a huge amount of storage required *during* the construction. It seems odd that the nodes can hold only small tables, but should have large memory during the construction. I think it's an interesting question whether we can have a good construction using Õ(n^{1/k}) memory only. A hop set may be the wrong option for this, because reflecting the distance structure of the skeleton accurately cannot be done by a sparse graph; on the other hand, maybe there's some distributed representation cleverly distributing the information over the graph nodes?"

Based on our novel hopsets' construction, we devise an algorithm that addresses all these issues. Specifically, the stretch of our scheme is 4k-5+o(1), and the sizes of tables and labels essentially match the respective sizes of Thorup-Zwick's construction, i.e., they are O(n^{1/k} log n) and O(k log n), respectively. Our construction time is Õ(n^{1/2+1/k} + D) · (log n)^{O(k)}, i.e., it is purely combinatorial. Most importantly, the individual memory requirement is at most Õ(n^{1/k}). Moreover, we can reduce the running time to Õ(n^{1/2+1/k} + D) · min{(log n)^{O(k)}, 2^{Õ(√(log n))}}, while the individual memory increases slightly to max{Õ(n^{1/k}), 2^{Õ(√(log n))}}. In particular, we can have a polylogarithmic individual memory requirement and construction time O((n^{1/2} + D) · n^ε), for an arbitrarily small constant ε > 0.

Distributed Tree Routing: An important ingredient in the existing distributed routing schemes <cit.> for general graphs, and in our new routing scheme, is a distributed tree routing scheme. Thorup and Zwick <cit.> showed that with routing tables of size O(1) and labels of size O(log n), one can have an exact (i.e., no stretch) tree routing. <cit.> showed that in Õ(D + √n) time, one can construct an exact tree routing with tables and labels of size O(log n) and O(log^2 n), respectively, i.e., there is an overhead of log n in both parameters with respect to Thorup-Zwick's sequential construction. In this paper we improve this result, and devise an Õ(D + √n)-time algorithm that constructs tree-routing tables and labels of sizes that match the sequential construction of Thorup and Zwick, i.e., of size O(1) and O(log n), respectively. Moreover, if one is interested in a scheme that always routes via the root of the tree, as is the case in the application to routing in general networks, then our algorithm for constructing tables and labels that supports this requires only a small (O(log n)) individual memory in each vertex.

§.§ Technical Overview

Cohen's algorithm <cit.> is based on a subroutine for constructing pairwise covers <cit.>, i.e., collections of small-radii clusters with small maximum overlap (no vertex belongs to too many clusters). The algorithm is a top-down recursive procedure: it interconnects large clusters of the cover via hopset edges, and recurses in small clusters. To keep the overall overlap of all recursion levels in check, Cohen used the radius parameter O(log n) for the covers. This resulted in a hopbound that is at least polylogarithmic in n. Cohen's hopset is also built separately for each distance scale [2^i, 2^{i+1}), i = 0,1,…,log Λ, and the ultimate hopset is the union of all these single-scale hopsets.

The hopset construction of <cit.> (due to the current authors) also builds a single-scale hopset for each distance scale, and then takes their union as an ultimate hopset.
The construction of single-scale hopsets in <cit.> is based upon ideas from the construction of (1+ϵ,β)-spanners of <cit.> for unweighted graphs. It starts with a partition 𝒫_0 = {{v} | v ∈ V} of the vertex set V into singleton clusters, and alternates superclustering and interconnection steps. In a superclustering step some of the clusters of the current partition are merged into larger clusters (of 𝒫_i+1), while the other clusters are interconnected with one another via hopset edges. This approach (of <cit.>) enabled us to prove the existence of hopsets with constant hop-bound, but it appears to be incapable of producing hopsets of size o(n log n). Indeed, even if each single-scale hopset is of linear size (which is indeed the case in <cit.>), their union is doomed to be of size Ω(n log n). Moreover, in parallel and distributed settings, one produces a hopset of scale i+1 based upon a hopset of scale i. This results in accumulation of stretch from 1+ϵ to (1+ϵ)^log n. To alleviate this issue, one needs to rescale ϵ. However, then the hopbound grows from constant to polylogarithmic. To get around this, <cit.> used a smaller number of scales, and this indeed enables <cit.> to construct hopsets with constant hopbound in these settings, albeit the running time becomes proportional to the ratio between consecutive scales, i.e., it becomes n^Ω(1). The closest to our current construction of hopsets is the line of research of <cit.>, which is based on a construction of distance oracles due to Thorup and Zwick <cit.>. To construct their oracles, <cit.> used a hierarchy of sets V = A_0 ⊇ A_1 ⊇ … ⊇ A_κ-1 ⊇ A_κ = ∅, where each vertex of A_i, for all i = 0,1,…,κ-2, is sampled with probability n^-1/κ for inclusion into A_i+1. For each vertex u ∈ V, <cit.> defined, for every i = 0,1,…,κ-1, the ith pivot p_i(u) to be its closest vertex in A_i, the ith bunch B_i(u) = {v ∈ A_i | d_G(u,v) < d_G(u,A_i+1)}∪{p_i+1(u)} (for i=κ-1, let {p_κ(u)} = ∅), and the entire bunch B(u) = ⋃_i=0^κ-1 B_i(u). They also defined the dual sets, clusters, C(v) = {u | v ∈ B(u)}. Bernstein and others <cit.> used this construction with κ = Θ̃(√(log n)), and built Thorup-Zwick clusters with respect to 2^Õ(√(log n))-bounded distances. As a result, they obtained a so-called 2^Õ(√(log n))-bounded hopset, i.e., a hopset which takes care only of pairs of vertices u,v ∈ V that admit a 2^Õ(√(log n))-bounded shortest path. They then used this bounded hopset in a certain recursive fashion (see the so-called hop reduction of Nanongkai <cit.>) to obtain their ultimate hopset. Thorup-Zwick's construction with κ = Θ̃(√(log n)) alone introduces into the hopset n · 2^Θ̃(√(log n)) edges, and thus such a hopset cannot be very sparse. In addition, the recursive application of the hop reduction technique results in a hopbound of 2^Ω̃(√(log n)). Our construction of hopsets is based upon a construction of Thorup-Zwick's emulators[A graph G' = (V,E',ω') is called a sublinear-error emulator of an unweighted graph G = (V,E) if for every pair of vertices u,v ∈ V, we have d_G(u,v) ≤ d_G'(u,v) ≤ d_G(u,v) + α(d_G(u,v)) for some sub-linear stretch function α. If G' is a subgraph of G, it is called a sublinear-error spanner of G.], from a different paper by Thorup and Zwick <cit.>. Specifically, to obtain the hierarchy V = A_0 ⊇ A_1 ⊇ … ⊇ A_logκ-1 ⊇ A_logκ = ∅, one samples each vertex of A_i, for i=0,1,…,logκ-2, with probability roughly n^-2^i/κ for inclusion in A_i+1.
Then one defines the bunch of a vertex u ∈ A_i as B(u) = {v ∈ A_i | d_G(v,u) < d_G(v,A_i+1)}∪{p_i+1(u)}, and sets H = ⋃_u ∈ V{(u,v) | v ∈ B(u)} . For unweighted graphs G, <cit.> showed that H given by (<ref>) is an additive emulator with stretch α(d)=O(logκ· d^1-1/(logκ-1)) and O(logκ· n^1+1/κ) edges. By a different proof argument, we show that the very same construction also provides a (β,ϵ)-hopset of the same size with β=O(logκ/ϵ)^logκ-1, for all ϵ>0 simultaneously. Moreover, by adjusting the sampling probabilities, we also shave the logκ factor from both the hopset's and the emulator's size, while increasing the exponent of β by 1. (This also gives rise to the first linear-size emulator with sub-linear additive stretch for unweighted graphs.) As a result, we obtain a construction of hopsets which is by far simpler than the previous Thorup-Zwick-based constructions of hopsets <cit.>. As was discussed above, it also provides hopsets with much better parameters, and it is more adaptable to efficient implementation in various computational settings. Our construction is also much simpler than the constructions of <cit.>, which are not based on Thorup-Zwick's hierarchy. Parallel and distributed implementations of our hopset's construction proceed in scales ℓ = 0,1,…,log n, where on scale ℓ the algorithm constructs a 2^ℓ-bounded hopset. This is different from the situation in <cit.>, where scale ℓ takes care of distances in the range [2^ℓ,2^ℓ+1). An important advantage of this is that we no longer need to take the union of all single-scale hopsets into our hopset; rather, we just take the largest-scale hopset as our ultimate hopset. This saves a factor of log n in the size, and enables us to construct linear-size hopsets in parallel and distributed settings. The fact that we do not work with distance scales, but rather with hop-distance scales, makes it possible to avoid the dependence on logΛ in the distributed construction time, and to achieve a purely combinatorial running time. All previous distributed algorithms for constructing approximate hopsets <cit.> have running time proportional to logΛ. (A distributed construction of an exact hopset <cit.> by the first-named author, however, avoids this dependence too. Alas, it has a much higher running time.) The fact that the construction's scales are with respect to hop-distances, as opposed to actual distances, enables us to essentially avoid accumulation of error. This is done by the following recursive procedure. First, we build 2^ℓ-bounded hopsets H^(ℓ), ℓ = 0,1,…,log n, and set H(1) = H^(log n) to be the highest-scale hopset. This process involves accumulation of stretch, and after rescaling the stretch parameter ϵ, the hopset H(1) ends up having polylogarithmic hopbound β_1. Its construction time is roughly β_1, i.e., polylogarithmic as well. Now we add the hopset into the original graph, and recurse on G ∪ H(1). Note that now we only need to process logβ_1 ≈ loglog n scales, rather than log n ones. Hence the accumulation of stretch in the resulting hopset H(2) is much milder than in H(1). As a result, after rescaling the stretch parameter ϵ, the hopbound β_2 of H(2) is roughly poly(loglog n). By repeating this recursive process for log^* n iterations, we eventually achieve a hopset with constant hop-bound in parallel polylogarithmic time.
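To illustrate the shape of this recursion (a rough sketch of our own, with constants suppressed and k, ϵ treated as fixed; the precise recursion is analyzed in sec:pram): β_1 ≈ (log n)^O(k), β_2 ≈ (logβ_1)^O(k) ≈ (loglog n)^O(k), and in general β_i+1 ≈ (logβ_i)^O(k), so each iteration replaces log n by one more iterated logarithm, and after roughly log^* n iterations the hopbound stabilizes at a quantity depending only on k and ϵ.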
As was mentioned above, this dramatically improves the n^Ω(1) parallel time required in <cit.> to construct a hopset with constant hopbound. In the context of distributed routing, as in <cit.>, our hopset H is constructed on top of a virtual (aka skeleton) graph G' = (V',E'), where V' is a collection of ≈√(n) vertices, sampled from the original vertex set V independently with probability ≈ n^-1/2. There is an edge (u',v') ∈ E' iff there is an Õ(√(n))-bounded u'-v' path in G. However, since we aim to design an algorithm in which vertices employ only a small memory during the preprocessing phase, we cannot afford to compute the virtual graph G'. Rather, somewhat surprisingly, we show that the hopset H for G' can be constructed without ever constructing G' itself! We only compute those edges of G' that are required for constructing the hopset H. Note, however, that unlike a spanner or a low-stretch spanning tree, a hopset is always used in conjunction with the graph for which it was constructed. In other words, to compute the Thorup-Zwick routing scheme for the virtual graph G', we conduct Bellman-Ford explorations in G' ∪ H. So, at first glance, it seems necessary to eventually compute G', to be able to conduct these Bellman-Ford explorations. We cut this Gordian knot by computing only those edges of G' that are really needed for computing either the hopset H or the TZ routing scheme for G' ∪ H. This turns out to be (typically) a small fraction of the edges of G', and those edges can be computed much more efficiently than the entire G', and using much smaller memory. This idea also enables us to compute the lengths of these edges of G' precisely, as opposed to approximately, as was done in <cit.>. This simplifies the analysis of the resulting scheme. We note that the idea of computing a hopset H without first computing the underlying virtual graph G', and conducting Bellman-Ford explorations in G' ∪ H without ever computing G' in its entirety, appeared in a recent work <cit.> by the first-named author. <cit.> constructed an exact hopset of <cit.> with a polynomial hopbound. However, the exact hopset is a much simpler structure than the small-hopbound approximate hopset that we construct here. Showing that this idea is applicable to our new approximate small-hopbound hopset is technically substantially more challenging. Another crucial idea that we employ to guarantee a small individual memory requirement is ensuring that our hopset H has small arboricity, i.e., that its edges can be oriented in such a way that every vertex has only a small out-degree. This out-degree is proportional to the ultimate individual memory requirement of our algorithm. §.§ Organization Our linear-size hopsets appear in sec:hop, and the distributed constructions in sec:CC for the Congested Clique and sec:CONGEST for the CONGEST model. The hopsets in the PRAM model are presented in sec:pram. Finally, in sec:route we describe our distributed routing scheme with small memory, and the distributed tree routing in sec:tree-route. § LINEAR SIZE HOPSETS Let G=(V,E) be a weighted graph, and fix a parameter k≥ 1. Let ν=1/(2^k-1) (one should think of κ=1/ν). Let V=A_0⊇ A_1⊇…⊇ A_k=∅ be a sequence of sets, such that for all 0≤ i<k-1, A_i+1 is created by sampling every element from A_i independently with probability p_i=n^-2^i·ν.[Our definition is slightly different from that of <cit.>, which used p_i=|A_i|/n^1+ν, but it gives rise to the same expected size of A_i.
We use our version since it allows efficient implementation in various models of computation.] It follows that for 0≤ i≤ k-1 we have N_i:=𝔼[|A_i|]=n·∏_j=0^i-1p_j=n^1-(2^i-1)ν ,and in particular N_k-1=n^(1+ν)/2. For every 0≤ i≤ k-1 and every vertex u∈ A_i∖ A_i+1, define the pivot p(u)∈ A_i+1 as a vertex satisfying d_G(u,A_i+1)=d_G(u,p(u)) (note that p(u) does not exist for u∈ A_k-1), and define the bunch B(u)={v∈ A_i :  d_G(u,v)<d_G(u,A_i+1)}∪{p(u)} . The hopset is created by taking H={(u,v) : u∈ V, v∈ B(u)}, where the length of the edge (u,v) is set as d_G(u,v). As argued in <cit.>, for any 0≤ i≤ k-2 and u∈ A_i∖ A_i+1, the size of B(u) is bounded by a random variable sampled from a geometric distribution with parameter p_i (this corresponds to the first vertex of A_i, when ordered by distance to u, that is included in A_i+1). Hence 𝔼[|B(u)|]≤ 1/p_i=n^2^i·ν. For u∈ A_k-1 we have 𝔼[|B(u)|]=N_k-1=n^(1+ν)/2. The expected size of the hopset H is at most ∑_i=0^k-2(N_i· n^2^i·ν) +N_k-1· N_k-1= k· n^1+ν . The following lemma bounds the number of hops and the stretch of H. Recall that d^(t)_G(u,v) is the length of the shortest path between u,v in G that consists of at most t edges. Fix any 0<δ<1/(8k) and any x,y∈ V. Then for every 0≤ i≤ k-1, at least one of the following holds:
* d_G∪ H^((3/δ)^i)(x,y)≤ (1+8δ i)· d_G(x,y).
* There exists z∈ A_i+1 such that d_G∪ H^((3/δ)^i)(x,z)≤ 2d_G(x,y).
The proof is by induction on i. We start with the base case i=0. If it is the case that y∈ B(x), then we added the edge (x,y) to the hopset, i.e., d_H^(1)(x,y)=d_G(x,y), and so the first item holds. Otherwise, consider the case that x∈ A_1: then we can take z=x, so the second item holds trivially. The remaining case is that x∈ A_0∖ A_1 and y∉ B(x), so by definition of B(x) we get that d_G(x,y)≥ d_G(x,A_1). By taking z=p(x), there is a single edge between x,z in H of length d_G(x,z)=d_G(x,A_1)≤ d_G(x,y), which satisfies the second item. Assume the claim holds for i, and we prove it for i+1. Consider the path π(x,y) between x,y in G, and partition it into J≤ 1/δ segments {L_j=[u_j,v_j]}_j∈[J], each of length at most δ· d_G(x,y), and at most 1/δ edges {(v_j,u_j+1)}_j∈[J] of G between these segments. This can be done as follows: define u_1=x, and for j≥ 1, walk from u_j on π(x,y) (towards y) until the point v_j, which is the vertex such that the next edge would take us to distance greater than δ· d_G(x,y) from u_j (or until we reach y). By definition, d_G(u_j,v_j)≤δ· d_G(x,y). Define u_j+1 to be the neighbor of v_j on π(x,y) that is closer to y (if it exists). If u_j+1 does not exist (which can happen only when v_j=y) then define u_j+1=y and J=j. Observe that for all 1≤ j≤ J-1, d_G(u_j,u_j+1)>δ· d_G(x,y), so indeed J≤ 1/δ. We use the induction hypothesis for all pairs (u_j,v_j) with parameter i. Consider first the case that the first item holds for all of them, that is, d_G∪ H^((3/δ)^i)(u_j,v_j)≤ (1+8δ i)· d_G(u_j,v_j). Then we take the path in G∪ H that consists of the (3/δ)^i-hop paths between each pair u_j,v_j, and the edges (v_j,u_j+1) of G. Since (3/δ)^i+1≥ (1/δ)·(3/δ)^i+1/δ, we have d_G∪ H^((3/δ)^i+1)(x,y)≤∑_j∈[J](d_G∪ H^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1))≤(1+8δ i)· d_G(x,y) . The second case is that there are pairs (u_j,v_j) for which only the second item holds. Let l∈ [J] (resp., r∈[J]) be the first (resp., last) index for which the first item does not hold for the pair (u_l,v_l) (resp., (u_r,v_r)).
Then there are z_l,z_r∈ A_i+1 such that d_G∪ H^((3/δ)^i)(u_l,z_l)≤ 2d_G(u_l,v_l)  and  d_G∪ H^((3/δ)^i)(v_r,z_r)≤ 2d_G(u_r,v_r). (Note that we used v_r rather than u_r in the second inequality. This is valid since the lemma's assertion also applies to the pair (v_r,u_r); as the first item is symmetric with respect to u_r,v_r, it fails for the pair (v_r,u_r) as well, so the second item yields a vertex z_r close to v_r.) Consider now the case that z_r∈ B(z_l). In this case we added the edge (z_l,z_r) to the hopset, and by the triangle inequality, d_H^(1)(z_l,z_r)=d_G(z_l,z_r)≤ d^((3/δ)^i)_G∪ H(u_l,z_l)+d_G(u_l,v_r)+d^((3/δ)^i)_G∪ H(z_r,v_r) . Next, apply the inductive hypothesis on the segments {L_j} for j<l and j>r, and in between use the detour via u_l,z_l,z_r,v_r. Since l≤ r, there is at least one segment we skipped, so the total number of hops is bounded by (1/δ-1)·(3/δ)^i+1/δ+2(3/δ)^i+1. (The additive term of 1/δ accounts for the edges (v_j,u_j+1), 0 ≤ j ≤ J-1.) This expression is at most (3/δ)^i+1 whenever δ<1/2. It follows that d_G∪ H^((3/δ)^i+1)(x,y)≤ ∑_j=1^l-1[d_G∪ H^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)]+d^((3/δ)^i)_G∪ H(u_l,z_l)+  d_H^(1)(z_l,z_r) + d^((3/δ)^i)_G∪ H(z_r,v_r)+d_G^(1)(v_r,u_r+1)+ ∑_j=r+1^J[d_G∪ H^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)](<ref>)≤(1+8δ i)d_G(x,u_l)+d_G(u_l,v_r)+(1+8δ i)d_G(v_r,y) +2d^((3/δ)^i)_G∪ H(u_l,z_l)+2d^((3/δ)^i)_G∪ H(z_r,v_r)(<ref>)≤ 8δ· d_G(x,y)+(1+8δ i)d_G(x,y)= (1+8δ(i+1))· d_G(x,y) . This demonstrates that item 1 holds in this case. The final case to consider is that z_r∉ B(z_l). Assume first that z_l∉ A_i+2. Then taking z=p(z_l)∈ A_i+2, the definition of B(z_l) implies that d_G(z_l,z)≤ d_G(z_l,z_r). We now claim that item 2 holds for such a choice of z. Indeed, since (z_l,z)∈ H, we have d_G∪ H^((3/δ)^i+1)(x,z) ≤ ∑_j=1^l-1[d_G∪ H^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)]+d^((3/δ)^i)_G∪ H(u_l,z_l)+d_H^(1)(z_l,z)≤ (1+8δ i)d_G(x,u_l)+d^((3/δ)^i)_G∪ H(u_l,z_l) + d_G(z_l,z_r)(<ref>)≤ (1+8δ i)d_G(x,u_l)+2d^((3/δ)^i)_G∪ H(u_l,z_l)+d_G(u_l,v_r)+ d^((3/δ)^i)_G∪ H(z_r,v_r)(<ref>)≤ (1+8δ i)d_G(x,v_r)+6δ· d_G(x,y)≤ 2d_G(x,y), where the last inequality used that δ<1/(8k). The case that z_l∈ A_i+2 is simpler, since we may take z=z_l. Fix any 0<ϵ<1, and apply the lemma to any pair x,y with δ=ϵ/(8k) and i=k-1. It must be that the first item holds (since A_k=∅). Hence we have that d_G∪ H^(β)(x,y)≤ (1+ϵ)d_G(x,y) , where the number of hops is given by β=(24k/ϵ)^k-1. We derive the following theorem. For any weighted graph G=(V,E) on n vertices, and any k≥ 1, there exists H of size at most O(k· n^1+1/(2^k-1)), which is a (β,ϵ)-hopset for any 0<ϵ<1 with β=O(k/ϵ)^k-1. §.§ Improved Hopset Size Here we show how to remove the k factor from the hopset size, at the cost of increasing the exponent of β by an additive 1. Note that we may assume w.l.o.g. that k≤loglog n-1, as for larger values of k, both β and the size of the hopset (which becomes O(kn)) grow with k. We will increase the number of sets by 1, and sample V=A'_0⊇ A'_1⊇…⊇ A'_k+1=∅ using the following probabilities: p'_i=n^-2^i·ν· 2^2^i-1 (the restriction on k ensures p'_i<1). Now for 0≤ i≤ k, N'_i:=𝔼[|A'_i|]=n·∏_j=0^i-1p'_j=n^1-(2^i-1)ν· 2^2^i-i-1 ,and in particular N'_k≤ 2^2^k-k≤ n^1/2. The expected size of H becomes at most ∑_i=0^k-1(N'_i/p'_i) +N'_k· N'_k≤∑_i=0^k-1 (n^1+ν/2^i)+n≤ 3n^1+ν . The hopset construction and the stretch analysis in lem:main remain essentially the same; since there is now one additional sampled set, the exponent of β grows by an additive 1.
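As a quick sanity check on the new probabilities (a worked instantiation, our own): since (2^k-1)·ν=1, the exponent of n in N'_k vanishes, i.e., N'_k=2^2^k-k-1≤ 2^2^k; and the assumption k≤loglog n-1 gives 2^k≤ (log n)/2, whence 2^2^k≤ n^1/2, which is the bound N'_k≤ n^1/2 stated above.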
For any weighted graph G=(V,E) on n vertices and any k≥ 1, there exists H of size at most O(n^1+1/(2^k-1)), which is a (β,ϵ)-hopset for any 0<ϵ<1 with β=O(k/ϵ)^k. Since our construction is based on the <cit.> emulator construction, following their analysis we obtain an emulator with additive stretch that can have linear size. For any unweighted graph G=(V,E) on n vertices and any k≥ 1, there exists an emulator H of size at most O(n^1+1/(2^k-1)), with additive stretch O(k· d^1-1/k) for pairs at distance d. §.§ Efficient Implementation We consider the construction and notation of sec:smaller-size, with the slightly stronger assumption k≤loglog n-2. It was (implicitly) shown in <cit.> that for any 0≤ i< k, the sets B(u) (and the corresponding distances) for all u∈ A'_i∖ A'_i+1 can be computed in expected time O(|E|+nlog n)/p'_i. (In fact, in <cit.>, the set B(u) contains more vertices, not only those in A'_i. However, we can remove the extra vertices easily.) The running time becomes larger as i grows, and in order to keep it under control, we use the method of <cit.>: introduce a parameter 2ν<ρ<1, and redefine the probabilities as follows. Set i_0=⌊log(ρ/ν)⌋ and i_1=i_0+1+⌈1/ρ⌉. For 0≤ i≤ i_0, let p'_i=n^-2^i·ν· 2^2^i-1 as in Section <ref>. Set also p'_i_0+1=n^-ρ/2, and for the remaining levels i_0+2≤ i≤ i_1, set p'_i=n^-ρ. Finally, let A'_i_1+1=∅. Note that for 0≤ i≤ i_0+1, we have that N'_i=n∏_j=0^i-1p'_j=n^1-(2^i-1)ν· 2^2^i-i-1 . In particular, using that ρ/ν≤ 2^i_0+1≤ 2ρ/ν, we get N'_i_0+1≤ n^1-(ρ/ν-1)ν· 2^2ρ/ν≤ n^1+ν-ρ/2 . (The last inequality uses that k≤loglog n-2. Thus 1/ν=2^k-1≤log n/4, and so 2^2ρ/ν≤ n^ρ/2.) Thus for any i≥ i_0+2 we see that N'_i= N'_i_0+1∏_j=i_0+1^i-1p'_j(<ref>)≤ n^1+ν-ρ/2· n^-ρ/2· n^-(i-1-(i_0+1))·ρ=n^1+ν· n^-(i-(i_0+1))·ρ . The expected number of edges inserted until phase i_0+1 is at most ∑_i=0^i_0N'_i/p'_i(<ref>)≤∑_i=0^i_0n^1-(2^i-1)ν· 2^2^i-i-1· n^2^i·ν/2^2^i-1=∑_i=0^i_0n^1+ν/2^i≤ 2n^1+ν . The expected number of edges at phase i_0+1 is bounded by N'_i_0+1/p'_i_0+1(<ref>)≤ n^1+ν-ρ/2· n^ρ/2=n^1+ν . The remaining phases until i_1 introduce at most ∑_i=i_0+2^i_1-1N'_i/p'_i(<ref>)≤∑_i=i_0+2^i_1-1n^1+ν· n^-(i-(i_0+1))·ρ· n^ρ≤ n^1+ν·∑_i=0^∞ n^-iρ≤ 2n^1+ν ,as this summation converges. Finally, since N'_i_1≤ n^1+ν· n^-(i_1-(i_0+1))·ρ≤ n^ν ,the last phase i_1 contributes at most N'_i_1· N'_i_1≤ n^2ν≤ n^1+ν edges to the hopset. We conclude that the total number of edges is O(n^1+ν). Recall that the expected running time of the Dijkstra explorations at level i<i_1 is O(|E|+nlog n)/p'_i. Thus the expected running time of the first i_0 levels converges to O(|E|+nlog n)· n^ρ, while each of the at most ⌈1/ρ⌉+1 remaining levels will also take O(|E|+nlog n)· n^ρ time. The final level is expected to take O(|E|+nlog n)· n^ν as well, since there are in expectation O(n^ν) vertices in A'_i_1 from which Dijkstra is performed. The price we pay is a higher number of sets, which increases the exponent of β by at most an additive 1/ρ.
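Before summarizing with the formal statement, the following is a minimal sequential sketch of the construction in Python (our own illustration, not the implementation analyzed above): for readability it uses the probabilities p'_i of sec:smaller-size rather than the ρ-staged ones, and it computes every bunch by a full Dijkstra from each vertex instead of the truncated explorations that yield the stated expected running time.

import heapq
import random

def dijkstra(adj, src):
    # Exact single-source distances; adj maps u -> list of (v, weight).
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def build_hopset(adj, k):
    # Hierarchy V = A'_0 ⊇ ... ⊇ A'_{k+1} = ∅, with p'_i = n^{-2^i ν} · 2^{2^i - 1}
    # and ν = 1/(2^k - 1); bunches B(u) and pivots as defined above.
    n = len(adj)
    nu = 1.0 / (2 ** k - 1)
    A = [set(adj)]
    for i in range(k):
        p = min(1.0, n ** (-(2 ** i) * nu) * 2 ** (2 ** i - 1))
        A.append({v for v in A[-1] if random.random() < p})
    A.append(set())  # A'_{k+1} = ∅
    H = {}
    for i in range(k + 1):
        for u in A[i] - A[i + 1]:
            dist = dijkstra(adj, u)  # brute force; the paper truncates this search
            d_next = min((dist.get(v, float('inf')) for v in A[i + 1]),
                         default=float('inf'))
            for v in A[i]:  # vertices of A'_i strictly closer to u than A'_{i+1}
                if v != u and dist.get(v, float('inf')) < d_next:
                    H[(u, v)] = dist[v]
            if d_next < float('inf'):  # add the pivot edge (u, p(u))
                pivot = min(A[i + 1], key=lambda p_u: dist.get(p_u, float('inf')))
                H[(u, pivot)] = d_next
    return H  # hopset edges, each of length exactly d_G(u, v)

In expectation the returned edge set has size O(n^1+ν), as derived in sec:smaller-size.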
The following result summarizes the discussion. For any weighted graph G=(V,E) on n vertices, and any 1<k≤loglog n-2, 0<ρ<1, there is a randomized algorithm that runs in expected time O(|E|+nlog n)· n^ρ/ρ, and computes an edge set H of size at most O(n^1+1/(2^k-1)). This edge set H is a (β,ϵ)-hopset for any 0<ϵ<1, where β=O((k+1/ρ)/ϵ)^k+1/ρ+1 . By substituting κ=2^k-1, we obtain an improved version of the hopsets of <cit.>, where both the size of the hopset and the running time are smaller by a factor of log n, while the other parameters remain the same. Another notable advantage is that it yields a single hopset that works for all 0<ϵ<1 simultaneously. For any weighted graph G=(V,E) on n vertices, and any κ≥ 1, 0<ρ<1, there is a randomized algorithm that runs in expected time O(|E|+nlog n)· n^ρ/ρ, and computes H of size at most O(n^1+1/κ), which is a (β,ϵ)-hopset for any 0<ϵ<1, where β=O((logκ+1/ρ)/ϵ)^logκ+1/ρ+1 . § DISTRIBUTED MODELS We will consider two standard models in distributed computing: the Congested Clique model and the CONGEST model. In both models every vertex of an n-vertex graph G = (V,E) hosts a processor, and the processors communicate with one another in discrete rounds, via short messages. Each message is allowed to contain an identity of a vertex, an edge weight, a distance in the graph, or anything else of no larger (up to a fixed constant factor) size.[Typically, in the CONGEST model only messages of size O(log n) bits are allowed, but edge weights are restricted to be at most polynomial in n. Our definition is geared to capture a more general situation, in which there is no restriction on the aspect ratio. Hence results achieved in our more general model are more general than previous ones.] The local computation is assumed to require zero time, and we are interested in algorithms that run for as few rounds as possible. In the Congested Clique model, we assume that all vertices are interconnected via direct edges, while in the CONGEST model, every vertex can send messages only to its G-neighbors (the weight of edges is irrelevant to the communication time). §.§ Congested Clique Model We first show how to construct the hopset in the Congested Clique model. In order to avoid a high number of rounds when computing distances for determining the bunches {B(u)}, we build the hopset in log n levels, where the level-ℓ hopset only "takes care" of pairs that have a shortest path with at most 2^ℓ hops. This is somewhat different from previous works <cit.>, in which the level-ℓ hopset handled pairs with distance in the range [2^ℓ,2^ℓ+1]. A few advantages of our current approach: it easily avoids the dependency on the ratio between the largest and smallest distance, and also the final hopset is just the level-(log n) one, so we can obtain a linear-size hopset (unlike previous works, which took the union of all levels). There are a few technical difficulties in implementing the algorithm of sec:hop in a distributed setting. The first is that the <cit.> method for computing the bunches B(u) was to compute their "inverses" – called clusters.[The cluster C(v) is defined as follows: u∈ C(v) iff v∈ B(u).] Alas, it is not known how to compute these clusters in a distributed manner when errors are allowed. Rather, we compute the bunches directly, and to avoid the potentially large congestion (a vertex may be a part of many bunches, and needs to send messages for all of them), we replace the bunches with half-bunches (i.e., taking only points closer than half the distance to the pivot). See below for the formal definition.
The second issue (of congestion) is more subtle, and arises since hop-bounded distances do not obey the triangle inequality (for instance, d_G^(1)(u,w) can be much larger than d_G^(1)(u,v)+d_G^(1)(v,w), when the two-edge path through v is shorter than any single edge between u and w). For the stretch analysis to go through, we need the weight of each hopset edge to be bounded by the length of a certain path between the endpoints of the edge (see (<ref>)). In order to ensure that this happens, we build each hopset for level ℓ gradually, i.e., the bunches are created first for A_0∖ A_1, then for A_1∖ A_2 and so on, where each time the partial hopset is added to the graph on which we compute distances. We say that H is a (β,ϵ,t)-hopset if G∪ H provides a (1+ϵ) approximation with at most β hops for all pairs x,y∈ V such that d_G(x,y)=d_G^(t)(x,y) (i.e., the pairs that have a shortest path consisting of at most t edges between them). Note that the empty set is a (1,0,1)-hopset (and thus also a (β,ϵ,1)-hopset for all β≥ 1 and ϵ>0). Given a (β,ϵ_ℓ-1,2^ℓ-1)-hopset H^(ℓ-1), we build a (β,ϵ_ℓ,2^ℓ)-hopset H^(ℓ), where 1+ϵ_ℓ=(1+ϵ)^ℓ for some 0<ϵ<1/(5log n). The final hopset will be H^(log n). (We stress that the previous hopsets H^(1),…,H^(ℓ-1) are only used to compute H^(ℓ), and are not contained in it.) Observe that H^(ℓ-1) is a (2β,ϵ_ℓ-1,2^ℓ)-hopset, since every path with at most 2^ℓ hops can be partitioned into two paths of at most 2^ℓ-1 hops each, and H^(ℓ-1) provides a 1+ϵ_ℓ-1 approximation with β hops for each of these. It follows that for any x,y∈ V, d_G∪ H^(ℓ-1)^(2β)(x,y)≤(1+ϵ_ℓ-1)· d_G^(2^ℓ)(x,y) . We sample sets V=A_0⊇ A_1⊇…⊇ A_k'+1=∅ as in sec:implem, where k'=i_1≤ k+1/ρ+1 is the number of sets. We introduce a subtle change to the construction – in the previous section we defined for each u∈ A_i∖ A_i+1 a set B(u), and added the edges (u,v) for all v∈ B(u), simultaneously for all 0≤ i≤ k'. Here we shall build the hopset H=H^(ℓ) gradually: for each i=0,1,…,k' we define a set of edges H_i=H_i^(ℓ) corresponding to the bunches of vertices in V∖ A_i+1, and finally take H=H_k'. Fix some 0≤ i≤ k', and assume we have built the set H_i-1 (for i=0 define H_-1=∅). We shall work in the graph G_i, defined by G_i=G∪ H^(ℓ-1)∪ H_i-1 . The algorithm consists of two stages. In the first stage, for 8β rounds, run a Bellman-Ford exploration in G_i rooted at A_i+1, to obtain for each u∈ V the value d̂(u,A_i+1)=d_G_i^(8β)(u,A_i+1). (If some vertex v∈ V was not found in the exploration, then we set d̂(v,A_i+1)=∞.) Also, for each vertex u∈ A_i∖ A_i+1 with d̂(u,A_i+1)<∞, store p(u) as a vertex p∈ A_i+1 satisfying d̂(u,A_i+1)=d_G_i^(8β)(u,p). To determine the bunches (this is the second stage), each vertex u∈ A_i∖ A_i+1 conducts another Bellman-Ford exploration in the graph G_i rooted at u, this time for only 4β rounds, i.e., half the number of hops 8β of the first exploration, to distance less than d̂(u,A_i+1)/2 (i.e., the messages whose origin is u contain the value d̂(u,A_i+1), and only vertices within this distance from u forward the message in the next round). Define the half-bunch B(u)={v∈ A_i :  d_G_i^(4β)(u,v)<d̂(u,A_i+1)/2}∪{p(u)}. Let H_i=H_i-1∪⋃_u∈ A_i∖ A_i+1{(u,v) : v∈ B(u)} ,and set the weight of the edge (u,v) as the distance discovered in the exploration (i.e., d_G_i^(4β)(u,v) for v∈ B(u) and d_G_i^(8β)(u,p(u)) for the pivot). 𝔼[|H|]≤ O(n^1+ν). The argument is essentially the same as the one in sec:implem. The only difference is that when analyzing, in step 0≤ i<k', the expected size of a bunch, i.e., 𝔼[|B(u)|] for some u∈ A_i∖ A_i+1, we consider the ordering on V given by the 4β-bounded distance from u in the graph G_i, i.e., according to d_G_i^(4β)(u,·).
Then the size of B(u) is bounded by the index of the first vertex in this ordering that is included in A_i+1. Since every v∈ A_i is included in A_i+1 independently with probability p'_i, we have that 𝔼[|B(u)|]≤ 1/p'_i. In fact, B(u) may have a smaller size than the first index included in A_i+1, since we use more hops for computing distances to pivots (which reduces the distance threshold for being in B(u)), and since we only take into the bunch points that are at less than half the distance to the pivot. Finally, for the last level k' we have that for u∈ A_k', 𝔼[|B(u)|]≤𝔼[|A_k'|](<ref>)≤ n^ν. Combining this with the bounds on N'_i=𝔼[|A'_i|] in sec:implem, we can bound the size of H in the same manner. In fact, since |B(u)| is stochastically bounded by a geometric distribution with parameter p'_i≥ n^-ρ, it follows that with high probability, for all v∈ V, |B(v)|≤ 4n^ρ·ln n . The number of rounds required is whp O(n^ρ· k'·log^2n·β). The sampling of the sets A_i is done independently for each vertex, and therefore requires no communication. For each 1≤ℓ≤log n and 0≤ i< k', we conduct a single Bellman-Ford exploration in G_i rooted at A_i+1 for 8β rounds. Since in the Congested Clique model all edges are present, this requires O(β) rounds per exploration (every vertex sends just a single message to all its neighbors every round). The more expensive step is the explorations to range 4β rooted at each u∈ A_i∖ A_i+1. The number of rounds in these explorations is affected by the number of messages a vertex v∈ V needs to forward to its neighbors in each round. In what follows we prove that with high probability this number is at most O(n^ρlog n) for every v∈ V. Fix 0≤ i<k'. Consider v∈ V, and order the vertices of A_i according to their 4β-bounded distance to v in G_i, that is, according to d_G_i^(4β)(v,·). Since p'_i≥ n^-ρ, the probability that none of the first 2n^ρln n vertices in that ordering is sampled to A_i+1 is at most (1-n^-ρ)^2n^ρln n≤ 1/n^2 . So by the union bound over the n vertices, with high probability, for each v∈ V at least one of the first 2n^ρln n vertices in its ordering of A_i is sampled to A_i+1. Denote by z∈ A_i+1 the first vertex in the ordering of v that was chosen to A_i+1. We claim that no vertex u∈ A_i that appears after z in the ordering of v will cause v to forward messages concerning B(u). This is because in the first stage we performed the Bellman-Ford exploration rooted at A_i+1 for 8β rounds. Thus d̂(u,A_i+1)≤ d_G_i^(8β)(u,z)≤ d_G_i^(4β)(u,v)+d_G_i^(4β)(v,z)≤ 2d_G_i^(4β)(u,v) ,where the last inequality uses the assumption that u appears after z in v's ordering. We obtained d_G_i^(4β)(u,v)≥d̂(u,A_i+1)/2. So by the definition of a half-bunch, v∉ B(u), and thus v will not forward u's messages. We still have to argue about the last level i=k' (since no vertex is chosen to A_k'+1). Recall that the expected size of A_k' is bounded by n^ν, as shown in (<ref>). It can be easily checked that whp |A_k'|≤ O(n^ν·log n)≤ O(n^ρ·log n). (Recall that ρ≥ 2ν.) We conclude that whp, every vertex needs to send at most O(n^ρlog n) messages to implement a single step of Bellman-Ford. There are O(β) rounds for each ℓ=1,…,log n and each 0≤ i≤ k', so the total number of rounds required is O(n^ρ· k'·log^2n·β). Next, we prove an analogue of lem:main for the distributed setting. There are several subtle differences, described in the beginning of this section, so we provide a complete proof that addresses these subtleties. Fix any 0<δ<1/(15k'), set β=(3/δ)^k', and let x,y∈ V be such that d_G(x,y)=d_G^(2^ℓ)(x,y).
Then for every 0≤ i≤ k', at least one of the following two assertions holds:
* d_G∪ H_i^((3/δ)^i)(x,y)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,y).
* There exists z∈ A_i+1 such that d_G∪ H_i^((3/δ)^i)(x,z)≤ 3d_G(x,y).
The proof is by induction on i. We start with the base case i=0. If it is the case that x∈ A_1, then we can take z=x and the second item holds trivially. Otherwise, consider the case that x∈ A_0∖ A_1 and y∈ B(x). Then we added an edge (x,y) to H_0 of weight d_G_0^(4β)(x,y) (the case that y=p(x) is similar, replacing 4β by 8β). Recall that G_0=G∪ H^(ℓ-1). Hence d_H_0^(1)(x,y)= d_G_0^(4β)(x,y) ≤ d_G_0^(2β)(x,y) (<ref>)≤ (1+ϵ_ℓ-1)· d_G^(2^ℓ)(x,y) =(1+ϵ_ℓ-1)· d_G(x,y)  ,so the first item holds. The last case is that x∈ A_0∖ A_1 and y∉ B(x). By the definition of H_0, it must be that d̂(x,A_1)≤ 2d_G_0^(4β)(x,y) . Since p(x)∈ B(x), the edge (x,p(x)) is in the hopset, and its weight is d^(8β)_G_0(x,p(x)) = d̂(x,A_1)(<ref>)≤ 2d_G_0^(4β)(x,y)≤ 2(1+ϵ_ℓ-1)· d_G(x,y)<3d_G(x,y) ,where for the last two inequalities we again used (<ref>) and the fact that 1+ϵ_ℓ-1< 3/2. (The latter holds since we assume ϵ<1/(5log n) and ℓ≤log n, so (1+ϵ_ℓ)=(1+ϵ)^ℓ<e^1/5.) This proves the second item with z=p(x)∈ A_1. Assume the claim holds for i, and we prove it for i+1. Consider the shortest path π(x,y) between x,y in G that contains at most 2^ℓ edges, and partition it into J ≤ 1/δ segments {L_j=[u_j,v_j]}_j∈[J] as in the proof of lem:main. We use the induction hypothesis for all pairs (u_j,v_j) with parameter i. (By virtue of lying on a shortest path that has at most 2^ℓ edges, all these pairs satisfy d_G^(2^ℓ)(u_j,v_j)=d_G(u_j,v_j).) Consider first the case that the first item holds for all of them, that is, d_G∪ H_i^((3/δ)^i)(u_j,v_j)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(u_j,v_j). Then we take the path in G∪ H_i that consists of the (3/δ)^i-hop paths between each pair u_j,v_j, and the edges (v_j,u_j+1) of G. Since by (<ref>), H_i⊆ H_i+1, we have d_G∪ H_i+1^((3/δ)^i+1)(x,y)≤∑_j∈[J](d_G∪ H_i^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1))≤(1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,y) ,which concludes the proof for the first case. The second case is that there are pairs (u_j,v_j) for which only the second item holds. Let l∈ [J] (resp., r∈[J]) be the first (resp., last) index for which the first item does not hold for the pair (u_l,v_l) (resp., (u_r,v_r)). Then there are z_l,z_r∈ A_i+1 such that d_G∪ H_i^((3/δ)^i)(u_l,z_l)≤ 3d_G(u_l,v_l)  and   d_G∪ H_i^((3/δ)^i)(v_r,z_r)≤ 3d_G(u_r,v_r) . Consider first the case that z_l∈ A_i+2. Then we take z=z_l, and derive d_G∪ H_i+1^((3/δ)^i+1)(x,z) ≤ ∑_j=1^l-1(d_G∪ H_i^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1))+d_G∪ H_i^((3/δ)^i)(u_l,z_l)(<ref>)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,u_l) + 3d_G(u_l,v_l)≤3d_G(x,y) ,where in the second inequality we used that the first item holds for all intervals until the l-th one, and in the final one that 1+ϵ_ℓ-1<3/2 and 1+12δ i<2. From now on assume z_l∈ A_i+1∖ A_i+2. Recall that the Bellman-Ford explorations that constructed H_i+1 were conducted in the graph G_i+1=G∪ H^(ℓ-1)∪ H_i. These explorations were conducted to hop-depth 8β in the first stage, and 4β in the second.
This allows us to provide the following bound: d_G_i+1^(4β)(z_l,z_r)≤ d_G∪ H_i^(β)(z_l,u_l)+d_G∪ H^(ℓ-1)^(2β)(u_l,v_r)+d_G∪ H_i^(β)(v_r,z_r)(<ref>)≤ d_G∪ H_i^((3/δ)^i)(z_l,u_l)+(1+ϵ_ℓ-1)· d_G(u_l,v_r)+d_G∪ H_i^((3/δ)^i)(v_r,z_r) . Here the first inequality follows by the triangle inequality; the second uses that (3/δ)^i≤β, that u_l,v_r lie on a shortest path with at most 2^ℓ hops, and that H^(ℓ-1) is a (2β,ϵ_ℓ-1,2^ℓ)-hopset. Consider the case that z_r∈ B(z_l). Then we have a hopset edge (z_l,z_r) that was introduced in H_i+1. In particular, since we used 4β steps in the exploration from z_l, we have that d_H_i+1^(1)(z_l,z_r)= d_G_i+1^(4β)(z_l,z_r)(<ref>)≤ d_G∪ H_i^((3/δ)^i)(z_l,u_l)+(1+ϵ_ℓ-1)· d_G(u_l,v_r)+d_G∪ H_i^((3/δ)^i)(v_r,z_r) . Next, apply the inductive hypothesis on the segments {L_j} for j<l and j>r, and in between use the detour via u_l,z_l,z_r,v_r. Since there are at most 1/δ-1 intervals for which we use the first item in the inductive hypothesis, the total number of hops we will need is at most (1/δ-1)·(3/δ)^i+1/δ+2(3/δ)^i+1. This is at most (3/δ)^i+1 whenever δ<1/2. It follows that d_G∪ H_i+1^((3/δ)^i+1)(x,y)≤ ∑_j=1^l-1[d_G∪ H_i^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)]+d^((3/δ)^i)_G∪ H_i(u_l,z_l)+ d_H_i+1^(1)(z_l,z_r)  + d^((3/δ)^i)_G∪ H_i(z_r,v_r)+d_G^(1)(v_r,u_r+1)+ ∑_j=r+1^J[d_G∪ H_i^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)](<ref>)≤(1+ϵ_ℓ-1)·[(1+12δ i)· d_G(x,u_l)+d_G(u_l,v_r)+(1+12δ i)· d_G(v_r,y)]  +2d^((3/δ)^i)_G∪ H_i(u_l,z_l)+2d^((3/δ)^i)_G∪ H_i(z_r,v_r)(<ref>)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,y)+12δ· d_G(x,y)≤ (1+ϵ_ℓ-1)·(1+12δ(i+1))· d_G(x,y) . In the penultimate inequality we used that both d_G(u_l,v_l),d_G(u_r,v_r)≤δ· d_G(x,y). This demonstrates that item 1 holds in this case. The final case to consider is that z_r∉ B(z_l) (and z_l∈ A_i+1∖ A_i+2). Let z=p(z_l)∈ A_i+2. Since z_r∈ A_i+1, the definition of B(z_l) implies that d^(1)_H_i+1(z_l,z)=d̂(z_l,A_i+2)≤ 2d_G_i+1^(4β)(z_l,z_r) . (Recall that G_i+1 = G ∪ H^(ℓ-1)∪ H_i.) We now claim that item 2 holds for such a choice of z. Indeed, by (<ref>), we have 3 · d_G ∪ H_i^(3/δ)^i(u_l,z_l) + 2 · d_G ∪ H_i^(3/δ)^i(v_r,z_r)  ≤  15 · d_G(x,y) . Hence, d_G∪ H_i+1^((3/δ)^i+1)(x,z) ≤ ∑_j=1^l-1[d_G∪ H_i^((3/δ)^i)(u_j,v_j)+d_G^(1)(v_j,u_j+1)]+d^((3/δ)^i)_G∪ H_i(u_l,z_l)+d^(1)_H_i+1(z_l,z)(<ref>)∧(<ref>)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,u_l)+d^((3/δ)^i)_G∪ H_i(u_l,z_l) + 2·[d_G∪ H_i^((3/δ)^i)(z_l,u_l)+(1+ϵ_ℓ-1)· d_G(u_l,v_r)+d_G∪ H_i^((3/δ)^i)(v_r,z_r)](<ref>)≤ (1+ϵ_ℓ-1)·(1+12δ i)· d_G(x,u_l) +15δ· d_G(x,y) +2(1+ϵ_ℓ-1)· d_G(u_l,v_r)≤3d_G(x,y) ,where in the last inequality we used that δ<1/(15k'), k'≥ 2 and 1+ϵ_ℓ-1<e^1/5<5/4, so that both (1+ϵ_ℓ-1)·(1+12δ i)+15δ≤ 3 and 2(1+ϵ_ℓ-1)+15δ≤ 3. Taking δ=ϵ/(15k') and picking i=k', the second item of lem:dist-main cannot hold for any x,y∈ V (because A_k'+1=∅), so we have, for every x,y∈ V such that d_G^(2^ℓ)(x,y)=d_G(x,y), that d_G∪ H^(ℓ)^(β)(x,y)≤ (1+ϵ_ℓ-1)·(1+ϵ)· d_G^(2^ℓ)(x,y)=(1+ϵ_ℓ)· d_G(x,y) . Recall that β = (3/δ)^k' = (45k'/ϵ)^k'. Rescaling ϵ'=ϵ/log n and taking ℓ=log n, we derive the following theorem. For any weighted graph G=(V,E) on n vertices, an integer k> 1, and parameters 0<ρ<1, 0<ϵ<1/5, there is a distributed algorithm in the Congested Clique model running in Õ(n^ρ·β) rounds, that computes a (β,ϵ)-hopset H of size at most O(n^1+1/(2^k-1)), where β=O((k+1/ρ)·log n/ϵ)^k+1/ρ+1 . We note that by (<ref>) and claim:mem, the memory requirement from every vertex is Õ(n^ρ).
This is because the latter shows that this is a bound on the number of messages every vertex needs to send in each round, and the former indicates that whp storing B(v) for any v∈ V requires only this much space. We remark that one can achieve β independent of n either by applying the construction recursively, as we do in sec:pram for the parallel implementation, or by using an idea from <cit.>. We next describe the latter: fix a parameter t, and use the hopset H^(ℓ) to compute the hopset H^(ℓ+t); since H^(ℓ) is also a (2^t·β,ϵ,2^ℓ+t)-hopset, we need explorations to range 2^t·β in order for an appropriate variant of (<ref>) to hold. There will be only (log n)/t levels until H^(log n) is built, so we gain a factor of t in β. We derive the following result. For any weighted graph G=(V,E) on n vertices, integers k> 1, t≥ 1, and parameters 0<ρ<1, 0<ϵ<1/5, there is a distributed algorithm in the Congested Clique model that runs in Õ(n^ρ·β· 2^t/t) rounds, and computes H of size at most O(n^1+1/(2^k-1)), which is a (β,ϵ)-hopset, where β=O((k+1/ρ)·log n/(t·ϵ))^k+1/ρ . In particular, taking t=ρlog n and rescaling ρ'=2ρ gives the following. For any weighted graph G=(V,E) on n vertices, an integer k> 1, and parameters 0<ρ<1/2, 0<ϵ<1/5, there is a distributed algorithm in the Congested Clique model that runs in Õ(n^ρ·β) rounds, and computes H of size at most O(n^1+1/(2^k-1)), which is a (β,ϵ)-hopset, where β=O((k+1/ρ)/(ρ·ϵ))^k+2/ρ . §.§ CONGEST Model Given a weighted graph G=(V,E,w) representing the network, in the CONGEST model we will be interested in a setting where there is a "virtual" graph G'=(V',E',w') embedded in G, i.e., V'⊆ V. We would like to construct a hopset for G'. This is motivated by distributed applications of hopsets for approximate shortest paths computation, distance estimation and routing <cit.>, which require a hopset for a virtual graph embedded in the underlying network in the above way. In a similar manner to <cit.>, we can adapt our algorithm in the Congested Clique model to the CONGEST model. The following lemma provides a way to perform Bellman-Ford explorations using small memory. Let G”=(V',E'∪ H) be a virtual graph on m vertices embedded in a graph G=(V,E) of hop-diameter D, such that edges in E' correspond to B-bounded distances in G, and H has arboricity α (i.e., one can orient the edges of H so that every vertex has out-degree at most α). Moreover, every vertex v' ∈ V' knows its at most α outgoing edges in H. Then one can compute β iterations of Bellman-Ford in G” in the CONGEST model within O(m·α+B+D)·β·log n rounds, so that every vertex requires only O(αlog n) memory. To implement a single iteration of the Bellman-Ford exploration, every vertex v∈ V', which holds a current distance estimate, will need to communicate it to its neighbors in G”. First, it will initiate an exploration in G for B rounds. In each round, every vertex u∈ V will forward the smallest value it received so far. This guarantees that if {v,w}∈ E', then w will receive v's message (or a smaller value). We now have to handle the edges of H. Let T be a spanning tree of G with hop-depth D. Every v∈ V' will broadcast via T its value to the entire graph, and will also send all the existing edges of H incident on it that v knows about. All vertices w∈ V' that know of a hopset edge {v,w} (or that learn about it from v's message) will update their value accordingly. Since there are O(m·α) messages, this can be done in O(m·α+D) rounds.
In order to guarantee small internal memory, each v selects at random a number from {1,2,…,m·α} for each message it sends, as the round in which to start its broadcast (clearly this increases the number of rounds by at most m·α). Since each message of v will reach every vertex of T at most once, the probability that some u∈ V receives t messages in a single round is at most (m·α choose t)· 1/(m·α)^t≤ (e/t)^t. Thus, with high probability, no vertex will receive more than O(log n) messages in each round. By increasing the number of rounds by a factor of O(log n), whp there will be no congestion. The total number of rounds required is thus O(m·α+B+D)·β·log n. We now show how to use lem:BF-small to construct a hopset for G', in the setting where E' are edges corresponding to B=Õ(m)-bounded distances in G (without computing G' explicitly). Recall that in the ith iteration of constructing H=H^(ℓ), we have already built the previous hopset H^(ℓ-1) and the partial hopset H_i-1. Since we desire limited memory, every vertex v stores only the "outgoing" hopset edges, those to vertices in its bunch B(v). Recall that by (<ref>), whp |B(v)|≤ O(m^ρ·log n) for all v∈ V'. We work in the graph G_i=G'∪ H^(ℓ-1)∪ H_i-1. In order to implement the O(β)-bounded exploration rooted at A_i+1 (the first stage of the ith iteration), we simply apply lem:BF-small on G_i with α=O(m^ρ·log n). The explorations from vertices of A_i∖ A_i+1 (the second stage of the ith iteration) are done in a similar manner. However, there is a larger congestion than in the first stage, due to the multiple sources of limited explorations. Recall that in the limited exploration whose origin is v∈ A_i∖ A_i+1, each intermediate node x∈ V' forwards the message iff its current estimate is strictly less than d̂(v,A_i+1)/2 (this value is part of the message v sends). We enforce the exact same rule for vertices u∈ V as well. If a message concerning v should pass in G' from x to its neighbor y, then all vertices on the B-bounded path in G that implements the edge (x,y)∈ E' will have estimates smaller than that of y, and therefore will forward the message on. In the proof of claim:mem we saw that each x∈ V' participates whp in at most O(m^ρ·log n) explorations for each iteration i. The argument is identical for u∈ V as well, so the congestion induced in the first stage of lem:BF-small (the exploration in G for B rounds) by multiple sources is only O(m^ρ·log n). Note that in the second phase (broadcasting the edges of H), the number of messages increases to O(m·α+m· m^ρ·log n). Thus, the total number of rounds required is still Õ(m^1+ρ+D)·β. We summarize the discussion with the following result. For any weighted graph G=(V,E) with hop-diameter D, an integer k> 1, parameters 0<ρ<1, 0<ϵ<1/5, and (an implicit) virtual graph G'=(V',E') embedded in G on |V'|=m vertices, there is a distributed algorithm in the CONGEST model that runs in Õ(m^1+ρ+D)·β rounds and computes H, which is a (β,ϵ)-hopset for G', of size at most O(m^1+1/(2^k-1)), where β=O((k+1/ρ)·log m/ϵ)^k+1/ρ+1 . In the case that E' corresponds to B=Õ(m)-bounded distances in G, the hopset can be computed so that every vertex has internal memory Õ(m^ρ). Path-reporting Hopsets: Every hopset edge is implemented via some path in G. For our application to routing, we would like every vertex on a path implementing a certain hopset edge to be aware of this hopset edge.
This means that for every hopset edge (x,y)∈ H, there exists a path P in G of length w_H(x,y), and every vertex u∈ P knows about the hopset edge, the distances d_P(u,x), d_P(u,y), and its neighbors on P. It was shown in <cit.> how to adapt the Bellman-Ford exploration so that path information can be stored as well, at the cost of increasing the size of messages by a factor of O(β). However, there was no guarantee on the number of hopset edges a vertex u∈ V can be a part of, which can be devastating when one desires small memory per vertex. We now describe an approach that eliminates the need for the increase in message size, and also ensures that each vertex belongs to a bounded number of paths that implement hopset edges. The issue that may cause a vertex to be in a path for many hopset edges is that we use previous hopsets to construct a new one. Then the vertices implementing paths in these previous hopsets may not be discovered by the current explorations, so the argument of claim:mem bounding the number of explorations that visit a certain vertex does not apply as is. In order to guarantee that every u∈ V will need to store information for only Õ(m^ρ) hopset edges, we need to slightly change the construction. First, we will define H^(ℓ)=H_k'∪ H^(ℓ-1), so that every hopset will contain all the previous hopsets. (Recall that in our algorithm in Section <ref> that computes non-path-reporting hopsets, we only used lower-scale hopsets to compute a higher-scale one. Once the mission of the lower-scale hopsets was completed, they were ruthlessly erased.) Second, rather than performing the exploration from A_i+1 in 8β steps, we apply lem:BF-small with 8β· k'·log n+1 steps of Bellman-Ford. Note that in the proof of correctness we only used that there are at least 8β steps. Using more steps will only increase the number of rounds (by a poly-logarithmic factor). Recall that when computing the hopset H^(ℓ) at phase i, we have already computed H_i-1, and work in the graph G_i=G'∪ H^(ℓ-1)∪ H_i-1. We can now argue that whp, there will not be too many hopset edges whose path in G contains u. The intuition is that the exploration from A_i+1 has sufficiently many hops to discover this u, and so an argument similar to that of claim:mem will apply. Fix u∈ V, and order the vertices of A_i in increasing order according to their distance to u, where the distance from v∈ A_i to u is the length of the shortest path consisting of at most 4β· k'·log n edges of G_i and then at most B edges of G. Let z be the first vertex in that order that is included in A_i+1. We claim that the vertex u cannot belong to a path P that implements a hopset edge (x,y) such that x∈ A_i∖ A_i+1 appears after z in the ordering of u. Consider how the path P is built. One can initially start with Q={(x,y)}, and then recursively replace the hopset edge in Q that contains u with the 4β-bounded path in some G'_j that induces it. Note that this recursion depth is at most k'log n, thus Q has at most 4β· k'log n edges. Since G_i contains all the edges of all previous hopsets, the exploration from A_i+1 starting at z for 8β· k'·log n+1 steps would have reached u after 4β· k'·log n+1 edges of G_i (the B edges of G form an edge of G', and thus of G_i as well), and then after an additional 4β· k'·log n edges of G_i, it would surely have reached y (because z is closer to u than x).
We conclude that d̂(y,A_i+1)≤ d_G_i^(8β· k'·log n+1)(y,z)≤ d_P(y,u)+d̂(u,A_i+1)≤ d_P(y,x)+d_P(u,x)≤ 2d_G_i^(4β)(x,y) ,which is a contradiction to the fact that y joins B(x). Next, we have to show that each u will indeed learn the relevant information on all hopset edges it implements. Assume inductively that for any hopset edge (x,y)∈ H^(ℓ-1)∪ H_i-1, if P is the path in G that implements this edge, then every u∈ P knows about the edge, d_P(u,x), d_P(u,y), and its neighbors on P. A new hopset edge (x,y)∈ H_i is created whenever the exploration rooted at x∈ A_i∖ A_i+1 discovers a vertex y∈ A_i. Recall that this exploration is done in G_i for 4β rounds. Whenever y joins B(x), it will send an acknowledgement on the 4β-bounded path back to x in G_i (every vertex discovered by x takes note of its "parent", the vertex that sent it x's message). The acknowledgement phase can take place after the exploration concludes, and it will induce congestion that is no larger than the congestion created when sending the messages, so the number of rounds will at most double. Now, every vertex v in the 4β-bounded path from y to x that receives y's acknowledgement knows that the edge to its parent v' is part of the path implementing the hopset edge (x,y). Recall that the edge (v,v') is either an edge of G', which is discovered via a B-round exploration in G – in which case all vertices along the path in G from v to v' can update the relevant information about (x,y) when v does a B-round exploration in G (this is the acknowledgement step) – or otherwise (v,v')∈ H^(ℓ-1)∪ H_i-1. In the latter case, v will broadcast that the edge (v,v') implements (x,y), along with its distances to x,y. By the induction hypothesis, each vertex u' that implements a path P' for the hopset edge (v,v') knows about it and its distances to v,v'; thus when u' hears this broadcast (which is sent to all vertices of V), it knows that it implements P, and can compute its distances to x and y. We conclude that whp every vertex needs to store only the Õ(m^ρ) hopset edges that it implements. Note that the final hopset H^(log n) can omit all the previous hopsets (which were used only for the computation). We summarize this discussion with the following theorem. For any weighted graph G=(V,E) with hop-diameter D, an integer k> 1, parameters 0<ρ<1, 0<ϵ<1/5, and (an implicit) virtual graph G'=(V',E') embedded in G on |V'|=m vertices, there is a distributed algorithm in the CONGEST model that runs in Õ(m^1+ρ+D)·β rounds and computes H, which is a (β,ϵ) path-reporting hopset for G', of size at most O(m^1+1/(2^k-1)), where β=O((k+1/ρ)·log m/ϵ)^k+1/ρ+1 . In the case that E' corresponds to B=Õ(m)-bounded distances in G, the hopset can be computed so that every vertex has internal memory Õ(m^ρ). § PRAM MODEL The algorithm described in sec:CC can be easily adapted to the PRAM model. For each ℓ=1,2,…,log n, we build the hopset H^(ℓ) based on the previous hopset H^(ℓ-1). Each of the O(β)-bounded Bellman-Ford explorations for constructing H_i can be implemented in parallel in O(β) rounds, where the congestion of Õ(n^ρ) per vertex translates into extra work (rather than multiplying the number of rounds, as was the case in the distributed models). Since there are log n values of ℓ, and k'≤ k+1/ρ+1 steps in each level, the number of rounds is only O((k+1/ρ)·log n·β).
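The basic primitive in all of these implementations – Congested Clique, CONGEST and PRAM alike – is the t-bounded Bellman-Ford exploration computing d_G^(t)(src,·). The following is a minimal runnable Python sketch of its synchronous form (our own illustration; each iteration of the outer loop corresponds to one round, and hopset edges are simply passed in as additional weighted edges):

def bounded_bellman_ford(n, edges, src, t):
    # Computes d^{(t)}(src, v) for all v: shortest-path distances over
    # paths with at most t edges, in an undirected weighted graph.
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(t):
        new = list(dist)  # synchronous update: one "round" of exploration
        for u, v, w in edges:
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
            if dist[v] + w < new[u]:
                new[u] = dist[v] + w
        if new == dist:
            break  # distances stabilized before exhausting the hop budget
        dist = new
    return dist

Running it with t=β on G∪ H – viewing each hopset edge as an ordinary weighted edge – is exactly how (1+ϵ)-approximate distances are extracted once the hopset is in place.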
We have the following result. For any weighted graph G=(V,E) on n vertices, an integer k> 1, and parameters 0<ρ<1, 0<ϵ<1/5, there is a parallel algorithm that runs in O((k+1/ρ)·log n·β) rounds with Õ(|E|· n^ρ) work, and computes H of size at most O(n^1+1/(2^k-1)), which is a (β,ϵ)-hopset, where β=O((k+1/ρ)·log n/ϵ)^k+1/ρ+1 . We can also apply the construction recursively: if H(1) is the hopset given by thm:pram-main with β_1=β given in (<ref>), then apply the construction on the graph G∪ H(1), but only for levels ℓ up to ℓ_2=logβ_1, to obtain a hopset H(2). Since for any x,y∈ V we have d_G∪ H(1)^(β_1)(x,y)≤ (1+ϵ)d_G(x,y), adding both H(1) and H(2) guarantees d_G∪ H(1)∪ H(2)^(β_2)(x,y)≤ (1+ϵ)^2d_G(x,y), where β_2=(3c·(k+1/ρ)·logβ_1/ϵ)^k+1/ρ+1, and c is the constant hidden by the O(·) notation in (<ref>). This bound follows because ϵ needs to be rescaled by 3ℓ_2=3logβ_1; the rescaling by logβ_1 is to compensate for the number of levels, and by 3 to reduce the error from (1+ϵ)^2 back to 1+ϵ. Continuing in this manner for the next level with ℓ_3=logβ_2 levels, we obtain in general the recursion β_i+1=(3c·(k+1/ρ)·logβ_i/ϵ)^k+1/ρ+1, and it can be shown by induction that as long as log^(i)n≥ 3clog(k+1/ρ) we have β_i≤(8c·(k+1/ρ)^2·[log(3c(k+1/ρ)/ϵ)+log^(i)n]/ϵ)^k+1/ρ+1 . After at most t=log^*n iterations, we get that β_t=O((k+1/ρ)^2/ϵ)^(1+o(1))·(k+1/ρ). To summarize, this yields a hopset with constant parameter β that is computed in polylog(n) rounds. For any weighted graph G=(V,E) on n vertices, an integer k> 1, and parameters 0<ρ<1, 0<ϵ<1/5, there is a parallel algorithm that runs in O(((k+1/ρ)·log n/ϵ)^k+1/ρ+2) rounds with Õ(|E|· n^ρ) work, and computes H of size at most O(n^1+1/(2^k-1)·log^*n), which is a (β,ϵ)-hopset, where β=O((k+1/ρ)^2/ϵ)^(1+o(1))·(k+1/ρ) . § DISTRIBUTED ROUTING WITH SMALL MEMORY Here we improve the results of <cit.>, and devise a compact routing scheme that can be efficiently implemented in a distributed network. The previous result of <cit.> provides, for any parameter k, a scheme with stretch 4k-5+o(1), labels of size O(klog^2n) and routing tables of size O(n^1/klog^2n). The computation time of this scheme is (n^1/2+1/k+D)·min{(log n)^O(k),2^Õ(√(log n))} rounds (in the CONGEST model). One drawback of this result (and also of <cit.>, which obtained slightly weaker results) is that although the final memory requirement from each vertex is Õ(n^1/k), the preprocessing step requires high memory (at least Ω(√(n))). Indeed, some of the classical works on compact routing schemes <cit.> addressed the issue of each vertex having only a limited memory throughout the construction of the routing scheme (albeit their round complexity was at least linear in n). Here we present a distributed construction that has that desirable property, and in addition we improve both the label and table size by a logarithmic factor, almost matching the best known bounds of <cit.>, which are computed in a sequential manner. We briefly sketch the approach of <cit.>, and the current improvement allowing low memory and improved bounds. First, construct the Thorup-Zwick hierarchy V=A_0⊇ A_1⊇…⊇ A_k=∅, where each vertex in A_i-1 is sampled to A_i independently with probability n^-1/k. Then the cluster C(v)={u∈ V : d_G(u,v)< d_G(u,A_i+1)} of a vertex v∈ A_i∖ A_i+1 can be viewed as a tree rooted at v. Computing this cluster is done by a limited Dijkstra exploration from v, i.e., only vertices in C(v) continue the exploration of v.
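A minimal sequential Python sketch of this limited exploration (our own illustration; the interface is hypothetical): given the values d_G(·,A_i+1), it grows C(v) by a Dijkstra from v that never expands the search from a vertex outside the cluster, and the recorded parent pointers form exactly the tree rooted at v.

import heapq

def limited_cluster(adj, v, d_next_level):
    # C(v) = {u : d_G(u, v) < d_G(u, A_{i+1})}, for v in A_i \ A_{i+1}.
    # adj: {u: [(w, weight), ...]};  d_next_level: {u: d_G(u, A_{i+1})} for all u.
    dist, parent = {v: 0.0}, {v: None}
    pq = [(0.0, v)]
    cluster = set()
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        if d >= d_next_level[u]:
            continue  # u lies outside C(v): the exploration is not continued from u
        cluster.add(u)
        for w, wt in adj.get(u, ()):
            nd = d + wt
            if nd < dist.get(w, float('inf')):
                dist[w], parent[w] = nd, u  # parent pointers form the tree of C(v)
                heapq.heappush(pq, (nd, w))
    return cluster, parent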
Routing from x to y is done by finding an appropriate cluster C(v) containing both x and y, and routing in that tree. Whenever i<k/2, these trees whp have depth Õ(√(n)). Hence they can be easily computed in a distributed manner within Õ(n^1/2+1/k) rounds. The main issue is computing the clusters for i≥ k/2. The method of <cit.> was to work with a virtual graph G', whose vertices are V'=A_k/2, and whose edges correspond to B=c·√(n)log n-bounded distances in G between the vertices of V'. Then a hopset is computed for this virtual graph, which enables the computation of Bellman-Ford explorations in only O(β) rounds. The fact that β-bounded distances can suffer 1+ϵ stretch creates additional complications; one needs to define approximate clusters, and to make sure that these approximate clusters correspond to actual trees in G. Finally, since the trees corresponding to C(v) for the high-level vertices v∈ A_i, i≥ k/2, can have large depth, one needs to adapt the Thorup-Zwick routing scheme for trees <cit.>. In both <cit.> this adaptation induced a logarithmic factor in both the table and the label size. Our improved result has two main ingredients. First, we do not explicitly construct G'; in both <cit.>, computing the weights of the edges of G' was a rather expensive step, which required large memory and introduced a factor depending logarithmically on the aspect ratio into the running time. In addition, only approximate values were obtained. We observe that not all the edges of G' are required for the algorithm, and thus we do not compute G' at all. Rather, we compute only those edges of G' that are really needed for either the hopset or the routing hierarchy. (This idea is reminiscent of <cit.>, where the virtual graph is also never entirely computed.) Instead, we conduct the explorations in G' by implementing in each iteration a B-bounded search in G, which not only saves memory and running time, but also simplifies the analysis, since now there is no error in the edge weights of G'. Second, our new tree-routing scheme has both improved label and routing table size, and can be computed with small memory. (For more details, see sec:tree-route.) Our result is summarized below. Let G=(V,E) be a weighted graph with n vertices and hop-diameter D, and let k>1 be a parameter. Then there exists a routing scheme with stretch at most 4k-5+o(1), labels of size O(klog n) and routing tables of size O(n^1/klog n), that can be computed in a distributed manner within (n^1/2+1/k+D)·(log n)^O(k) rounds, such that every vertex has memory of size Õ(n^1/k). Alternatively, whenever k≥√(log n/loglog n), the number of rounds can be made (n^1/2+1/k+D)· 2^Õ(√(log n)) with memory 2^Õ(√(log n)) at each vertex. In particular, taking k=δlog n/loglog n for a small constant δ yields (n^1/2+1/k+D)· n^O(δ) rounds with polylog(n) memory per vertex. Construction of Routing Scheme. Let G=(V,E) be a weighted graph, and fix k> 1. Sample a collection of sets V=A_0⊇ A_1⊇…⊇ A_k=∅, where for each 0<i<k, each vertex in A_i-1 is chosen independently to be in A_i with probability n^-1/k. A point z∈ A_i is called an i-pivot of v if d_G(v,z)=d_G(v,A_i). The cluster of a vertex u∈ A_i∖ A_i+1 is defined as C(u)={v∈ V :  d_G(u,v)<d_G(v,A_i+1)} . It was shown in <cit.> that, with high probability, each vertex is contained in at most 4n^1/klog n clusters. We recall a few definitions from <cit.>.
For each v∈ V and 0≤ i≤ k-1, a point ẑ∈ A_i is called an approximate i-pivot of v if

d_G(v,ẑ)≤ (1+ϵ)d_G(v,A_i) .

Define

C_ϵ(u)={v∈ V :  d_G(u,v)<d_G(v,A_i+1)/(1+ϵ)} .

The approximate cluster C̃(u) will be any set that satisfies the following:

C_6ϵ(u)⊆C̃(u)⊆ C(u) .

It was shown in <cit.> that once we obtain approximate clusters as trees of G, with ϵ≤ 1/(48k^4), and provide a routing scheme for these trees, this implies a routing scheme for G with stretch 4k-5+o(1). In fact, it suffices that the routing scheme for each tree always routes through the root of the tree, not necessarily via the shortest path in the tree.

Let h(u,v) denote the number of vertices on the shortest path from u to v in G. The following were also shown in <cit.> to hold with high probability. [For the sake of simplicity we will assume k is even. For odd k, we can improve the running time by a factor of n^1/(2k).] For any u,v∈ V with h(u,v)≥ B, there exists a vertex of A_k/2 on the shortest path between them. For any 0≤ i<k-1, v∈ A_i∖ A_i+1 and u∈ C(v), it holds that h(u,v)≤ 4n^(i+1)/kln n.

In particular, for i<k/2 we can find the "exact" cluster C(v) for each v∈ A_i∖ A_i+1 by a simple limited Bellman-Ford exploration from all such vertices v to hop-depth 4n^(i+1)/kln n≤Õ(√(n)). By claim:number-of-clusters, the congestion induced at each u∈ V by the merit of being a part of many clusters is only 4n^1/klog n. So the total number of rounds required is Õ(n^1/2+1/k), and each vertex needs to store at most 4n^1/klog n words (the clusters containing it). Finally, note that these clusters indeed correspond to trees, since every vertex u∈ C(v) can store as a parent the vertex that last updated the distance estimate that u has for v.

From now on we consider the high levels, where i≥ k/2. Define G'=(V',E') as a virtual graph where V'=A_k/2, and E' corresponds to B-bounded distances in G. Observe that claim:hit-path implies that d_G'(v,v')=d_G(v,v') for any v,v'∈ V' (because any shortest path in G has a vertex of V' within any B hops on that path). First, we compute a (β,ϵ)-hopset H for the virtual graph G' as in thm:dist-cong-path, with parameters log k, ϵ and ρ=1/k. If one desires the second assertion of thm:route, pick ρ=√(loglog n/log n). Note that the graph G' is implicit, and every node has internal memory Õ(m^ρ). Since |A_k/2|≤ O(√(n)) whp, the number of rounds required to compute H is at most (n^1/2+1/k+D)·(log n)^O(1/ρ) (recall ρ≥ 1/k and ϵ≥Ω(1/log^4n)).

Approximate Pivots. To compute the approximate pivots, conduct a Bellman-Ford exploration to depth β in G''=G'∪ H, as in lem:BF-small, rooted in A_i+1, to compute for each v∈ V' a value d̂(v,A_i+1). We perform another B-bounded exploration in G, where initially every vertex v∈ V' sends its current estimate, and in every step every vertex forwards the smallest value it has heard so far. We claim that every u∈ V will learn of an approximate (i+1)-pivot ẑ∈ A_i+1. To see this, let z be the (i+1)-pivot of u. If h(u,z)≤ B, then u will hear z's message in the last B-bounded exploration. Otherwise, by claim:hit-path, there exists a vertex v'∈ V' on the shortest path from u to z within B hops from u, and since H is a (β,ϵ)-hopset, we have that the first β rounds of the Bellman-Ford exploration from A_i+1 caused v' to update d̂(v',A_i+1)≤ (1+ϵ)d_G(v',A_i+1). In the final exploration to range B, the vertex v' will communicate this value on the path towards u.
Thus, u will have a value at most

d̂(u,A_i+1)≤ d_G^(B)(u,v')+d̂(v',A_i+1)≤ d_G(u,v')+(1+ϵ)d_G(v',A_i+1)≤(1+ϵ)d_G(u,A_i+1) ,

where the last inequality used that d_G(u,v')+d_G(v',A_i+1)=d_G(u,A_i+1). This follows since v' lies on the shortest path from u to the nearest vertex of A_i+1. We conclude that no matter which ẑ is the approximate pivot of u, the distance estimate that u has for it cannot be larger than (1+ϵ)d_G(u,A_i+1). Computing the approximate pivots requires Õ(m^1+ρ+D)·β=(n^1/2+1/k+D)·(log n)^O(1/ρ) rounds.

Approximate Clusters. Fix some i≥ k/2, and for each v∈ A_i∖ A_i+1 we conduct a limited Bellman-Ford exploration in G''=G'∪ H for β rounds rooted at v, as in lem:BF-small. By "limited", we mean that any vertex u∈ V' receiving a message originated at v will forward it to its neighbors iff the current distance estimate is strictly less than d̂(u,A_i+1)/(1+ϵ)^2. We will refer to this condition as the inclusion condition of the exploration of v. We need to avoid congestion at intermediate vertices during the B-bounded exploration in G described in lem:BF-small, so these vertices will also need to implement some sort of limitation. Concretely, vertices u∈ V∖ V' will forward v's message iff their current estimate is strictly less than d̂(u,A_i+1)/(1+ϵ). The exploration over edges of H is done as before, where claim:number-of-clusters guarantees that every vertex participates in at most 4n^1/klog n clusters (we will soon show that the approximate clusters are indeed contained in the clusters), so this bounds the number of rounds required by Õ(n^1/2+1/k+D)·β. Also, the memory per vertex required for this computation is bounded by Õ(n^1/k) (the number of clusters containing the vertex).

This exploration constructs a virtual tree rooted at v. For every edge (x,y)∈ E' on this tree, we add to the cluster all the vertices in G on the B-bounded path from x to y. This can be done via an acknowledgement message from y back to x on this path, and every vertex updates its parent accordingly. For every hopset edge (x,y) of the tree (which was broadcast to the entire graph during the exploration), every vertex u∈ P_x,y, where P_x,y is the path in G implementing the edge (x,y), joins the tree (u knows about being a part of this edge by the path-reporting property of our hopset), and sets its distance estimate as b_v(x)+d_P(x,u) if this value is smaller than its current estimate. If this is the case, the vertex u also sets its parent as the neighbor on P_x,y which is closer to x.

Finally, we perform another limited Bellman-Ford exploration to depth B in G, where every vertex in the tree of v sends its current distance estimate, and every vertex u∈ V will forward the smallest estimate it heard so far, but only if it is strictly less than d̂(u,A_i+1)/(1+ϵ). In that case it will also join the approximate cluster of v, and will update its parent as its neighbor in G whose message caused u to update its distance estimate to v for the last time. Observe that the same vertex may join a tree more than once, due to several edges in E'∪ H whose paths contain it. In such a case the vertex will have as a parent the vertex which minimizes the estimated distance to the root. Since every vertex has a single parent, we will have that the approximate cluster of v, C̃(v), is indeed a tree. It remains to prove (<ref>). Let b_v(u) be the distance estimate that u has to v in the exploration rooted at v. For any v∈ V', C̃(v)⊆ C(v). Consider any u∈C̃(v).
If it is the case that u∈ V joined the approximate cluster by the exploration rooted at v, either by being in V' or on a B-bounded path in G that implements an edge of E', then it must satisfy b_v(u)<d̂(u,A_i+1)/(1+ϵ). Now,

d_G(u,v)≤ b_v(u)<d̂(u,A_i+1)/(1+ϵ) (<ref>)≤ d_G(u,A_i+1) ,

so indeed u∈ C(v). The other case is that u∈ P_x,y for a path P_x,y implementing a hopset edge (x,y) that was added to the virtual tree. Since y joins the approximate cluster, it must satisfy b_v(y)<d̂(y,A_i+1)/(1+ϵ)^2. Recall that the weight of the hopset edge w_H(x,y) is the weight of the path P=P_x,y from x to y in G that u lies on. Hence d_P(x,u)+d_P(u,y)=w_H(x,y). It follows that

d_G(u,A_i+1) (<ref>)≥ d̂(u,A_i+1)/(1+ϵ)≥ d_G(u,A_i+1)/(1+ϵ)≥ (d_G(y,A_i+1)-d_G(u,y))/(1+ϵ) (<ref>)≥ d̂(y,A_i+1)/(1+ϵ)^2-d_P(u,y)/(1+ϵ)> b_v(y)-d_P(u,y)=b_v(x)+w_H(x,y)-d_P(u,y)=b_v(x)+d_P(x,u)≥ b_v(u)≥ d_G(u,v) ,

where in the penultimate inequality we used the fact that the vertex u knows d_P(x,u), and thus it could have updated its distance estimate to v as b_v(x)+d_P(x,u) (note that it may have used a smaller estimate). Thus u∈ C(v) in this case, as required.

The next claim proves the second inequality of (<ref>). For any v∈ V', C_6ϵ(v)⊆C̃(v). Let u∈ C_6ϵ(v). We would like to show that u∈C̃(v). Consider the shortest path P from u to v in G. Then by claim:hit-path, there is a vertex u'∈ V' on P that is within B hops from u. Notice that

d_G(v,u')=d_G(v,u)-d_G(u,u')≤ (d_G(u,A_i+1)-d_G(u,u'))/(1+6ϵ)≤ d_G(u',A_i+1)/(1+6ϵ) .

Hence u'∈ C_6ϵ(v) too. We will show that the limited exploration originated at v will reach u', and that in the final depth-B exploration it will reach u and include it in C̃(v). Since H is a (β,ϵ)-hopset, there is a path P' in G'' from v to u' that contains at most β edges and satisfies

d_P'(v,u')≤ (1+ϵ)d_G'(v,u')=(1+ϵ)d_G(v,u') .

Let z∈ P' be any vertex on P' that lies t hops from v, 0≤ t≤β. Then after t steps of the Bellman-Ford exploration from v we have that

b_v(z)≤ d_P'(v,z)=d_P'(v,u')-d_P'(z,u') (<ref>)≤ (1+ϵ)d_G(v,u')-d_G(z,u') (<ref>)≤ (1+ϵ)d_G(u',A_i+1)/(1+6ϵ)-d_G(z,u')≤ (d_G(u',A_i+1)-d_G(z,u'))/(1+4ϵ)<d_G(z,A_i+1)/(1+ϵ)^2≤d̂(z,A_i+1)/(1+ϵ)^2 .

(We used that ϵ<1/5, and wrote b_v(z) for the estimate that z holds for v after these t steps.) We conclude that z satisfies the inclusion condition for the exploration rooted at v, and forwards the message of v onwards. In particular, by (<ref>), b_v(u')≤ d_P'(v,u')≤(1+ϵ)d_G(v,u'). In the final phase we make a Bellman-Ford exploration for B rounds in G from each vertex that received the message of v. Thus, u' will start such an exploration with distance estimate b_v(u'). Consider the subpath Q⊆ P from u' to u. We have to show that every vertex on this path forwards the message of v, that is, that it satisfies the inclusion condition of the exploration of v. Let y∈ Q be such a vertex. Since this is a shortest path in G, we have

b_v(y)≤ b_v(u')+d_Q(u',y)≤(1+ϵ)d_G(v,u')+d_G(u',y)≤ (1+ϵ)d_G(v,y)=(1+ϵ)(d_G(v,u)-d_G(y,u)) (<ref>)≤ (1+ϵ)d_G(u,A_i+1)/(1+6ϵ)-d_G(y,u)≤ (d_G(u,A_i+1)-d_G(y,u))/(1+4ϵ)≤ d_G(y,A_i+1)/(1+4ϵ)<d̂(y,A_i+1)/(1+ϵ) ,

as required.

§.§ Distributed Tree Routing with Small Memory

In this section we present our compact routing scheme for trees that can be computed in a distributed manner using small internal memory. In previous constructions of distributed routing schemes for trees <cit.>, the internal memory was as high as √(n), and the schemes were also somewhat inefficient: the label size was O(log^2n) and the routing tables were of size O(log n). Compare this to the classical <cit.> tree routing, which has label size O(log n) and routing tables of size O(1).
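To fix ideas, the following minimal Python sketch (ours; sequential rather than distributed, with illustrative names and data layout) implements the classical heavy-child machinery that is recalled in prose below: subtree sizes, labels listing the non-heavy edges on the root-to-target path, and the root-tree routing rule that follows heavy edges unless the target's label dictates otherwise.

def subtree_sizes(children, root):
    # children: dict mapping each node to the list of its children in the tree
    order = [root]
    for u in order:                              # top-down ordering (BFS-like)
        order.extend(children.get(u, []))
    size = {}
    for u in reversed(order):                    # accumulate sizes bottom-up
        size[u] = 1 + sum(size[c] for c in children.get(u, []))
    return size

def heavy_and_labels(children, root):
    # heavy[u] = child of u with the largest subtree; lab[y] = list of the
    # non-heavy edges on the root-to-y path (at most log n of them)
    size = subtree_sizes(children, root)
    heavy, lab = {}, {root: []}
    stack = [root]
    while stack:
        u = stack.pop()
        kids = children.get(u, [])
        if not kids:
            continue
        heavy[u] = max(kids, key=lambda c: size[c])
        for c in kids:
            lab[c] = lab[u] if c == heavy[u] else lab[u] + [(u, c)]
            stack.append(c)
    return heavy, lab

def route_from_root(root, heavy, target, target_label):
    # root-tree routing: follow heavy edges unless the label says otherwise
    nonheavy = dict(target_label)                # parent -> prescribed non-heavy child
    u, path = root, [root]
    while u != target:
        u = nonheavy.get(u, heavy.get(u))
        if u is None:                            # target does not lie below this root
            break
        path.append(u)
    return path

The label of a target consists of O(log n) edges, and the per-vertex routing information is only the heavy child, matching the O(log n) label and O(1) table sizes quoted above.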
We follow the basic framework of previous works, by selecting a set U⊆ V such that each vertex is sampled to U independently with probability q (q is a parameter, which we shall optimize later). Fix a tree T on vertices V(T)⊆ V with root z. The vertices U(T)=(U∩ V(T))∪{z} partition the tree into subtrees, by removing the edges from each vertex in U(T) to its parent. Each of the |U(T)| subtrees is rooted in a vertex of U(T). Denote by T_w the subtree rooted at w. We also consider T', the virtual tree on the vertices of U(T), which is rooted at z, and contains an edge (x,y) if the parent of y lies in T_x. It is not hard to see (e.g., <cit.>) that whp the depth of each T_w is Õ(1/q), and that |U|≤ O(qn).

In both <cit.>, routing schemes were created for each T_w, and also for the virtual tree T'. This computation required large internal memory, since z had to locally compute the scheme for T'. The inefficiency in the size was due to the fact that when routing in T', traveling over a virtual edge (x,y), one has to route in T_x from x to the parent of y. This seems to require storing additional routing information for this subtree, increasing both label and table size by a logarithmic factor. We overcome this issue by storing routing information only with respect to the actual tree, while applying pointer jumping techniques to quickly compute the full labels. However, we do not know how to construct exact tree routing with small memory. Fortunately, to implement our routing scheme for general graphs, it suffices to provide a root-tree routing scheme, where the routing is always done via the root of the tree T, and not necessarily via the shortest path. (We stress that using larger memory, we can compute exact tree routing tables and labels within Õ(√(n)+D) rounds, with label size O(log n) and routing tables of size O(1), substantially improving previous results.)

Before describing our approach, let us briefly recall the Thorup-Zwick construction of tree routing. The idea is to assign to every (non-leaf) vertex x∈ T its heavy child, which is the child whose subtree has maximal size. Note that the subtree of any non-heavy child of x contains at most half of the vertices of the subtree T_x of T rooted at x. For this reason, any path from the root z to some y∈ T contains at most log n non-heavy edges. For an exact routing scheme they also conduct a DFS search in T that assigns to each y the DFS entry and exit times for its subtree. The label of y consists of these entry and exit times, together with the names of the non-heavy edges on the z to y path. The routing table of y consists of the DFS times, the name of the heavy child, and the name of the parent of y in the tree. The routing towards a target v in the tree is done as follows. At any intermediate vertex y∈ T, if v is not in the subtree rooted at y (this can be checked via the DFS times), then y forwards to its parent. If v is in the subtree, y inspects v's label to see if an edge (y,x) appears there. If this is the case, it forwards to x, otherwise to its heavy child. Note that if one desires root-tree routing then there is no need to implement a DFS: initially route to the parent until the root is reached, and then follow the path using heavy edges unless the label indicates otherwise.

Now we show how to implement our scheme in a distributed manner, and with O(log n) internal memory. First, every w∈ U(T) sends a message about itself to the vertices of T_w, informing them they are in T_w.
Note that this message will arrive at all vertices of U(T) that are children of w in the virtual tree T', so they will know their parent. Next, for each w∈ U(T), every vertex in T_w sends to its parent the size of the subtree rooted at it, beginning with the leaves. Every vertex that received messages from all its children sums up the values and sends to its own parent. This can be done in parallel for all trees T_w for w∈ U(T), and will take Õ(1/q) (the bound on the height of each T_w) rounds.

For a vertex v in a tree T, rooted at a vertex z, and a positive integer h, we say that a vertex u is an h-ancestor of v if u lies on the unique v-z path in T at distance h from v.

We would like every y∈ T to know the entire size of the subtree of T rooted at y. Initially, we compute this value only for the virtual vertices of U(T). For a vertex x∈ U(T), its subtree size is exactly the sum of the sizes of the subtrees T_w for w that are in the subtree of T' rooted at x. Note that computing these values from the leaves of T' up will not be efficient, since every message on a virtual edge may require O(D) rounds, and the depth of T' may be as large as qn (which will be approximately √(n)). Thus, this results in O(D√(n)) rounds. To alleviate this issue, we use the following "pointer jumping" technique. Initially, set for x∈ U(T) the current size s_x=|T_x|, and its first ancestor a_1(x) as its parent in T' (and for the root z, set a_1(z)=⊥, where ⊥ denotes a null value). For i=0,1,…,log n rounds, every vertex x∈ U(T) will broadcast in the ith round (using the BFS tree of G) the current size s_x and the name of its 2^i-ancestor a_i(x) in T'. Then whenever x hears a message that some w∈ U(T) broadcasts with x=a_i(w), x adds s_w to its current size s_x. In addition, the vertex x hears the message of a_i(x), and it updates a_i+1(x) as a_i(a_i(x)). (It could be the case that a_i(a_i(x))=⊥. In this case, indeed, a_i+1(x)=⊥.) We claim that this process correctly computes for any x∈ U(T) the size of the subtree of T rooted at x. It can be shown by induction on i that before the ith round, s_x is the size of the subtree rooted at x that contains at most 2^i vertices of U(T) on any root-leaf path. There are O(|U(T)|)≤Õ(qn) messages sent on each round, for log n rounds. Hence, it will take Õ(qn+D) rounds to implement this step.

In order to compute s_y, the size of the subtree of T rooted at y, for all y∈ T, every x∈ U(T) informs its parent in T of the value s_x. Then once again, for every w∈ U(T) in parallel, the leaves of T_w start to send to their parent their current size. This time, some of these leaves and internal vertices could be parents of vertices in U(T), so these sizes are the actual subtree sizes in T. In Õ(1/q) rounds, every vertex y∈ T will know s_y. After sending these values to the parents, every vertex can infer which of its children is the heavy one.

The label L(y) needed for root-tree routing is just the collection of edges {(u,v)} that are on the z-y path in T such that v is not the heavy child of u. Clearly, there can be at most log n such edges on this path, because the size of the subtree decreases by a factor of at least 2 for every non-heavy edge. If y∈ T_x, we start by computing a partial label that contains the non-heavy edges on the path from x to y. This can be done by initializing L(x)=∅, and starting at x, any vertex u∈ T_x that received a label L(u) sends L(u) to its heavy child, and L(u)∪{(u,v)} to any non-heavy child v.
These labels are also sent to the children of x in T' (recall that these are the vertices of U(T) whose T-parents belong to T_x). Once this computation is completed, every vertex w∈ U(T) knows the non-heavy edges on the path from x, its parent in T', to w. We again apply pointer jumping to compute the full labels. For i=0,1,…,log n, every vertex of U(T) will broadcast in the ith round its current label. In each round, when x hears the message from its 2^j-ancestor a_j(x) (recall that x previously computed its 2^j-ancestors, for all j=0,1,…,log n, and stored them in its internal memory), it will update L(x)← L(a_j(x))∪ L(x). Once again, it can be proved by induction on i that before the ith round, every x∈ U(T) knows all the non-heavy edges on the path in T from a_i(x) to x (or from the root z to x if a_i(x)=⊥). Since every label has size O(log n), this will require Õ(qn+D) rounds. Finally, in another Õ(1/q) rounds, each x∈ U(T) sends its updated label L(x) to every vertex y∈ T_x, and they update their label by appending L(x).

If one desires a routing scheme for a single tree, just take q=1/√(n), so the running time will be Õ(√(n)+D). If we desire to compute a routing scheme in parallel for multiple trees, but have the guarantee that every v∈ V belongs to at most s trees, then we can use the argument as in <cit.> to obtain running time Õ(√(s· n)+D) (rather than the naive Õ(s·√(n)+D)). We conclude by formally summarizing our result.

For any tree T on n vertices, lying in a network with hop-diameter D, there exists a distributed algorithm in the CONGEST model running in Õ(√(n)+D) rounds, that computes a root-tree routing scheme with label size O(log n) and routing tables of size O(1), such that every vertex uses only O(log n) words of memory throughout the computation. Moreover, if there are no restrictions on the memory used throughout the computation, then exact tree routing tables of size O(1) and labels of size O(log n) can be computed in Õ(√(n)+D) time. In addition, given a network with n vertices and a set of trees so that each vertex is contained in at most s trees, one can compute a root-tree routing scheme as above for all trees in parallel, within Õ(√(s· n)+D) rounds, while using memory O(s·log n) at each vertex.

§ ACKNOWLEDGEMENTS

We wish to thank Christoph Lenzen for bringing to our attention the problem of distributed routing with small individual memory requirements, and for permitting us to use a quotation from <cit.>.
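As a standalone illustration of the pointer-jumping step used in the construction above, the following Python sketch (ours; a sequential simulation of the broadcast rounds, not the CONGEST implementation) aggregates the subtree sizes over the virtual tree T' in O(log n) rounds.

import math

def aggregate_sizes(parent, init_size):
    # parent: maps each x in U(T) to its parent in T' (the root maps to None);
    # init_size: maps each x to |T_x|, the size of its own subtree piece
    s = dict(init_size)
    anc = dict(parent)                    # anc[x] = current 2^i-ancestor of x in T'
    rounds = max(1, math.ceil(math.log2(len(parent) + 1)))
    for _ in range(rounds):
        # every x "broadcasts" (s_x, anc[x]); collect the effect of one round
        add = {}
        for w, a in anc.items():
            if a is not None:
                add[a] = add.get(a, 0) + s[w]
        nxt = {x: (anc[anc[x]] if anc[x] is not None else None) for x in anc}
        for x, inc in add.items():        # apply all updates simultaneously
            s[x] += inc
        anc = nxt
    return s                              # s[x] = size of the subtree of T rooted at x

For example, on a virtual path z-a-b-c with unit pieces, the first round gives each vertex its child's piece and the second round delivers the grandchild's aggregated pair, so after ceil(log n) rounds every vertex holds its full subtree size, mirroring the inductive invariant stated above.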
http://arxiv.org/abs/1704.08468v1
{ "authors": [ "Michael Elkin", "Ofer Neiman" ], "categories": [ "cs.DS" ], "primary_category": "cs.DS", "published": "20170427080822", "title": "Linear-Size Hopsets with Small Hopbound, and Distributed Routing with Low Memory" }
Thermodynamics and Phase transition from regular Bardeen black hole

Mahamat Saleh, Bouetou Bouetou Thomas, and Kofane Timoleon Crepin

Mahamat Saleh: Department of Physics, Higher Teachers' Training College, University of Maroua, P.O. Box 55, Cameroon.
Bouetou Bouetou Thomas: National Advanced School of Engineering, University of Yaounde I, P.O. Box 8390, Cameroon.
Kofane Timoleon Crepin: Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Cameroon, and The Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Strasse 38, 01187 Dresden, Germany.
e-mail: [email protected], [email protected], [email protected]

Received: date / Accepted: date

In this paper, thermodynamics and phase transition are investigated for the regular Bardeen black hole. Considering the metric of the Bardeen spacetime, we derive the Unruh-Verlinde temperature. Using the first law of thermodynamics, we derive the expression of the specific heat and plot its behavior. It results that the magnetic monopole charge β reduces the temperature and induces a thermodynamic phase transition in the spacetime. Moreover, when increasing β, the transition point moves to higher entropy.

§ INTRODUCTION

Nowadays, research in physics focuses much attention on the evolution of the Universe, and particularly on black holes and the strange phenomena associated with them. Over the last four decades, much research has been done on black hole physics, concerning quasinormal modes<cit.>, energy distribution<cit.>, and radiation and thermodynamics<cit.>. Hawking's discovery of thermal radiation from black holes was a complete surprise to most specialists. This discovery definitively added thermodynamical aspects to black hole investigations. As thermodynamical objects, black holes can be described by thermodynamical quantities such as temperature, entropy, enthalpy, free energy and heat capacity. One of the most interesting phenomena in the study of thermodynamical systems is their phase transition. Thermal fluctuations can induce phase transitions in which the zero-temperature degrees of freedom are reorganized into a qualitatively different form. There are many familiar examples, such as the boiling of water and ferromagnetic transitions (magnetization); phase transitions also occur in more exotic systems such as spacetime geometry<cit.>. The phase transition point of a system is generally represented by a discontinuity in the variation of some thermodynamical quantity of the system. For the ice-water transition, for example, there is a discontinuity in the variation of the volume with the temperature at zero temperature; it is well known that at that particular temperature the ice-water transition takes place. Although the phase transition of several systems, such as water, can be investigated precisely in the laboratory, no laboratory is yet equipped with the tools necessary to probe the formation of a black hole in a phase transition of thermal spacetime, although such situations may have existed in the very early universe<cit.>. The black hole phase transition can nevertheless be studied theoretically in the light of the expression of its heat capacity.

In 1968, the first example of a black hole with a regular, non-singular geometry, possessing an event horizon and satisfying the weak energy condition, was constructed by Bardeen.
The solution was obtained by introducing an energy-momentum tensor interpreted as the gravitational field of some sort of nonlinear magnetic monopole charge β. Recently, Huang et al.<cit.> considered the spherically symmetric regular black hole solution obtained by Bardeen to investigate the absorption cross section and Hawking radiation. They showed that the magnetic monopole charge β increases the absorption cross section and the power emission spectra of Hawking radiation but decreases the absorption probability and the luminosity. Meitei et al.<cit.> investigated the phase transition in the Reissner-Nordström black hole. Bouetou et al.<cit.> investigated thermodynamics and phase transition of the Reissner-Nordström black hole surrounded by quintessence. In this paper, thermodynamics and phase transition are investigated for the regular Bardeen black hole.

The paper is organized as follows. In section <ref>, we derive the Unruh-Verlinde temperature and the Hawking temperature for the regular Bardeen black hole. In section <ref>, we derive the specific heat and investigate the thermodynamic phase transition in the black hole. The last section is devoted to a conclusion.

§ UNRUH-VERLINDE AND HAWKING TEMPERATURES FOR THE REGULAR BARDEEN BLACK HOLE

The spherically symmetric Bardeen regular black hole metric is given by

ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2sin^2θ dφ^2,

where the lapse function f(r)=1-2M(r)/r depends on the specific form of the underlying matter. With the following particular value of the mass function,

M(r)=mr^3/(r^2+β^2)^3/2,

where β is the monopole charge of a self-gravitating magnetic field described by a nonlinear electrodynamics source, and m is the mass of the black hole, the metric (<ref>) reduces to the Bardeen regular black hole metric<cit.>. The temperature of the black hole at a given radius r is given by the Unruh-Verlinde matching<cit.>. The Unruh-Verlinde temperature is given by<cit.>

T=ħ/(2π) e^ϕ n^α∇_αϕ,

where n^α is a unit vector normal to the holographic screen, e^ϕ is the red-shift factor and ϕ the generalized form of the Newtonian potential given by

ϕ=1/2 log(-g^μνξ_μξ_ν),

with g^μν the background metric and ξ_μ the time-like Killing vector. The Killing vector for our spherically symmetric spacetime is

ξ_μ=(-f(r), 0, 0, 0).

Substituting Eq. (<ref>) into Eq. (<ref>), we obtain the following expression for the potential,

ϕ=1/2 log(f(r)),

and the Unruh-Verlinde temperature is then given by

T=ħ/(4π) |f'(r)|.

Substituting Eqs. (<ref>) and (<ref>) into (<ref>), the Unruh-Verlinde temperature for the regular Bardeen black hole is

T=mr(r^2-2β^2)/[2π(r^2+β^2)^5/2].

Setting β=0, the temperature becomes T=m/(2π r^2), which is the Unruh-Verlinde temperature for the Schwarzschild black hole. The event horizon of the black hole is given by

f(r_h)=1-2mr_h^2/(r_h^2+β^2)^3/2=0.

The entropy of the black hole is given by the area law,

S=A/4=π r_h^2.

The mass of the black hole can then be expressed as a function of the entropy as follows,

m=(1/2)√(S/π)(1+πβ^2/S)^3/2.

Applying the first law of thermodynamics, the Hawking temperature (the temperature at the horizon) is

T_H=[1/(4√(π S))](1+πβ^2/S)^1/2(1-2πβ^2/S).

This expression of the temperature can be used to derive other thermodynamic quantities of the black hole. The behavior of the temperature is shown in Fig. <ref>. Through this figure, we can see that the temperature decreases when increasing β.

§ SPECIFIC HEAT AND PHASE TRANSITION

The internal energy of the black hole of mass m can be expressed by the Einstein formula E=mc^2.
Considering this expression and taking into account the first law of thermodynamics, the temperature of the black hole can be derived as

T=∂ E/∂ S,

with S the entropy of the black hole. The specific heat of the black hole is then given by

C=T ∂ S/∂ T.

Using the above equations, the specific heat can be expressed as

C=-2S(S+πβ^2)(S-2πβ^2)/(S^2-4πβ^2S-8π^2β^4).

For β=0, this expression reduces to C=-2S, which corresponds to the expression for the Schwarzschild black hole. The specific heat is negative (see Fig. <ref>), showing that the black hole is thermodynamically unstable. For β≠0, we plot the behavior of the specific heat of the black hole when increasing S for different values of β. This is shown in Figs. <ref> and <ref>. Through these figures, we can remark that a positive value of β moves the specific heat to positive values at lower entropies, showing that the black hole is in a stable phase. Moreover, when increasing the entropy S, a discontinuity point occurs where the specific heat changes from positive to negative values, showing that the black hole passes from a stable to an unstable phase. When increasing β, the transition point moves to higher entropy.

§ CONCLUSION

The thermodynamic behavior of the regular Bardeen black hole is investigated. From the metric of the black hole, the Unruh-Verlinde temperature is expressed. The Hawking temperature of the black hole and the specific heat are also derived using the laws of black hole thermodynamics. The behavior of the Hawking temperature plotted in Fig. <ref> shows that the monopole charge β decreases the temperature of the black hole. Through the behavior of the specific heat of the black hole plotted in Figs. <ref> and <ref>, we can see that the monopole charge β thermodynamically stabilizes the black hole, and that when increasing the entropy of the black hole a transition point occurs where the black hole moves from the stable thermodynamic phase to the unstable one. Moreover, the transition point moves to higher entropies when increasing β.

R9 Cardoso, V., Lemos, J.P.S.: Phys. Rev. D 63, 124015 (2001) R10 Konoplya, R.A.: Phys. Rev. D 66, 084007 (2002) R11 Starinets, A.O.: Phys. Rev. D 66, 124013 (2002) R12 Setare, M.R.: Class. Quant. Grav. 21, 1453 (2003) R12a Setare, M.R.: Phys. Rev. D 69, 044016 (2004) R13 Natario, J., Schiappa, R.: hep-th/0411267 R14 Leaver, E.W.: Proc. R. Soc. Lond. A 402, 285 (1985) R14a Leaver, E.W.: Phys. Rev. D 34, 384 (1986) R15 Cho, H.T.: Phys. Rev. D 68, 024003 (2003) R16 Zhidenko, A.: Class. Quant. Grav. 21, 273 (2004) R17 Moss, I.G., Norman, J.P.: Class. Quantum Grav. 19, 2323 (2002) MelMos Mellor, F., Moss, I.G.: Phys. Rev. D 41, 403 (1990) R18 Jing, J.L.: Phys. Rev. D 69, 084009 (2004) R19 Castello-Branco, K.H.C., Konoplya, R.A., Zhidenko, A.: Phys. Rev. D 71, 047502 (2005) R20 Jing, J.L.: Phys. Rev. D 70, 065004 (2004) R20a Jing, J.L.: Phys. Rev. D 71, 124011 (2005) R20b Jing, J.L.: Phys. Rev. D 71, 124006 (2005) mah Mahamat, S., Bouetou, B.T., Kofane, T.C.: Chin. Phys. Lett. 26, 109802 (2009) mah2 Mahamat, S., Bouetou, B.T., Kofane, T.C.: Astrophys. Space Sci. 333, 449 (2011) cs F. I. Cooperstock and R. S. Sarracino, J. Phys. A: Math. Gen., 11 (1978) 877. R1 A. Einstein, Preuss. Akad. Wiss. Berlin 47 (1915) 778. R2 A. Papapetrou, Proceedings of the Royal Irish Academy A 52 (1948) 11. R3 R. C. Tolman, Phys. Rev. 35 (1930) 875 R4 L. D. Landau and E. M. Lifschitz, The Classical Theory of Fields (Addison-Wesley Press, Reading, Massachusetts., New York, 1951). R5 P. G. Bergmann and R.
Thomson, Phys. Rev. 89 (1953) 400. R6 S. Weinberg, Gravitation and Cosmology: Principles and Applications of General Theory of Relativity (John Wiley and Sons, Inc., New York, 1972). R7 J. N. Goldberg, Phys. Rev. 111 (1958) 315. xulu2 S. S. Xulu,Int. J. Theor. Phys. 46 (2007) 2915. rad I. Radinschi and T. Grammenos,Int. J. Theor. Phys. 47 (2008) 1363. vag E. C. Vagenas, Mod. Phys. Lett. A 21 (2006) 1947. vag2 E. C. Vagenas, Mod. Phys. Lett. A 19 (2004) 213. lali S. S. Xulu,arXiv:gr-qc/0304081v1. vag3 E. C. Vagenas, Int. J. Mod. Phys. A 18 (2003) 5949. virb1 K. S. Virbhadra, Phys. Rev. D 41 (1990) 1086. virb2 K. S. Virbhadra, Phys. Rev. D 42 (1990) 2919. virb3 K. S. Virbhadra, Phys. Rev. D 42 (1990) 1066. aguir J. M. Aguirregabiria, A. Chamorro and K. S. Virbhadra, Gen. Rel. Grav. 28 (1996) 1393. xul S. S. Xulu, Int. J. Mod. Phys. D 7 (1998) 773[arXiv:hep-th/9804140v1]. rad1 I. Radinschi, Acta Physica Slovaca 49 (1999) 789. rad2 I. Radinschi, FIZIKA B 9 (2000) 43. rad3 I. Radinschi, Mod. Phys. Lett. 15 (2000) 803 [arXiv:gr-qc/0008029v1].xulubt S. Xulu, Int. J. Theor. Phys. 46 (2007) 2915 [arXiv:gr-qc/0702066v1]. msbk S. Mahamat, T.B. Bouetou and T.C. Kofane, Commun. Theor. Phys. 55 (2011) 291.haw S. W. Hawking, Commun. Math. Phys. 43 (1975) 199. haw2 S. W. Hawking, Nature 30 (1974) 248. bek2 J. D. Bekenstein, Phys. Rev. D 7 (1973) 2333. wald R. M. Wald, Living Rev. Rel. 4 (2001) 6 [arXiv:gr-qc/9912119v2]. carlip1 S. Carlip, Phys. Rev. D 51 (1995) 632. carlip2 S. Carlip, Phys. Rev. D 55 (1997) 878. stromva Strominger and Vafa, Phys. Lett. B 379 (1996) 99. prl80 A. Ashtekar, J. Baez, A. Corichi and K. Krasnov, Phys. Rev. Lett. 80 (1998) 904. pete M. Pete, arXiv:physics.pop-ph/0906.4849v1. fabnag A. Farmanya, S. Abbasi and A. Naghipour, Acta Phys. Pol. A 114 (2008) 651. zhao2hu R. Zhao, H.-X. Zhao and S.-Q. Hu, Mod. Phys. Lett. A 22 (2007) 1737 [arXiv:gr-qc/0609080]. gibhaw G. W. Gibbons and S. W. Hawking, Phys. Rev. D 15 (1977) 2738. pw2 M. K. Parikh and F. Wilczek, Phys. Rev. D 58 (1998) 064011. bacaha J. M. Bardeen, B. Carter, and S. W. Hawking, Comm. Math. Phys.31 (1973) 161. pht1 P. Hut, Mon. Not. R. Astr. Soc. 180 (1977) 379. pht2 P. C. W. Davies, Class. Quantum Grav. 6(1989) 1909. pht3 G. J. Stephens and B. L. Hu, Int. J. Theor. Phys. 40 (2001) 2183.phtphd J. D. Marsano, Phase Transitions in Yang-Mills Theories and their Gravity Duals, PhD Thesis, Harvard University, Cambridge, Massachusetts (2006) pht4 B. P. Nayak, Prayas 3 (2008) 1. pht5 R. Banerjee, S. K. Modak and S. Samanta, Eur. Phys. J. C 70 (2002) 317 [arXiv:1002.0466v3]. pht6 A. Flachi and T. Tanaka, Phys. Rev. D 84 (2011) 061503(R). huang H. Huang, M. Jiang, J. Chen and Y. Wang, Gen. Relativ. Gravit. 47 (2015) 8. meitei I. A. Meitei, K. Y. Singh, T. I. Singh and N. Ibohal, Astrophys. Space Sci. 327 (2010) 67. bouetou T.B. Bouetou, S. Mahamat and T.C. Kofane, Gen. Relativ. Gravit. 44 (2012) 2181. bard J. Bardeen, in Proceedings of GR5, Tbilisi, URSS, (1968). sharif M. Sharif and W. Javed,J. Korean Phys. Soc. 57 (2010) 217. sharif1 M. Sharif and W. Javed, Can. J. Phys. 89 (2011) 1027. Verlinde Verlinde E 2011 JHEP 04 029. Unruh Unruh W G 1976 Phys. Rev. D 14 870. Konoplya Konoplya R A 2010 Eur. Phys. J. C69 555. Liuwangwei Liu Y-X, Wang Y-Q and Wei S-W 2010 Class. Quantum Grav. 27 185002. hanlan Han Y andLan M (2011) Int. J. Theor. Phys. 50 899.
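As a quick numerical cross-check of the closed-form expressions for T_H(S) and C(S) derived above, here is a short Python sketch (ours, appended after the paper's text; geometrized units ħ=c=G=k_B=1). It evaluates both quantities and locates the divergence of the specific heat at the positive root of the denominator, S_c=2πβ^2(1+√3), confirming that C>0 just below S_c, C<0 just above it, and that S_c grows with β:

import numpy as np

def T_H(S, beta):
    # Hawking temperature T_H(S) from the expression derived above
    x = np.pi*beta**2/S
    return np.sqrt(1.0 + x)*(1.0 - 2.0*x)/(4.0*np.sqrt(np.pi*S))

def C(S, beta):
    # specific heat C(S); the denominator vanishes at the transition point
    b = np.pi*beta**2
    return -2.0*S*(S + b)*(S - 2.0*b)/(S**2 - 4.0*b*S - 8.0*b**2)

for beta in (0.5, 1.0, 1.5):
    S_c = 2.0*np.pi*beta**2*(1.0 + np.sqrt(3.0))   # positive root of the denominator
    print(f"beta={beta}: S_c={S_c:.2f}, "
          f"C just below S_c > 0: {C(0.99*S_c, beta) > 0}, "
          f"C just above S_c < 0: {C(1.01*S_c, beta) < 0}, "
          f"T_H(1.01 S_c)={T_H(1.01*S_c, beta):.4f}")

Note that S_c is proportional to β^2, which reproduces the statement above that the transition point moves to higher entropy as β increases, and that for β=0 the denominator root disappears and C=-2S is negative everywhere, as in the Schwarzschild case.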
http://arxiv.org/abs/1704.08302v1
{ "authors": [ "Mahamat Saleh", "Bouetou Bouetou Thomas", "Kofané Timoléon Crépin" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170426190827", "title": "Thermodynamics and Phase transition from regular Bardeen black hole" }
http://arxiv.org/abs/1704.08223v3
{ "authors": [ "Alley Hameedi", "Armin Tavakoli", "Breno Marques", "Mohamed Bourennane" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170426172211", "title": "Communication games reveal preparation contextuality" }
Six-quark structure of d^*(2380) in chiral constituent quark model

Zong-Ye Zhang (e-mail: [email protected])

Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 101408, China
Theoretical Physics Center for Science Facilities (TPCSF), CAS, Beijing 100049, China
College of Physics and Technology, Guangxi Normal University, Guilin 541004, China

The structure of d^*(2380) is re-studied with the single cluster structure in the chiral SU(3) quark model, which has successfully been employed to explain the N-N scattering data and the binding energy of the deuteron. The binding behavior of such a six-quark system is solved by using a variational method. The trial wave function is chosen to be a combination of a basic spherically symmetric component of [(0s)^6]_orb in the orbital space with 0ħω excitation and an inner structural deformation component of [(0s)^5(1s)]_orb and [(0s)^4(0p)^2]_orb in the orbital space with 2ħω excitation, both of which are in the spatial [6] symmetry. It is shown that the mass of the system is about 2356 MeV, which is qualitatively consistent both with the result from the two-cluster configuration calculation and with the data measured by the WASA-at-COSY Collaboration. This result tells us that as long as the medium-range interaction due to the chiral symmetry consideration is properly introduced, the mass of the system is reduced to a rather large extent. It also implies that the observed d^* is a six-quark bound state with respect to the ΔΔ threshold, which again supports the conclusion that d^* is a hexaquark dominant state.

PACS: 14.20.Pt, 13.75.Cs, 12.39.Jh, 13.30.Eg

§ INTRODUCTION

During the past years, a resonant structure d^*(2380) has been reported in the double-pion fusion reactions pn → d π^0 π^0 and pn → d π^+ π^- by the WASA-at-COSY Collaboration <cit.>. Later on, this resonance has also been observed in pn → pn π^0 π^0, pn → pp π^- π^0, pd → ^3Heπ^0 π^0, pd → ^3Heπ^+ π^-, dd → ^4Heπ^0 π^0, and dd → ^4Heπ^+ π^- reactions<cit.>, and further confirmed by incorporating the newly measured analyzing power data into the partial wave analysis<cit.>. The data show that d^*(2380) has a mass of 2380 MeV, a width of Γ≈ 70 MeV, and an isospin-spin-parity of I(J^P)=0(3^+) <cit.>.

Since the mass of d^*(2380) is away from the thresholds of the ΔΔ, Δ N π, and NN ππ channels, the threshold effect is expected to be smaller than that in some exotic XYZ states<cit.>. The structural uncertainty in studying d^*(2380) would thus be much smaller. On the other hand, although the observed mass is higher, but not much higher, than the Δ N π and NN ππ thresholds, its width is only 70 MeV, which is much smaller than the width of two Δs.
The fact that the width of d^*(2380) is remarkably small excludes the scenario of a naïve ΔΔ molecular structure, where the Δs are color-singlet particles, and indicates that the effect of the hidden-color channel should be significant<cit.>. Due to these extraordinary properties, d^*(2380) becomes a good platform to reveal information about new structures in the hadronic system.

The properties of dibaryon states were first discussed by Dyson and Xuong in 1964 in the framework of SU(6) symmetry, where no dynamics was considered <cit.>. Since then, various theoretical investigations on dibaryons have been performed. Recently, Gal and Garcilazo studied the π N Δ system in a Faddeev-type three-body calculation and dynamically generated a pole whose mass and width are close to the data of WASA, although some approximations were employed <cit.>. In Ref. <cit.>, H. Huang et al. investigated the binding behavior of the ΔΔ system in a coupled-channel calculation in the framework of the chiral SU(2) model and obtained a binding energy of about 71 MeV and a width of about 150 MeV, which is much larger than the reported data. Even in a QCD sum rule calculation, one can get a mass of 2.4±0.2 GeV<cit.>. However, a recent calculation by using a constituent quark model with the one-gluon-exchange (OGE) and confinement interactions only showed that the six u-d quark system with the [6] symmetry in the orbital space should not be bound <cit.>.

It is noteworthy that in a much earlier calculation in the chiral SU(3) quark model, the binding property of the ΔΔ system with I(J^P)=I(S^P)=0(3^+), where I, J (or S), and P stand for the isospin, spin, and parity, respectively, was studied by including a hidden-color (CC) component, and a bound state with a binding energy of 40-80 MeV relative to the threshold of the ΔΔ channel was predicted<cit.>. After the new discovery by the WASA-at-COSY collaborations, more detailed calculations for such a state have been performed on the basis of the chiral SU(3) quark model and the extended chiral SU(3) quark model<cit.>. In the framework of the Resonating Group Method (RGM), the mass and wave function of the state are obtained by dynamically solving the coupled-channel equations, where the coupling of the ΔΔ channel with a hidden-color channel has been considered. The partial decay widths of the d^* → d ππ, d^* → NN ππ and d^* → NN π processes are evaluated in terms of the extracted wave function, and a total width of about 71 MeV for d^*, which coincides with the averaged experimental value of 75 MeV, is obtained <cit.>. It is shown that due to a large CC component in the system, the resultant mass and width are compatible with the data; namely, such a component plays an essential role in interpreting the observed characters of d^*, especially its narrow width. Thus, one would conjecture that d^*(2380) might be a hexaquark dominated exotic state.

Inspired by the large CC component in d^*(2380) and its small size in the coordinate space, it is reasonable to study the (IS)=(03) six-quark system in an alternative model space with a single cluster configuration (SCC). On the other hand, according to Harvey's relation from group theory <cit.>, in a six-quark system, single-cluster configurations and two-cluster configurations (TCC) can be transformed into each other via a Fierz transformation.
In this sense, if d^*(2380) is a hexaquark dominated state, the major characters obtained in the TCC calculation, e.g., the binding behavior and the narrow width, should also appear in the SCC calculation. This is because, by re-arranging the form of the wave function obtained in the TCC calculation, one finds that the main component in TCC is a genuine six-quark configuration (0s)^6[6]_orb[111111]_SIC, a so-called hexaquark configuration. Therefore, the main aim of this paper is to see whether SCC has properties similar to those obtained in the TCC calculation. In this work, we re-study such a six-quark system in a SCC calculation with the same chiral SU(3) quark model and the same model parameters. The trial wave function consists of a (0s)^6[6]_orb[111111]_SIC component together with a component with a 2ħω excitation, which is orthogonal to the wave function of the excited center of mass motion.

It should particularly be emphasized that due to the importance of the chiral symmetry in the strong interaction, such a symmetry should be respected by the Lagrangian of the hadronic system, which leads to the well-known σ model <cit.>. In the QCD-inspired constituent quark model, the spontaneous symmetry breaking of the vacuum generates the Goldstone bosons and, consequently, the constituent quark mass. Based on this Goldstone theory, the Goldstone boson has to be introduced if a constituent quark model is adopted. That is why a chiral SU(3) constituent quark model was proposed. With such a model, most data on the ground state properties of baryons, the baryon spectrum, the baryon-baryon scattering phase shifts and cross sections, and some binding behaviors of two-hadron systems, for instance the deuteron and the H particle, can be explained to quite a good extent <cit.>, although this model is somehow a preliminary attempt at modeling the non-perturbative effect of QCD (NPQCD). However, if in a constituent quark model the inter-quark interaction includes only the OGE and confinement terms (the naive OGE quark model), the scattering and binding behaviors between nucleons might not be reasonably explained, because the medium-range NPQCD effect, which is described by the Goldstone boson exchange in our chiral constituent quark model, is missing. Therefore, another goal of this paper is to see whether, if the interactions arising from the Goldstone boson exchange are incorporated into the naïve OGE quark model, the conclusion in Ref. <cit.> could be changed.

The paper is organized as follows. In Sect. <ref>, the formalism for both the interaction and the wave functions is briefly introduced. The results and discussions are given in Sect. <ref>. Finally, a short summary is provided in Sect. <ref>.

§ BRIEF FORMALISM

§.§ Interaction

The interaction Lagrangian between the quark and chiral fields in the chiral SU(3) constituent quark model can be written as

L_I^ch = -g_chψ̅(∑_a=0^8 λ_a σ_a + i γ_5 ∑_a=0^8 λ_a π_a) ψ,

where g_ch is the coupling constant of the quark with the chiral field, ψ is the quark field, and σ_a and π_a (a=0,1,...,8) are the scalar and pseudo-scalar nonet chiral fields, respectively. Then, the interaction Hamiltonian can be obtained as

H_I^ch = g_ch F(q^2) ψ̅(∑_a=0^8 λ_a σ_a +i γ_5 ∑_a=0^8 λ_a π_a) ψ,

where the form factor F(q^2) is introduced to imitate the structures of the chiral fields. The form of F(q^2) is usually taken as

F(q^2)= (Λ^2/(Λ^2+q^2))^1/2,

with Λ the cutoff mass, which corresponds to the scale of the chiral symmetry breaking<cit.>.
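To visualize the role of the cutoff in this form factor, here is a two-line Python check (ours; the numerical value of Λ below is purely illustrative and is not taken from this work, whose actual cutoff is fixed in Table <ref>):

import numpy as np

Lambda = 1100.0                                     # MeV; illustrative value only
q = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])   # momentum transfer in MeV
F = np.sqrt(Lambda**2/(Lambda**2 + q**2))           # F(q^2) = (Lambda^2/(Lambda^2+q^2))^(1/2)
print(np.round(F, 3))  # monotonic suppression of the chiral vertex at large q^2

The printed values decrease monotonically from 1 toward 0, showing how the form factor cuts off the chiral interaction above the chiral-symmetry-breaking scale.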
From this Hamiltonian, the chiral-field-induced quark-quark interactions V^σ_a and V^π_a, which mainly provide the medium-range interaction from NPQCD, can easily be derived. To reasonably describe the short-range interaction from perturbative QCD (pQCD), the one-gluon-exchange (OGE) interaction V^OGE is still employed. It should be emphasized that no double counting occurs between the OGE and the chiral-field-induced interactions, because the former is a short-range interaction from pQCD while the latter describes the medium-range interaction from NPQCD. Meanwhile, a phenomenological confining potential V^conf is again adopted to account for the long-range interaction from NPQCD. Consequently, the total Hamiltonian of a six-quark system in the chiral SU(3) quark model can be written as

H=∑_i=1^6 T_i -T_G + ∑_j>i=1^6(V_ij^OGE +V_ij^conf+V_ij^ch),

where T_i and T_G are the kinetic energy operators of the i-th quark and of the center of mass motion (CM), respectively, and V_ij^α with α= OGE, conf, ch denote the OGE, confinement, and chiral-field-induced interactions between the i-th and j-th quarks, respectively, with

V_ij^ch=∑_a=1^8 V_ij^σ_a + ∑_a=1^8 V_ij^π_a.

The explicit expressions of these potentials can be found in Ref. <cit.>. In this work, a chiral SU(3) quark model with a linear confining potential is employed to reveal the binding character of the concerned six-quark system with a SCC structure, and the corresponding model parameters are listed in Table <ref>. In that table, the coupling constant g_ch of the chiral field with quarks is determined by the experimental value of the NNπ coupling constant g_NNπ; the coupling constant g_u of the gluon with quarks is fixed by the mass difference between N and Δ; the confining strength a_uu^c is obtained by satisfying the stability condition of the nucleon (N); the zero-point energy a_uu^c0 is fixed by the mass of N; and the masses of the exchanged bosons are chosen from the empirical masses of the relevant mesons and by fitting the data of the N-N scattering and the binding energy of the deuteron. These values are exactly the same as those in our previous two-channel RGM calculations except for the values of a^c_uu and a^c0_uu, because here the quadratic confinement is replaced by a linear one<cit.>.

The reason for choosing a linear confining potential is that, due to the NPQCD effect, the confining potential prefers a linear form rather than a quadratic one. According to lattice calculations, it even tends to a color-screened form whose strength is weaker than that of the linear one at larger separations between quarks. Moreover, in hadron spectrum studies, because the mass scale is about a GeV, the NPQCD effect is surely non-negligible; as a consequence, the spectrum will be sensitive to the form of the confining potential, especially in the SCC calculation. In order to provide a meaningful prediction for the binding behavior of d^*, it is better to take a linear form, or even a color-screened form, for the confining potential. It should be specially mentioned that in the nucleon-nucleon case, since the inter-cluster interaction intervenes between two color singlets, the choice of a different confining potential will not cause visible effects in the N-N interaction. Namely, using a linear confining potential instead of a quadratic one will not affect either the scattering phase shifts or the binding results between nucleons<cit.>.
Based on the above reasoning, we can be confident that the model with a linear confining potential still possesses predictive power.

§.§ Wave function

Now we select the trial wave function of the six-quark system in the model space of SCC. The ground state of the six-quark system with (IS)=(03) and L=0 in this model space has the symbolic form

ψ_1=((0s)^6[6]_orb[111111]_SFC)_(IS)=(03),

where (0s) represents a (0s) orbital wave function of the harmonic oscillator with b_1 being the size parameter, the total orbital wave function has a [6] symmetry, and the total wave function in the spin-flavor-color space has a [111111] symmetry. As shown in Ref. <cit.>, this configuration is not adequate to describe this system; therefore, components with higher excitations should be included. On the other hand, the result of the TCC calculation shows that the CC component in the wave function of d^* has a rather large fraction, about 2/3. By re-organizing such a wave function (refer to the right panel of Fig. <ref> in Ref. <cit.>) to form a ((0s)^6[6]_orb[111111]_SIC)_(IS)=(03) type wave function, one sees that the obtained wave function has a large fraction of about 80% in the total wave function. This implies that the main character of the structure observed by the WASA-at-COSY collaboration should also appear in the SCC calculation. If the structure proposed in our previous TCC calculation is reasonable, the curve in the right panel of Fig. <ref> tells us that one needs at least an additional component of the wave function with one radial node, namely a (1s) radially excited wave function.

Thus, in the lowest-order approximation, we could adopt an additional wave function with a 2ħω excitation to compensate for the inadequacy of ψ_1 in describing d^*. Now, we pick up all the wave functions that have a 2ħω radial excitation,

((0s)^5(1s)[6]_orb[111111]_SIC)_(IS)=(03),

and

((0s)^4(0p)^2[6]_orb[111111]_SIC)_(IS)=(03),

where (0p) and (1s) denote the orbital wave functions of a quark moving in the (0p) and (1s) orbits, respectively, which also take the harmonic oscillator form with a size parameter of b_2. Their orbital parts can be written explicitly as

(0s)^5(1s)[6]_orb = √(1/6)∑_i=1^6[(0s)^5(1s)_i],

and

(0s)^4(0p)^2[6]_orb = √(1/15)∑_i<j^6[(0s)^4(0p)^2_ij].

Their linear combination can form a configuration which is orthogonal to ψ_1 and to the excited wave function of the center of mass motion (CM), as long as b_1=b_2. Then, the supplemented configuration can be taken as

ψ_2= [(√(5/6) (0s)^5(1s)[6]_orb + √(1/6) (0s)^4(0p)^2[6]_orb) [111111]_SIC]_(IS)=(03),

where the size parameter b_2 is chosen as a variational parameter (later, b_1 can be varied as well to obtain an even more stable solution). Finally, the trial wave function can be expressed as

Ψ_6q = c_1ψ_1 + c_2 ψ_2,

where c_1 and c_2 are the mixing coefficients. It should particularly be emphasized that ψ_1 and ψ_2 are not orthogonal in general, except when b_2=b_1.

§ RESULTS AND DISCUSSIONS

As mentioned in the previous section, because ψ_2 is employed to complement ψ_1, ψ_2 with a 2ħω excitation can be regarded as an inner structural deformation of the concerned six-quark system. Generally, the size of ψ_2 should be larger than that of ψ_1, namely b_2>b_1 is required. The eigenvalue problem of the SCC of the six-quark system with given values of b_1 and b_2 should be considered first.
Due to the non-orthogonality of ψ_1 and ψ_2, a generalized eigenvalue equation (secular equation),

∑_j=1^2 ⟨ ψ_i| H |ψ_j ⟩ c_j = E ∑_j=1^2 ⟨ ψ_i|ψ_j ⟩ c_j            (i=1,2),

should be solved with certain values of b_1 and b_2, for instance b_1=0.5 fm, as in the TCC calculation, and a value greater than 0.5 fm for b_2. Changing the value of b_2, the obtained eigenvalue becomes a function of b_2. The actual value of b_2 should be determined by the variational procedure, so that the system has its minimum mass. To ensure that the system is even more stable, b_1 should further be regarded as a changeable parameter, namely a two-parameter variation,

∂⟨ Ψ_6q| H |Ψ_6q ⟩/∂ b_1 = ∂⟨ Ψ_6q| H |Ψ_6q ⟩/∂ b_2 = 0,

should be performed. Now the obtained mass of d^* depends on the size parameters b_1 and b_2. We plot such a dependence in Fig. <ref>. From this figure, one sees that there does exist a stable point with respect to b_1 and b_2 in the b_1-b_2 plane, where b_1=0.5 fm, b_2=0.61 fm, and the mass of the system reaches its minimum value of 2356 MeV. One also finds that although we vary b_1 and b_2 simultaneously, the resultant value of b_1 is very close to its starting value, which was used in the determination of the model parameters. This reflects the reliability of the result.

For comparison, we also calculate the mass of the system with b_1=0.5 fm when the chiral-field-induced interaction V^ch_ij is absent. The obtained mass is about 2481 MeV. A comparison between these two masses implies that by taking into account the potentials induced by the chiral fields, the effect of NPQCD in the medium range is reasonably included. As a consequence, the single-cluster six-quark system with the orbital [6] symmetry is indeed bound with respect to the ΔΔ threshold, and the mass of the system is close to the experimental data, which contradicts the conclusion in Ref. <cit.>, where the important medium-range interaction due to the chiral symmetry consideration is missing. This also means that such a system is likely a hexaquark dominated state.

Moreover, we obtain the wave function of the bound state, with its coefficients c_1 and c_2 being 0.849 and -0.727, respectively. Using this wave function, we can in principle estimate the decay width of the obtained state d^*. However, the obtained width is too small to explain the data. This is because the hypothesized trial wave function Ψ_6q cannot closely approach the reality of the observed structure, due to an improper truncation in adopting the inner structural deformation wave functions. Actually, we find that the wave functions with a size parameter of 0.5-1.1 fm are the most important pieces, which would provide the major contribution to the width. However, in order to use the simplest possible model to describe the binding behavior of the system without losing its major character, the pieces describing the inner structural deformation with larger size parameters are absent in our trial wave function; namely, the adopted harmonic oscillator form of the single-cluster trial wave function cannot properly describe the real behavior of the system in the surface region. A more sophisticated study should be carried out in the future.
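To make the variational procedure above concrete, here is a minimal Python sketch (ours) of the secular equation in the two-dimensional non-orthogonal basis {ψ_1, ψ_2}. The matrix elements below are made-up placeholders, not the actual quark-model integrals, which in the real calculation depend on (b_1, b_2):

import numpy as np
from scipy.linalg import eigh

# Hypothetical matrix elements at a single (b_1, b_2) point; in the full
# calculation H_ij = <psi_i|H|psi_j> and N_ij = <psi_i|psi_j> come from
# the quark-model integrals described in the text.
H = np.array([[2481.0,  150.0],
              [ 150.0, 2550.0]])   # MeV, illustrative numbers only
N = np.array([[1.00, 0.30],
              [0.30, 1.00]])       # overlap matrix (psi_1 and psi_2 non-orthogonal)

E, c = eigh(H, N)                  # solves sum_j H_ij c_j = E sum_j N_ij c_j
print("lowest eigenvalue:", round(E[0], 1), "MeV; coefficients (c_1, c_2):", c[:, 0])
# The full calculation repeats this on a grid of (b_1, b_2) and keeps the
# minimum of E[0]; that scan is how the stable point at b_1 = 0.5 fm and
# b_2 = 0.61 fm is located.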
Nevertheless, our model, with a rather simple but meaningful six-quark structure, a basic ground state component with the spatial [6] symmetry plus an inner structural deformation component with a 2ħω excitation which is also in the spatial [6] symmetry, still gives the main characters of d^*: in this case, its mass is about 46 MeV higher than the Δ Nπ threshold but about 108 MeV lower than the ΔΔ threshold, although it is about 24 MeV smaller than the observed value, and its width is much smaller than the total width of two Δs, of about 230 MeV.

It should be noted that the 2ħω excited state may have another symmetry structure, i.e., a wave function with a spatial [42] symmetry. However, because the wave functions with spatial [6] and [42] symmetries are orthogonal to each other, and the central force preserves the spatial [6] symmetry, inclusion of the [42] symmetry wave function will not affect the final result. Therefore, we disregard such a configuration in this preliminary calculation. We also leave out the states with excitation energies higher than 2ħω, due to their larger kinetic energies and, consequently, smaller influence on the mass of d^*(2380).

The mirror state of d^*, whose quantum numbers are (IS)=(30), is calculated with the same trial wave function as well. We plot the mass dependence on b_1 and b_2 in Fig. <ref>. The stable point occurs at b_1 = 0.5 fm and b_2=0.61 fm with a mass of 2412 MeV, which is around the ΔΔ threshold.

§ SUMMARY

In this paper, the intrinsic binding behavior of the experimentally observed d^*(2380) is studied in the SCC approximation within the chiral SU(3) constituent quark model. For simplicity, but keeping the major character, the trial wave function is chosen as a combination of a basic ground state component with the spatial [6] symmetry and an inner structural deformation component with a 2ħω excitation which is also in the spatial [6] symmetry. A two-parameter variational calculation is performed in order to reach a stable state where the mass of the system has a minimum value. It is shown that there does exist a stable point; the corresponding b_1 and b_2 are 0.5 fm and 0.61 fm, respectively, and the energy is about 2356 MeV, which is qualitatively consistent with the observed value. This result contradicts that in Ref. <cit.>, where the interaction between quarks involves the OGE and confinement potentials only. This is because, in the QCD-inspired constituent quark model, the mass of the constituent quark comes from the spontaneous breaking of the chiral symmetry; thus the chiral symmetry must be considered. As a practical way, the chiral-field-induced potential, which describes the medium-range NPQCD effect, should be introduced into the constituent quark model. That is why with our chiral SU(3) constituent quark model a much lower mass of the six-quark system can be obtained. Moreover, although the estimated decay width of the system does not contradict the data, it is too small to match the observed value, because the hypothesized trial wave function is too simple to describe the inner structural deformation of the system, especially in the surface region, where the contribution from the tail of the wave function of the system dominates the width. In a word, the binding behavior of a single-cluster six-quark system with (IS)=(03) is qualitatively compatible with the result from the RGM calculation. This means that the hexaquark dominated picture may be a promising picture for d^*. For completeness, the mirror state of d^* is also studied.
The mass of this state is about 2412 MeV, which is around the ΔΔ threshold.

ACKNOWLEDGEMENTS We would like to thank Qiang Zhao and H. Clement for their useful and constructive discussions. This project is partly supported by the National Natural Science Foundation of China under Grants Nos. 10975146, 11165005, 11475181, 11475192, 11565007 and 11521505, and by the IHEP Innovation Fund under No. Y4545190Y2, as well as by the DFG and the NSFC through funds (11621131001) provided to the Sino-German CRC 110 "Symmetries and the Emergence of Structure in QCD". Support by the China Postdoctoral Science Foundation under Grant No. 2016M601133 is also appreciated.

99Bashkanov:2008ih M. Bashkanov et al., Double-Pionic Fusion of Nuclear Systems and the ABC Effect: Approaching a Puzzle by Exclusive and Kinematically Complete Measurements, Phys. Rev. Lett. 102, 052301 (2009).
Adlarson:2011bh P. Adlarson et al. (WASA-at-COSY Collaboration), ABC Effect in Basic Double-Pionic Fusion — Observation of a new resonance?, Phys. Rev. Lett. 106, 242302 (2011).
Keleta:2009wb S. Keleta et al., Exclusive measurement of two-pion production in the dd → ^4Heππ reaction, Nucl. Phys. A 825, 71 (2009).
Adlarson:2012au P. Adlarson et al. (WASA-at-COSY Collaboration), Abashian-Booth-Crowe resonance structure in the double pionic fusion to ^4He, Phys. Rev. C 86, 032201 (2012).
Adlarson:2013usl P. Adlarson et al. (WASA-at-COSY Collaboration), Measurement of the pn → ppπ^0π^- reaction in search for the recently observed resonance structure in dπ^0π^0 and dπ^+π^- systems, Phys. Rev. C 88, 055208 (2013).
Adlarson:2014tcn P. Adlarson et al. (WASA-at-COSY Collaboration), Measurement of the np → npπ^0π^0 Reaction in Search for the Recently Observed d^*(2380) Resonance, Phys. Lett. B 743, 325 (2015).
Adlarson:2014xmp P. Adlarson et al. (WASA-at-COSY Collaboration), ABC effect and resonance structure in the double-pionic fusion to ^3He, Phys. Rev. C 91, 015201 (2015).
Adlarson:2014pxj P. Adlarson et al. (WASA-at-COSY Collaboration), Evidence for a New Resonance from Polarized Neutron-Proton Scattering, Phys. Rev. Lett. 112, 202301 (2014).
Adlarson:2014ozl P. Adlarson et al. (WASA-at-COSY Collaboration), Neutron-proton scattering in the context of the d^*(2380) resonance, Phys. Rev. C 90, 035204 (2014).
Clement:2016vnl H. Clement, "On the History of Dibaryons and their Final Discovery", Prog. Part. Nucl. Phys. 93, 195 (2017).
Zhusl:2016 Hua-Xing Chen, Wei Chen, Xiang Liu, and Shi-Lin Zhu, The hidden-charm pentaquark and tetraquark states, Phys. Rept. 639, 1 (2016).
Olsen:2015zcy S. L. Olsen, XYZ Meson Spectroscopy, PoS Bormio 050 (2015).
Bashkanov:2013cla M. Bashkanov, S. J. Brodsky and H. Clement, Novel Six-Quark Hidden-Color Dibaryon States in QCD, Phys. Lett. B 727, 438 (2013).
Yuan:1999pg X. Q. Yuan, Z. Y. Zhang, Y. W. Yu and P. N. Shen, Deltaron dibaryon structure in chiral SU(3) quark model, Phys. Rev. C 60, 045203 (1999).
Dyson:1964xwa F. Dyson and N. H. Xuong, Y=2 States in SU(6) Theory, Phys. Rev. Lett. 13, 815 (1964).
Gal:2013dca A. Gal and H. Garcilazo, Three-Body Calculation of the Delta-Delta Dibaryon Candidate D_03(2370) at 2.37 GeV, Phys. Rev. Lett. 111, 172301 (2013).
Gal:2014zia A. Gal and H. Garcilazo, Three-body model calculations of NΔ and ΔΔ dibaryon resonances, Nucl. Phys. A 928, 73 (2014).
Huang:2013nba H. Huang, J. Ping and F. Wang, Dynamical calculation of the ΔΔ dibaryon candidates, Phys. Rev. C 89, 034001 (2014).
Chen:2014vha H. X. Chen, E. L. Cui, W. Chen, T. G. Steele and S. L.
Zhu, QCD sum rule study of the d^*(2380), Phys. Rev. C 91, 025204 (2015).
Park:2015nha W. Park, A. Park and S. H. Lee, Dibaryons in a constituent quark model, Phys. Rev. D 92, 014037 (2015).
Dai:2005kt L. R. Dai, Delta Delta dibaryon structure in extended chiral SU(3) quark model, Chin. Phys. Lett. 22, 2204 (2005).
Huang:2014kja F. Huang, Z. Y. Zhang, P. N. Shen and W. L. Wang, Is d^* a candidate for a hexaquark-dominated exotic state?, Chin. Phys. C 39, 071001 (2015).
Dong:2015cxa Y. Dong, P. Shen, F. Huang and Z. Zhang, Theoretical study of the d^*(2380) → dππ decay width, Phys. Rev. C 91, 064002 (2015).
Huang:2015nja F. Huang, P. N. Shen, Y. B. Dong and Z. Y. Zhang, Understanding the structure of d^*(2380) in chiral quark model, Sci. China Phys. Mech. Astron. 59, 622002 (2016).
Dong1 Yubing Dong, Fei Huang, Pengnian Shen, and Zongye Zhang, Phys. Rev. C 94, 014003 (2016).
Dong2 Yubing Dong, Fei Huang, Pengnian Shen, and Zongye Zhang, Phys. Lett. B 769, 223 (2017).
Harvey:1981 M. Harvey, On the Fractional Parentage Expansions of Color Singlet Six Quark States in a Cluster Model, Nucl. Phys. A 352, 301 (1981).
GL:1960nc M. Gell-Mann and M. Lévy, The axial vector current in beta decay, Il Nuovo Cimento 16, 705 (1960).
Zhang:1997ny Z. Y. Zhang, Y. W. Yu, P. N. Shen, L. R. Dai, A. Faessler and U. Straub, Hyperon nucleon interactions in a chiral SU(3) quark model, Nucl. Phys. A 625, 59 (1997).
Kusainov:1991vn A. M. Kusainov, V. G. Neudatchin and I. T. Obukhovsky, Projection of the six quark wave function onto the N N channel and the problem of the repulsive core in the N N interaction, Phys. Rev. C 44, 2343 (1991).
Buchmann:1991cy A. Buchmann, E. Hernandez and K. Yazaki, Gluon and pion exchange currents in the nucleon, Phys. Lett. B 269, 35 (1991).
Henley:1990kw E. M. Henley and G. A. Miller, Excess of anti-D over anti-U in the proton sea quark distribution, Phys. Lett. B 251, 453 (1990).
Liumsh:2015 Liu Ming-Sheng, Non-strange six-quark states and the structure of Z_c particle in a chiral quark model, Master Dissertation, Inner Mongolia University, China, 2015.
http://arxiv.org/abs/1704.08503v1
{ "authors": [ "Qi-Fang Lü", "Fei Huang", "Yu-Bing Dong", "Peng-Nian Shen", "Zong-Ye Zhang" ], "categories": [ "nucl-th", "hep-ph", "nucl-ex" ], "primary_category": "nucl-th", "published": "20170427110019", "title": "Six-quark structure of $d^*(2380)$ in chiral constituent quark model" }
http://arxiv.org/abs/1704.08269v1
{ "authors": [ "Marika Taylor", "William Woodhead" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170426180138", "title": "Non-conformal entanglement entropy" }
^1Department of Photonics, National Chiao Tung University, Hsinchu 300, Taiwan ^2Institute of Photonics Technologies, National Tsing Hua University, Hsinchu 300, Taiwan ^3Institute for Complex Systems, National Research Council (ISC-CNR), Rome 00185, Italy ^4Department of Physics, University Sapienza, Rome 00185, Italy ^5Physics Division, National Center for Theoretical Sciences, Hsinchu 300, Taiwan

The use of geometrical constraints opens many new perspectives in photonics and in fundamental studies of nonlinear waves. By implementing surface structures in vertical cavity surface emitting lasers as manifolds for curved space, we experimentally study the impact of geometrical constraints on nonlinear wave localization. We observe localized waves pinned to the maximal curvature in an elliptical ring, and confirm the reduction in the localization length of the waves by measuring near and far field patterns, as well as the corresponding dispersion relation. Theoretically, analyses based on a dissipative model with a parabolic curve give remarkably good agreement with the experimental measurements of the transition from delocalized to localized waves. The introduction of curved geometry allows lasing modes to be controlled and designed in the nonlinear regime.

Lasing on nonlinear localized waves in curved geometry
Kou-Bin Hong^1, Chun-Yan Lin^2, Tsu-Chi Chang^1, Wei-Hsuan Liang^1, Ying-Yu Lai^1, Chien-Ming Wu^2, You-Lin Chuang^2,5, Tien-Chang Lu^1, Claudio Conti^3,4, and Ray-Kuang Lee^2,5
=========================================================================================================================================================================================

Studies of the effect of geometry on wave propagation can be traced back to Lord Rayleigh and the early theory of sound <cit.>, and extend to recent investigations of, for example, analogs of gravity in Bose-Einstein condensates <cit.>, optical event horizons in fiber solitons <cit.>, Anderson localization <cit.>, random lasing <cit.>, and celestial mechanics in metamaterials with transformation optics <cit.>. In addition to linear optics in curved space <cit.>, geometrical constraints affect shape-preserving wave packets, including localized solitary waves <cit.>, extended Airy beams <cit.>, and shock waves <cit.>. Curvature also triggers localization by trapping the wave in extremely deformed regions <cit.>. The effect of geometry on nonlinear phenomena suggests that a confining structure may alter the conditions for observing spatial or temporal solitary waves, which arise from the nonlinear compensation of linear diffraction or dispersion effects <cit.>. In general, self-localized nonlinear waves require an extra formation power to bifurcate from the linear modes <cit.>, but geometrical constraints may reduce or cancel this threshold. In this work, we use vertical cavity surface emitting lasers (VCSELs) as a platform and fabricate a series of curved surfaces, including circular-ring and elliptical-ring cavities, as well as a reference "cold" device with no trapping potential. By introducing symmetry breaking in the geometry, i.e., passing from a circular ring to an elliptical one, we experimentally realize lasing on nonlinear localized waves when the ellipticity of the ring is larger than a critical value <cit.>.
With different curved geometries, we measure the corresponding lasing characteristics both in the near and far fields, as well as the dispersion relation, to verify that the localized modes are pinned at the maximal curvature. With state-of-the-art semiconductor technologies, microcavities in VCSELs offer small mode volumes and ultrahigh quality factors, with applications ranging from sensors, optical interconnects, printers, and optical storage to quantum chaos <cit.>. Our results clearly illustrate that with a large curvature in the geometry, through an effectively stronger confinement, one can easily access nonlinear waves, opening the way to a new generation of nonlinear-mode lasers operating at power levels comparable to linear devices. A self-focusing Kerr nonlinearity supports stationary localized waves, named crescent waves, pinned to a boundary of given curvature <cit.>. In the case of a symmetric annular ring <cit.>, some of us reported random localization in the azimuthal direction, observed above a threshold formation power and bifurcating from the linear modes. By introducing symmetry breaking in the geometry, as in the case of the elliptical ring considered below, the interplay of nonlinearity and geometry modifies the localization position, the localization length, and the operation current. Here, we fabricate a series of electrically driven VCSELs with or without a predefined surface potential, as shown in Fig. 1. The epitaxial VCSEL structure is grown on an n^+ GaAs substrate using metal organic chemical vapor deposition. The top and bottom distributed Bragg reflectors (DBRs) are constructed from 17 and 36 pairs of Al_0.1Ga_0.9As/Al_0.9Ga_0.1As bilayers, respectively. The one-lambda cavity consists of n- and p-cladding layers and undoped triple quantum wells (MQWs). Each quantum well is about 8 nm thick and capped by an 8 nm Al_0.3Ga_0.7As barrier. Modulation doping is used in the mirror layers to reduce the differential resistance while maintaining a low free-carrier absorption loss. The 30 μm (in diameter) emission window is defined by the oxide aperture (AlO_x), while a SiN_x protection layer is deposited on the p-DBR to avoid unwanted oxidation of the epi-layer. After the conventional fabrication process, we employ focused ion beam etching to define the required curved structure, as shown in Fig. 1(b) and (c). Here, a circular-ring or an elliptical-ring potential with varying ellipticity is used as the curved geometry, with an etching depth of 0.6 μm. For the circular-ring potential, the inner and outer radii are 7 and 8 μm, respectively, as shown in the top-view scanning electron microscope (SEM) image in Fig. 1(b). Effectively, such a curved geometry provides confinement along the azimuthal direction <cit.>. With the help of this geometrical potential, only when the ellipticity of the elliptical ring, i.e., the ratio between its major and minor axes, is large enough (greater than 1.5 in our samples) can the localization length of the supported solutions along the curvature, as well as the threshold power at bifurcation, be significantly reduced. When the ellipticity is not large enough, the nonlinearity (injection current) required for strong wave localization is very high (not shown here). To reduce the bifurcation threshold, the inner and outer radii of the semi-major axis are designed to be 9 and 11 μm, respectively, while the inner and outer radii of the semi-minor axis are 5.5 and 7.5 μm, respectively. The corresponding ellipticity, i.e., the ratio between the major and minor axes, is 1.5.
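As a quick check of these design values, the small sketch below computes the ellipticity and the radius of curvature at the semi-major vertex from the midline semi-axes (the averages of the inner and outer radii; note that the theory section later rounds the semi-minor axis to b = 6 μm).

```python
# Midline semi-axes of the elliptical ring (average of inner/outer walls), in um
a = (9.0 + 11.0) / 2   # semi-major axis -> 10.0
b = (5.5 + 7.5) / 2    # semi-minor axis -> 6.5

ellipticity = a / b    # ratio of major to minor axis, ~1.5
# For an ellipse, the curvature at the end of the semi-major axis is a/b**2,
# so the local radius of curvature there is R = b**2/a; the parabolic fit
# y = k x**2 used in the theory has k = a/(2 b**2), since its vertex curvature is 2k.
R = b**2 / a
print(f"ellipticity = {ellipticity:.2f}, R = {R:.2f} um, k = {a / (2 * b**2):.3f} 1/um")
```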
The elliptical ring provides an effective refractive-index potential modulation for the localization of the crescent wave. We also fabricate a cold cavity, used as a reference sample, as shown in Fig. 1(d). At room temperature, near field images of the output emission are collected by a charge-coupled device (CCD) camera through a 20X objective lens with a numerical aperture of 0.42. Figures 2(a-f) show near field images below and above the threshold currents (first and second rows) for the three surface potentials in Fig. 1(b-d). Below threshold, the spontaneous emission patterns follow the cavity shape, modulated by the surface index potential, as shown in Fig. 2(a-c) for the circular-ring, elliptical-ring, and cold cavities, respectively. In the lasing regime, for the cold cavity, a whispering-gallery mode (WGM) is excited (Fig. 2(f)), as is usually observed in VCSELs with large apertures <cit.>. Figure 2(d) shows the radiation pattern for the circular-ring structure, composed of various spots due to the azimuthal instability <cit.>. Notably, a strongly localized wave appears at the semi-major axis of the elliptical ring in Fig. 2(e), which clearly corresponds to the maximal curvature. Here, as seen from the non-uniform radiation pattern in Fig. 2(b), device imperfections are believed to be the reason why the wave localizes on the left side. In addition, Fig. 2(g-i) shows the corresponding output intensity versus current (L-I) curves, revealing no difference in threshold current between the elliptical-ring potential and the reference sample, i.e., about 6 mA. The measured lasing spectra are shown in the corresponding insets; all devices lase at the same wavelength of 848 nm, but with slightly different linewidths (all smaller than 1 nm). In terms of the circumference, denoted Ω (μm), we report the measured intensity distributions in the near field images of our electrically driven GaAs-based VCSELs in Fig. 3(a) and (b), respectively. For the circular-ring potential, as shown in Fig. 3(a), an almost uniform distribution is clearly seen when operating below the threshold current, I < I_th, while some oscillations in the intensity distribution appear when operating above the threshold current, I > I_th. As mentioned above, it is the azimuthal instability that modulates the intensity distribution along the circumference. For the elliptical-ring potential, however, as shown in Fig. 3(b), the intensity distribution changes from a uniform one below threshold to a localized peak at Ω = 27 μm. In order to give a quantitative characterization, by averaging the intensity distribution U(Ω) normalized to its maximum value U_max, we introduce a localization factor L_m as the ratio to the whole circumference, i.e., L_m ≡ (1/L_C) ∮_L_C [U(Ω)/U_max] dΩ, where L_C is the circumference of the curved potential. By this definition, a small localization factor means stronger localization, while a larger localization factor, up to one, means the wave is fully extended. We show L_m in Fig. 3(c) for different normalized injection currents, I/I_th. Clearly, for the circular-ring potential, the localization factor remains almost the same, L_m ≈ 0.9 to 0.82, whether we operate below or above the threshold current. For the elliptical ring, however, a sharp reduction in the localization factor is found, from L_m ≈ 0.78 (≈42 μm) to 0.21 (≈10 μm), when the injection current exceeds the threshold. This strong reduction in the localization factor demonstrates that the elliptical-ring potential indeed induces localization of the lasing modes.
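A minimal sketch of this localization factor, applied to toy intensity profiles (a uniform ring and a profile peaked near Ω = 27 μm), is given below.

```python
import numpy as np

def localization_factor(U, Omega):
    """L_m = (1/L_C) * integral of U/U_max over the ring circumference Omega."""
    L_C = Omega[-1] - Omega[0]
    return np.trapz(U / U.max(), Omega) / L_C

Omega = np.linspace(0.0, 54.0, 541)                  # ~54 um circumference
uniform = np.ones_like(Omega)                        # fully extended wave
peaked = np.exp(-0.5 * ((Omega - 27.0) / 2.0) ** 2)  # localized near 27 um
print(localization_factor(uniform, Omega))           # -> 1.0 (extended)
print(round(localization_factor(peaked, Omega), 2))  # -> ~0.09 (strongly localized)
```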
In addition to the near field intensity distributions shown in Fig. 2, we also perform far field intensity measurements for the different curved potentials. The experimental setup for the far field image measurement is similar to that for the near field. Light radiated from the sample is focused at the Fourier plane of a 20X objective lens, and an additional lens is used to project the Fourier image onto the CCD. Above the threshold currents, the far field patterns of both the circular-ring and cold (WGM) cavities are symmetric <cit.>, as shown in Fig. 4(a) and 4(c), respectively. On the contrary, as shown in Fig. 4(b), the far field pattern of the wave localized at the semi-major axis of the elliptical ring becomes elongated, reflecting the symmetry breaking in the near field. Moreover, we measure angle-resolved electroluminescence (AREL) spectra, i.e., the emission energy at different angles, which also gives the corresponding dispersion relation, E-k, of the system. The AREL measurement system is composed of a collimator, a fiber (with a core radius of 600 μm), a rotary stage, and a spectrometer. The collimator is placed 8 cm from the sample, connected to the fiber, and rotated using the rotary stage. The spectrometer has a resolution of 0.7, and the angular resolution is 1°. The AREL spectra at different angles can be converted into the E-k curve through the relation between the in-plane wavevector k and the rotation angle θ, i.e., k = (2π/λ) sin θ, with λ the wavelength of light. Below the threshold current, both the circular-ring cavity and the cold cavity have similar parabolic profiles in their E-k curves, as shown in the insets. Above the threshold current, as shown in Fig. 4(d) and 4(f), the output signals in the E-k curves are located around ±10°, but keep the parabolic profile for these two cavities. With mode-selective mirror reflectivities, similar far field patterns located away from 0° have also been reported in single-fundamental-mode AlGaAs VCSELs <cit.>. Nevertheless, in the inset of Fig. 4(e), we show the E-k curve for the elliptical-ring potential, which possesses a band structure with a smaller curvature at the bottom, within angles of ±10°. Compared to the other two cavities, a significant modification of the output dispersion relation is seen for the elliptical-ring potential.
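The angle-to-wavevector conversion used for the E-k maps is a one-liner; the sketch below applies it at the measured lasing wavelength of 848 nm with the 1° angular resolution (the angular range is an assumed example).

```python
import numpy as np

wavelength_um = 0.848                    # lasing wavelength (848 nm)
theta_deg = np.arange(-30.0, 31.0, 1.0)  # collection angles, 1-degree steps

# In-plane wavevector from the collection angle: k = (2*pi/lambda) * sin(theta).
k = (2 * np.pi / wavelength_um) * np.sin(np.radians(theta_deg))

# Stacking the spectrum measured at each angle against k(theta) yields the
# E-k dispersion map; here we just print k at +10 degrees, in rad/um.
print(k[theta_deg == 10.0][0])           # ~1.29 rad/um
```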
In order to theoretically study the effect of curved manifolds on dissipative nonlinear wave localization, we employ a curvilinear coordinate for small angles around the localization point (Ω ≅ 0). As shown by the dashed curve in the inset of Fig. 5(a), we approximate the elliptical potential shape by a parabola k q^2, with curvature k = a/2b^2. Here, the semi-major and semi-minor axes of the elliptical ring are denoted a and b, respectively. To describe the semiconductor microcavity, we adopt the following reduced dissipative wave equation when the system reaches equilibrium <cit.>:

α ∂_t ψ + θ ψ - (α+i) [ -(1+η) + 2C(I-1)/(1+|ψ|^2) ] ψ + (α - i d) ∇_⊥^2 ψ = 0.

In Eq. (<ref>), ψ gives the slowly varying complex envelope of the electric field (scaled to the saturation value), C is the saturable absorption coefficient scaled to the resonator transmission, ∇_⊥^2 is the transverse Laplacian describing diffraction in the paraxial approximation, η is the linear absorption coefficient due to the gain material, θ is the generalized cavity detuning, and d is the diffusion constant of the carriers scaled to the diffraction coefficient. The external injection current is denoted by I, normalized to the threshold current I_th. We assume the corresponding linewidth enhancement factor α is large enough (α ≫ 1) <cit.>, and deal with a simplified dissipative model for the laser mode in the two transverse coordinates <cit.>:

∂_t ψ = -∇_⊥^2 ψ + 2C(I-1) ψ/(1+|ψ|^2).

We then look for stationary solutions, ∂_t ψ = 0, and transform Eq. (3) to the curvilinear coordinate q along the path of the curved potential <cit.>:

-d^2 ψ/dη^2 + V_G(η) ψ = γ ψ/(1+|ψ|^2),

with γ = -2C(I-1) R^2 serving as the "nonlinear eigenvalue", and the radius of curvature R scaled into η = q/R. In Eq. (<ref>), the local curvature furnishes the geometrical potential V_G(η). Specifically, by approximating the profile of the laser by a parabola y = k x^2, with k = 1/2R and ξ = x/R, we have V_G(η) = -1/{4 [1+ξ(η)^2]^3}, along with η = (1/2) ξ √(1+ξ^2) + (1/2) sinh^-1(ξ) defining ξ(η). The supported bound states, i.e., localized modes, are found as solutions of Eq. (4), which can be viewed as a nonlinear eigenvalue equation, with the injection current (I-1) embedded in γ. In analogy with a wave packet ψ in a one-dimensional potential trap, i.e., -ψ_xx + V_G(x)ψ = γψ, the negative eigenvalue γ = -2C(I-1)R^2 plays the role of the binding energy in the trapping potential (γ < 0 is required to support a localized mode). In the linear regime, as calculated numerically, the lowest eigenvalue is γ ≅ -0.1, with an exponentially localized ground state at q=0, ψ ∝ exp(-|q| √(-γ)), and localization length L_linear ≅ 4R <cit.>. Operation above the threshold current, (I-1) > 0, is also needed to guarantee the existence of bound states, since C > 0. We then consider the nonlinear regime with gain saturation, and solve Eq. (<ref>) numerically by increasing the overall energy of the field. Any bound-state solution corresponds to γ < 0 and injection current I = 1 + |γ|/(2C R^2). One can see that the binding energy is divided by a factor of 1+|ψ|^2, which means that the nonlinearity helps to compensate the required binding energy. Profiles of the solutions are shown in Fig. 5(a) for the linear case, and for two nonlinear cases with γ = -0.1 and -0.2. Moreover, in Fig. 5(b), we show the localization length L_loc as a function of the injection current, calculated from the inverse participation ratio: L_loc(I) = R (∫ |ψ|^2 dη)^2 / ∫ |ψ|^4 dη. Note that the localization length first decreases and then asymptotically approaches a constant value as we increase the injection current I, because of the nonlinearity. An increase in the injection current I corresponds to a lower nonlinear eigenvalue γ = -2C R^2 (I-1). One can clearly see that the localization length is significantly reduced for a shorter radius of curvature. Expressing quantities in real-world units, we have a = 10 μm and b = 6 μm in our experiments. In comparison with the data shown in Fig. 3(c), the localization length is estimated to be reduced from 50 μm to 10 μm, in reasonably good agreement with the experimental data (from 42 μm to 10 μm).
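A minimal numerical sketch of this bound-state calculation is given below: it builds the geometric potential of the parabola, solves the linear problem for the starting guess, and then iterates Eq. (4) self-consistently at fixed peak amplitude (the amplitude stands in for the field energy). The grid sizes and the amplitude are arbitrary choices, and a damped iteration may be needed for robust convergence.

```python
import numpy as np

# Grid in the scaled arc-length coordinate eta = q/R.
N, L = 801, 40.0
eta = np.linspace(-L / 2, L / 2, N)
h = eta[1] - eta[0]

# Geometric potential V_G = -1/(4 (1 + xi^2)^3) of the parabolic profile,
# with eta(xi) = xi*sqrt(1+xi^2)/2 + arcsinh(xi)/2 inverted numerically.
xi_f = np.linspace(-40.0, 40.0, 20001)
eta_f = 0.5 * xi_f * np.sqrt(1 + xi_f**2) + 0.5 * np.arcsinh(xi_f)
xi = np.interp(eta, eta_f, xi_f)
V = -0.25 / (1 + xi**2) ** 3

def lowest_mode(V_eff):
    # Dense finite-difference Hamiltonian -d^2/deta^2 + V_eff.
    H = (np.diag(2.0 / h**2 + V_eff)
         - np.diag(np.full(N - 1, 1.0 / h**2), 1)
         - np.diag(np.full(N - 1, 1.0 / h**2), -1))
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0]

A = 1.0                      # fixed peak amplitude, sets the nonlinearity
gamma, psi = lowest_mode(V)  # linear bound state (gamma ~ -0.1) as seed
psi *= A / np.abs(psi).max()
for _ in range(60):          # fixed-point iteration of Eq. (4)
    # Rewrite gamma*psi/(1+psi^2) = gamma*psi - gamma*psi^3/(1+psi^2) and
    # absorb the saturable part into an effective potential.
    gamma, psi = lowest_mode(V + gamma * psi**2 / (1 + psi**2))
    psi *= A / np.abs(psi).max()

R = 6.0**2 / 10.0            # radius of curvature b^2/a, in um (a=10, b=6)
L_loc = R * np.trapz(psi**2, eta) ** 2 / np.trapz(psi**4, eta)
print(f"gamma = {gamma:.3f}, L_loc = {L_loc:.1f} um")
```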
In conclusion, we have demonstrated experimentally that a proper use of geometrical constraints allows lasing on nonlinear waves at a comparable threshold current. Specifically, curvature and nonlinearity together pin the wave localization, with a reduction in the localization length. The comparison of lasing modes in different curved potentials in VCSELs provides striking evidence of this subtle interplay between nonlinearity and geometry, and the near and far field measurements support our findings. Our results demonstrate a significant transition from a delocalized to a localized state induced by the boundary, and open several new ways for the excitation of low-threshold nonlinear states and dynamics for light emission and information processing. Remarkable developments in this research direction include not only nonlinear optics but may also involve fields such as Bose-Einstein condensation and quantum optics.

§ ACKNOWLEDGMENT This work has been supported by the Ministry of Science and Technology, Taiwan, under Contract NSC 102-2221-E-009-156-MY3. CC acknowledges support from the CNR-MOST initiative and the Templeton Foundation (grant number 58277). The authors acknowledge Prof. H. C. Kuo of National Chiao Tung University for the measurement systems, and Prof. Y. S. Kivshar of the Australian National University for useful discussions.

99Rayleigh J. W. S. Rayleigh, The Theory of Sound (Macmillan, London, 1894).
gravity-BEC C. Barcelo, S. Liberati, and M. Visser, "Probing semiclassical analog gravity in Bose-Einstein condensates with widely tunable interactions," Phys. Rev. A 68, 053613 (2003).
optical-horizon Th. G. Philbin, C. Kuklewicz, S. Robertson, S. Hill, F. Konig, and U. Leonhardt, "Fiber-optical analog of the event horizon," Science 319, 1367 (2008).
optical-horizon2 D. Faccio, T. Arane, M. Lamperti, and U. Leonhardt, "Optical black hole lasers," Class. Quantum Grav. 29, 224009 (2012).
conti-1 C. Conti, "Linear and nonlinear Anderson localization in a curved potential," Chin. Phys. Lett. 31, 30501 (2014).
ghof N. Ghofraniha, I. Viola, A. Zacheo, V. Arima, G. Gigli, and C. Conti, "Transition from nonresonant to resonant random lasers by the geometrical confinement of disorder," Opt. Lett. 38, 5043 (2013).
transformation-1 U. Leonhardt, "Optical conformal mapping," Science 312, 1777 (2006).
transformation-2 J. B. Pendry, D. Schurig, and D. R. Smith, "Controlling electromagnetic fields," Science 312, 1780 (2006).
celestial D. A. Genov, S. Zhang, and X. Zhang, "Mimicking celestial mechanics in metamaterials," Nature Phys. 5, 687 (2009).
michinel A. Paredes and H. Michinel, "Interference of dark matter solitons and galactic offsets," Phys. Dark Univ. 12, 50 (2016).
optics-curved S. Batz and U. Peschel, "Linear and nonlinear optics in curved space," Phys. Rev. A 78, 043821 (2008).
soliton-curved S. Batz and U. Peschel, "Solitons in curved space of constant curvature," Phys. Rev. A 81, 053806 (2010).
accelerating-curved R. Bekenstein, J. Nemirovsky, I. Kaminer, and M. Segev, "Shape-preserving accelerating electromagnetic wave packets in curved space," Phys. Rev. X 4, 011038 (2014).
wwan W. Wan, S. Jia, and J. W. Fleischer, "Dispersive superfluid-like shock waves in nonlinear optics," Nature Phys. 3, 46 (2007).
conti-2 C. Conti, "Localization and shock waves in curved manifolds for the Gross-Pitaevskii equation," Sci. Bull. 61, 570 (2016).
optics-curved-exp V. H. Schultheiss, S.
Batz, A. Szameit, F. Dreisow, S. Nolte, A. Tünnermann, S. Longhi, and U. Peschel, "Optics in curved space," Phys. Rev. Lett. 105, 143901 (2010).
Yuri-book Yu. S. Kivshar and G. P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic Press, San Diego, 2003).
bifurcation A. Sacchetti, "Universal Critical Power for Nonlinear Schrödinger Equations with a Symmetric Double Well Potential," Phys. Rev. Lett. 103, 194101 (2009).
OL-Kuo K.-H. Kuo, Y. Y. Lin, and R.-K. Lee, "Thresholdless crescent waves in an elliptical ring," Opt. Lett. 38, 1077 (2013).
VCSEL-1 R. K. Chang and A. J. Campillo, Eds., Optical Processes in Microcavities (World Scientific, Singapore, 1996).
nature-soliton S. Barland, J. R. Tredicce, M. Brambilla, L. A. Lugiato, S. Balle, M. Giudici, T. Maggipinto, L. Spinelli, G. Tissoni, T. Knodl, M. Miller, and R. Jager, "Cavity solitons as pixels in semiconductor microcavities," Nature 419, 699 (2002).
VCSEL-2 K. J. Vahala, Ed., Optical Microcavities (World Scientific, Singapore, 2004).
soliton-laser Y. Tanguy, T. Ackemann, W. J. Firth, and R. Jager, "Realization of a semiconductor-based cavity soliton laser," Phys. Rev. Lett. 100, 013907 (2008).
chaos T.-D. Lee, C.-Y. Chen, Y. Y. Lin, M.-C. Chou, T.-h. Wu, and R.-K. Lee, "Surface-structure-assisted chaotic mode lasing in vertical cavity surface emitting lasers," Phys. Rev. Lett. 101, 084101 (2008).
Kartashov2009 Y. V. Kartashov, V. A. Vysloukh, and L. Torner, "Rotating surface solitons," Opt. Lett. 32, 2948 (2007).
PRL-Jisha C. P. Jisha, Y. Y. Lin, T.-D. Lee, and R.-K. Lee, "Crescent waves in optical cavities," Phys. Rev. Lett. 107, 183902 (2011).
mirror A. Kroner, F. Rinaldi, J. M. Ostermann, and R. Michalzik, "High-performance single fundamental mode AlGaAs VCSELs with mode-selective mirror reflectivities," Opt. Comm. 270, 332 (2007).
OL-YY Y. Y. Lin and R.-K. Lee, "Symmetry-breaking instabilities of generalized elliptical solitons," Opt. Lett. 33, 1377 (2008).
soliton-device M. Brambilla, L. A. Lugiato, F. Prati, L. Spinelli, and W. J. Firth, "Spatial soliton pixels in semiconductor devices," Phys. Rev. Lett. 79, 2042 (1997).
OL-VCSEL W.-X. Yang, Y. Y. Lin, T.-D. Lee, R.-K. Lee, and Yu. S. Kivshar, "Nonlinear localized modes in bandgap microcavities," Opt. Lett. 35, 3207 (2010).
a3 C. H. Henry, R. A. Logan, and K. A. Bertness, "Spectral dependence of the change in refractive index due to carrier injection in GaAs lasers," J. Appl. Phys. 52, 4457 (1981).
a2 G. R. Olbright, R. P. Bryan, W. S. Fu, R. Apte, D. M. Bloom, and Y. H. Lee, "Linewidth, tunability, and VHF-millimeter wave frequency synthesis of vertical-cavity GaAs quantum-well surface-emitting laser diode arrays," IEEE Photon. Technol. Lett. 3, 779 (1991).
a1 D. V. Kuksenkov, H. Temkin, and S. Swirhun, "Frequency modulation characteristics of gain-guided AlGaAs/GaAs vertical-cavity surface-emitting lasers," Appl. Phys. Lett. 66, 3239 (1995).
http://arxiv.org/abs/1704.08026v1
{ "authors": [ "Kou-Bin Hong", "Chun-Yan Lin", "Tsu-Chi Chang", "Wei-Hsuan Liang", "Ying-Yu Lai", "Chien-Ming Wu", "You-Lin Chuang", "Tien-Chang Lu", "Claudio Conti", "Ray-Kuang Lee" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170426091540", "title": "Lasing on nonlinear localized waves in curved geometry" }
Punny Captions: Witty Wordplay in Image Descriptions
=====================================================

Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches, which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.

§ INTRODUCTION "Wit is the sudden marriage of ideas which before their union were not perceived to have any relation." – Mark Twain

Witty remarks are often contextual, i.e., grounded in a specific situation (e.g., a comment in response to an event). Developing computational models that can emulate rich forms of interaction like contextual humor is a crucial step towards making human-AI interaction more natural and more engaging <cit.>. E.g., witty chatbots could help relieve stress and increase user engagement by being more personable and human-like. Bots could automatically post witty comments (or suggest witty responses) on social media, chat, or messaging. The absence of large-scale corpora of witty captions and the prohibitive cost of collecting such a dataset (being witty is harder than just describing an image) make the problem of producing contextually witty image descriptions challenging. In this work, we attempt to tackle the challenging task of producing witty (pun-based) remarks for a given (possibly boring) image. Our approach is inspired by a two-stage cognitive account of humor appreciation <cit.>, which states that a perceiver experiences humor when a stimulus such as a joke, captioned cartoon, etc., causes an incongruity, which is shortly followed by resolution. We introduce an incongruity in the perceiver's mind while describing an image by using an unexpected word that is phonetically similar (pun) to a concept related to the image. E.g., in Fig. <ref>, the expectations of a perceiver regarding the image (bear, stones, etc.) are momentarily disconfirmed by the (phonetically similar) word `bare'. This incongruity is resolved when the perceiver parses the entire image description. The incongruity followed by resolution can be perceived to be witty.[Indeed, a perceiver may fail to appreciate wit if the process of `solving' (resolution) is trivial (the joke is obvious) or too complex (they do not `get' the joke).] We build two computational models based on this approach to produce witty descriptions for an image. First, a model that retrieves sentences containing a pun that are relevant to the image from a large corpus of stories <cit.>. Second, a model that generates witty descriptions for an image using a modified inference procedure during image captioning, which includes the specified pun word in the description.
Our paper makes the following contributions: To the best of our knowledge, this is the first work that tackles the challenging problem of producing a witty natural language remark in an everyday (boring) context. We present two novel models to produce witty (pun-based) captions for a novel (likely boring) image. Our models rely on linguistic wordplay: they use an unexpected pun in an image description during inference/retrieval, and thus do not require training with witty captions. Humans vote the descriptions from the top-ranked generated captions `wittier' than those of three baseline approaches. Moreover, in a Turing-test-style evaluation, our model's best image description is found to be wittier than a witty human-written caption[This data is available on the author's webpage.] 55% of the time when the human is subject to the same constraints as the machine regarding word usage and style.

§ RELATED WORK Humor theory. The General Theory of Verbal Humor <cit.> characterizes linguistic stimuli that induce humor, but implementing computational models of it requires severely restricting its assumptions <cit.>. Puns. zwicky1986imperfect classify puns as perfect (pronounced exactly the same) or imperfect (pronounced differently). Similarly, pepicello1984language categorize riddles based on the linguistic ambiguity that they exploit: phonological, morphological, or syntactic. Kao2013-kx formalize the notion of incongruity in puns and use a probabilistic model to evaluate the funniness of a sentence. Jaech2016PhonologicalP learn phone-edit distances to predict the counterpart of a given pun by drawing on automatic speech recognition techniques. In contrast, we augment a web-scraped list of puns using an existing model of pronunciation similarity. Generating textual humor. JAPE <cit.> also uses phonological ambiguity to generate pun-based riddles. While our task involves producing free-form responses to a novel stimulus, JAPE produces stand-alone "canned" jokes. HAHAcronym <cit.> generates a funny expansion of a given acronym. Unlike our work, HAHAcronym operates on text and is limited to producing sets of words. UnsupervisedJokeBigData develop an unsupervised model that produces jokes of the form "I like my X like I like my Y, Z". Generating multi-modal humor. meme predict a meme's text based on a given funny image. Similarly, shahaf2015inside and radev2015humor learn to rank cartoon captions based on their funniness. Unlike the typical, boring images in our task, memes and cartoons are images that are already funny or atypical, e.g., "LOL-cats" (funny cat photos), "Bieber-memes" (modified pictures of Justin Bieber), cartoons with talking animals, etc. chandrasekaran2015we alter an abstract scene to make it funnier. In comparison, our task is to generate witty natural language remarks for a novel image. Poetry generation. Although our tasks are different, our generation approach is conceptually similar to that of Ghazvininejad2016GeneratingTP, who produce poetry given a topic. While they also generate and score a set of candidates, their approach involves many more constraints and utilizes a finite-state acceptor, unlike our approach, which enforces constraints during beam search of the RNN decoder.
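As an aside, the core of such a pun list is a grouping of differently spelled words by pronunciation. The sketch below illustrates the idea using NLTK's CMU Pronouncing Dictionary as a stand-in; note that the paper's actual list is web-mined and augmented with an articulatory-representation model, not built this way.

```python
from collections import defaultdict
from nltk.corpus import cmudict  # requires nltk.download('cmudict')

# Group dictionary words by their ARPAbet pronunciation; any group with more
# than one spelling is a set of heterographic homophones (perfect puns).
by_pron = defaultdict(set)
for word, prons in cmudict.dict().items():
    for pron in prons:
        by_pron[tuple(pron)].add(word)

homophones = [sorted(ws) for ws in by_pron.values() if len(ws) > 1]
print(len(homophones))                           # thousands of groups
print([g for g in homophones if 'bear' in g])    # e.g. ['bare', 'bear']
```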
§ APPROACH Extracting tags. The first step in producing a contextually witty remark is to identify concepts that are relevant to the context (image). At times, these concepts are directly available, e.g., as tags posted on social media. We consider the general case where such tags are unavailable, and automatically extract tags associated with an image. We extract the top-5 object categories predicted by a state-of-the-art Inception-ResNet-v2 model <cit.> trained for image classification on ImageNet <cit.>. We also consider the words of a (boring) image description (generated from vinyals2016show). We combine the classifier object labels and the words from the caption (ignoring stopwords) to produce a set of tags associated with an image, as shown in Fig. <ref>. We then identify concepts from this collection that can potentially induce wit.

Identifying puns. We attempt to induce an incongruity by using a pun in the image description. We identify candidate words for linguistic wordplay by comparing image tags against a list of puns. We construct the list of puns by mining the web for differently spelled words that sound exactly the same (heterographic homophones). We increase coverage by also considering pairs of words with 0 edit-distance, according to a metric based on fine-grained articulatory representations (AR) of word pronunciations <cit.>. Our list of puns has a total of 1067 unique words (931 from the web and 136 from the AR-based model). The pun list yields a set of puns that are associated with a given image and their phonologically identical counterparts, which together form the pun vocabulary for the image. We evaluate our approach on the subset of images that have non-empty pun vocabularies (about 2 in 5 images).

Generating punny image captions. We introduce an incongruity by forcing a vanilla image captioning model <cit.> to decode a phonological counterpart of a pun word associated with the image at a specific time-step during inference (e.g., `sell' or `sighed', shown in orange in Fig. <ref>). We achieve this by limiting the vocabulary of the decoder at that time-step to only contain counterparts of image-puns. In the following time-steps, the decoder generates new words conditioned on all previously decoded words. Thus, the decoder attempts to generate sentences that flow well based on previously uttered words. We train two models that decode an image description in the forward (start to end) and reverse (end to start) directions, depicted as `fRNN' and `rRNN' in Fig. <ref>, respectively. The fRNN can decode words after accounting for an incongruity that occurs early in the sentence, and the rRNN is able to decode the early words in the sentence after accounting for an incongruity that occurs later. The forward RNN and reverse RNN generate sentences in which the pun appears in each of the first T and last T positions, respectively.[For an image, we choose T = {1, 2, ..., 5} and beam size = 6 for each decoder. This generates a pool of 5 (T) * 6 (beam size) * 2 (forward + reverse decoder) = 60 candidates.]
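A minimal sketch of the vocabulary restriction at the forced time-step is shown below; the decoder logits and vocabulary indices are assumed inputs, and the real models apply this inside beam search for each forced position T and for both decoding directions.

```python
import numpy as np

def constrain_to_puns(logits, step, forced_step, pun_counterparts, word2id):
    """At `forced_step`, mask the next-word distribution so that only the
    phonological counterparts of the image's puns can be emitted; at every
    other step, the decoder is free to continue the sentence."""
    if step == forced_step:
        mask = np.full_like(logits, -np.inf)
        for w in pun_counterparts:
            if w in word2id:
                mask[word2id[w]] = 0.0
        logits = logits + mask
    return logits

# Toy usage: a 5-word vocabulary, forcing 'sighed' at step 0.
word2id = {'a': 0, 'man': 1, 'sighed': 2, 'side': 3, 'by': 4}
logits = np.array([0.1, 2.0, -1.0, 0.5, 0.3])
out = constrain_to_puns(logits, step=0, forced_step=0,
                        pun_counterparts=['sighed'], word2id=word2id)
print(out.argmax())   # -> 2 ('sighed'); all other words are masked out
```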
Retrieving punny image captions. As an alternative to our approach of generating witty remarks for the given image, we also attempt to leverage natural, human-written sentences which are relevant (yet unexpected) in the given context. Concretely, we retrieve natural language sentences[To prevent the context of the sentence from distracting the perceiver, we consider sentences with < 15 words. Overall, we are left with a corpus of about 13.5 million sentences.] from a combination of the Book Corpus <cit.> and corpora from the NLTK toolkit <cit.>. Each retrieved sentence (a) contains an incongruity (pun) whose counterpart is associated with the image, and (b) has support in the image (contains an image tag). This yields a pool of candidate captions that are perfectly grammatical, a little unexpected, and somewhat relevant to the image (see Sec. <ref>).

Ranking. We rank the captions in the candidate pools from both the generation and retrieval models according to their log-probability score under the image captioning model. We observe that the higher-ranked descriptions are more relevant to the image and grammatically correct. We then perform non-maximal suppression, i.e., eliminate captions that are similar[Two sentences are similar if the cosine similarity between the averages of the Word2Vec <cit.> representations of the words in each sentence is ≥ 0.8.] to a higher-ranked caption, to reduce the pool to a smaller, more diverse set. We report results on the 3 top-ranked captions. We describe the effect of design choices in the supplementary.
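The ranking-plus-suppression step can be sketched as below; the captioning-model scorer and the word-vector table are assumed inputs, and captions with no in-vocabulary words would need extra handling.

```python
import numpy as np

def caption_vec(caption, w2v):
    # Average of the Word2Vec vectors of the caption's in-vocabulary words.
    vecs = [w2v[w] for w in caption.lower().split() if w in w2v]
    return np.mean(vecs, axis=0)

def rank_and_suppress(candidates, log_prob, w2v, thresh=0.8):
    """Sort candidates by log-probability under the captioning model, then
    drop any caption whose averaged-embedding cosine similarity to an
    already-kept (higher-ranked) caption is >= thresh."""
    kept = []
    for cap in sorted(candidates, key=log_prob, reverse=True):
        v = caption_vec(cap, w2v)
        duplicate = any(
            float(v @ caption_vec(k, w2v))
            / (np.linalg.norm(v) * np.linalg.norm(caption_vec(k, w2v))) >= thresh
            for k in kept)
        if not duplicate:
            kept.append(cap)
    return kept[:3]   # the 3 top-ranked, diverse captions
```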
§ RESULTS Data. We evaluate the witty captions from our approach via human studies. 100 random images (having associated puns) are sampled from the validation set of COCO <cit.>.

Baselines. We compare the wittiness of descriptions generated by our model against 3 qualitatively different baselines and a human-written witty description of the image. Each of these evaluates a different component of our approach. Regular inference generates a fluent caption that is relevant to the image but is not attempting to be witty. Witty mismatch is a human-written witty caption, but for a different image from the one being evaluated. This baseline results in a caption that is intended to be witty, but does not attempt to be relevant to the image. Ambiguous is a `punny' caption where a pun word in the boring (regular) caption is replaced by its counterpart. This caption is likely to contain content that is relevant to the image, and it contains a pun. However, the pun is not used in a fluent manner. We evaluate the image-relevance of the top witty caption by comparing against a boring machine caption and a random caption (see supplementary).

Evaluating annotations. Our task is to generate captions that a layperson might find witty. To evaluate performance on this task, we ask people on Amazon Mechanical Turk (AMT) to vote for the wittier among a given pair of captions for an image. We collect annotations from 9 unique workers for each relative choice and take the majority vote as ground truth. For each image, we compare each of the 3 top-ranked and 1 low-ranked generated captions against 3 baseline captions and 1 human-written witty caption.[This results in a total of 4 (captions) * 2 (generation + retrieval) * 4 (baselines + human caption) = 32 comparisons of our approach against baselines. We also compare the wittiness of the 4 generated captions against the 4 retrieved captions (see supplementary) for an image (16 comparisons). In total, we perform 48 comparisons per image, for 100 images.]

Constrained human-written witty captions. We evaluate the ability of humans and automatic methods to use the given context and pun words to produce a caption that is perceived as witty. We ask subjects on AMT to describe a given image in a witty manner. To prevent observable structural differences between machine- and human-written captions, we ensure a consistent pun vocabulary (utilization of pre-specified puns for a given image). We also ask people to avoid first-person accounts or quoting characters in the image.

Metric. In Fig. <ref>, we report the performance of the generation approach using the Recall@K metric. For K = 1, 2, 3, we plot the percentage of images for which at least one of the K `best' descriptions from our model outperformed another approach.

Generated captions vs. baselines. As we see in Fig. <ref>, the top generated image description (top-1G) is perceived as wittier than all baseline approaches more often than not (the vote is > 50% at K = 1). We observe that as K increases, the recall steadily increases, i.e., when we consider the top K generated captions, increasingly often humans find at least one of them to be wittier than the captions produced by baseline approaches. People find the top-1G for a given image to be wittier than mismatched human-written image captions about 95% of the time. The top-1G is also wittier than the naive approach that introduces ambiguity about 54.2% of the time. When compared to a typical, boring caption, the generated captions are wittier 68% of the time. Further, in a head-to-head comparison, the generated captions are wittier than the retrieved captions 67.7% of the time. We also validate our choice of ranking captions based on the image captioning model score: we observe that a `bad' caption, i.e., one ranked lower by our model, is significantly less witty than the top 3 output captions. Surprisingly, when the human is constrained to use the same words and style as the model, the descriptions generated by the model are found to be wittier for 55% of the images. Note that in a Turing test, a machine would equal human performance at 50%.[Recall that this compares how a witty description is constructed, given the image and specific pun words. A Turing-test-style evaluation that compares the overall wittiness of a machine and a human would refrain from constraining the human in any way.] This led us to speculate whether the constraints placed on language and style might be restricting people's ability to be witty. We confirmed this by evaluating free-form human captions.

Free-form human-written witty captions. We ask people on AMT to describe an image (using any vocabulary) in a manner that would be perceived as funny. As expected, when compared against the automatic captions from our approach, human evaluators find free-form human captions to be wittier about 90% of the time, compared to 45% in the case of constrained human witty captions. Clearly, human-level creative language with unconstrained sentence length, style, choice of puns, etc., makes a significant difference in the wittiness of a description. In contrast, our automatic approach is constrained by caption-like language, length, and a word-based pun list. Training models to intelligently navigate this creative freedom is an exciting open challenge.

Qualitative analysis. The generated witty captions exhibit interesting features like alliteration (`a bare black bear ...') in Fig. <ref> and <ref>. At times, both the original pun (pole) and its counterpart (poll) make sense for the image (Fig. <ref>). Occasionally, a pun is naively replaced by its counterpart (Fig. <ref>) or rare puns are used (Fig. <ref>). On the other hand, some descriptions (Fig. <ref> and <ref>) that are forced to utilize puns do not make sense. See the supplementary for an analysis of the retrieval model.
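The Recall@K numbers reported above can be computed from the per-image pairwise votes as sketched below (the win flags are hypothetical examples).

```python
def recall_at_k(wins, k):
    """wins[i][j] is True if the j-th ranked caption for image i was voted
    wittier than the compared approach; Recall@K is the fraction of images
    for which at least one of the top-K captions wins."""
    return sum(any(w[:k]) for w in wins) / len(wins)

# Toy votes for three images, captions ranked 1..3.
wins = [[True, False, False], [False, False, True], [False, True, True]]
print([round(recall_at_k(wins, k), 2) for k in (1, 2, 3)])  # [0.33, 0.67, 1.0]
```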
§ CONCLUSION We presented novel computational models inspired by cognitive accounts to address the challenging task of producing contextually witty descriptions for a given image. We evaluate the models via human studies, in which they significantly outperform meaningful baseline approaches.

§ ACKNOWLEDGEMENTS We thank Shubham Toshniwal for his advice regarding the automatic speech recognition model. This work was supported in part by: an NSF CAREER award, an ONR YIP award, ONR Grant N00014-14-12713, a PGA Family Foundation award, Google FRA, Amazon ARA, and a DARPA XAI grant to DP, and NVIDIA GPU donations, Google FRA, an IBM Faculty Award, and a Bloomberg Data Science Research Grant to MB.

Appendix We first present details and results for the additional experiments that evaluate the relevance of the top generated caption to the image and compare the retrieved captions to baseline approaches. We then briefly discuss the rationales for the design choices made in our approach. Subsequently, we briefly describe some characteristics of pun words, followed by a brief qualitative analysis of the output of the retrieval model. Finally, we present the annotation and evaluation interfaces that the human subjects interacted with.

§ ADDITIONAL EXPERIMENTS §.§ Relevance of witty caption to image We compared the relative relevance of the top witty caption from our generation approach against a machine-generated boring caption (either for the same image or for a different, randomly chosen image) in a pairwise comparison. We showed Turkers an image and a pair of captions, and asked them to choose the more relevant caption for the image. We see that, on average, the generated witty caption is considered more relevant than a machine-generated boring caption for the same image 37.5% of the time. People found the generated witty caption to be more relevant than a random caption 97.2% of the time. This shows that in an effort to generate witty content, our approach produces descriptions that are a little less relevant than a boring description of the image. But our witty caption is clearly still relevant to the image (almost always more relevant than an unrelated caption).

§.§ Retrieved captions vs. baselines Humans evaluate the wittiness of each of the 3 top-ranked retrieved captions against the baseline approaches and a human witty caption. As we see in Fig. <ref>, at K=1, the top retrieved description is found to be wittier only than a human-written witty caption that is mismatched with the given image (witty mismatch), 83.8% of the time. The top retrieved caption is found to be less witty than even a typical caption (regular inference) about 63.4% of the time. Similarly, the retrieved caption is also found to be less witty than a naive method that produces punny captions (ambiguous) about 62% of the time. We observe the trend that as K increases, recall also increases. On average, at least one of the top 3 retrieved captions is wittier than the (constrained) human witty caption about 61.6% of the time, compared to the generated captions, which are wittier 84.0% of the time. The poor performance of the retrieved captions could be due to the fact that they are often not perfectly apt for the given image, since they are retrieved from story-based corpora. Please see Sec.
<ref> for examples and a more detailed discussion. As we will see in the next section, these issues do not extend to the generation approach, which exhibits strong performance against the baseline approaches, human-written witty captions, and the retrieval approach. While the retrieved captions might evoke a sense of incongruity, it is likely hard for the viewer to resolve the alternate interpretation of the retrieved caption as being applicable to the image.

§ DESIGN CHOICES In this section, we describe how the design and parameter choices in our architectures influence the witty descriptions. During the design of our model, we made parameter choices based on observations from qualitative results. For instance, we experimented with different beam sizes to generate a set of high-precision captions with few false positives. We found that a beam size of 6 results in a sufficient number of reasonably accurate candidate sentences. We extract image tags from the top-K predictions of an image classifier. We experimented with different values of K, where K ∈ {1, 5, 10}. We also tried using a score threshold, where classes predicted with a score above the threshold were considered valid image tags. We found that K=5 results in reasonable predictions. Determining a reasonable threshold, on the other hand, was difficult, because for most images the class prediction scores are extremely peaky. We also experimented with the different positions in which a pun counterpart can be forced to appear. Based on qualitative examples, we found that the model generated witty descriptions that were somewhat sensible when a pun word appeared at any of the first or last 5 positions of a sentence. We also experimented with a number of different methods to re-rank the candidate pool of witty captions, e.g., language model score <cit.>, image-sentence similarity score <cit.>, semantic similarity (using Word2Vec <cit.>) of the pun counterpart to the sentence, the a priori probability of the pun counterpart in a large corpus of English sentences (to avoid rare/unfamiliar words), the likelihood of the tag (under the image captioning model or the classifier, as applicable), etc. We qualitatively found that re-ranking using the log-probability score of the image captioning model, while being the simplest, resulted in the best set of candidate witty captions.

§ PUN LIST Recall that we construct the list of puns by mining the web and by using automatic methods that measure the similarity of the pronunciation of words. Upon inspecting our list of puns, we observe that it contains puns of many frequently used words and some pun words that are rarely used in everyday language, e.g., `wight' (the counterpart of `white'). Since a rare pun word can be distracting to a perceiver, the corresponding caption might be harder to resolve, making it less likely to be perceived as witty. Thus, we see limited benefit in increasing the size of our pun list further to include words that are used even less frequently.

§ QUALITATIVE ANALYSIS OF RETRIEVED DESCRIPTIONS The witty descriptions are retrieved from story-based corpora and often describe a very specific situation or instance. Although these sentences are grounded in objects that are also present in the image, the entire sentence often contains a few words that are irrelevant to the given image, as we see in Fig. <ref>, Fig. <ref>, and Fig. <ref>.
This is a likely reason why a retrieved sentence containing a pun is perceived as less witty than witty descriptions generated for the image.

§ INTERFACE FOR `BE WITTY!' We ask people on Amazon Mechanical Turk (AMT) to create witty descriptions for the given image. We also ask them to utilize one of the given pun words associated with the image. We show them a few good and bad examples to illustrate the task better. Fig. <ref> shows the interface that we used to collect these human-written witty descriptions for an image.

§ INTERFACE FOR `WHICH IS WITTIER?' We showed people on AMT two descriptions for a given image and asked them to click on the description that was wittier for the image. The web interface that we used to collect this data is shown in Fig. <ref>.
http://arxiv.org/abs/1704.08224v2
{ "authors": [ "Arjun Chandrasekaran", "Devi Parikh", "Mohit Bansal" ], "categories": [ "cs.CL", "cs.AI", "cs.CV" ], "primary_category": "cs.CL", "published": "20170426172253", "title": "Punny Captions: Witty Wordplay in Image Descriptions" }
The paper discusses an advanced-level information system to support the educational, research, and scientific activities of the Department "Electrophysical Facilities" (DEF) of the National Research Nuclear University "MEPhI" (NRNU MEPhI), which is used for training specialists in the course "Physics of Charged Particle Beams and Accelerator Technology".

§ INTRODUCTION In the development of modern IT, one of the most important and urgent tasks is the development of specialized information resources to support science and education in various subject areas, taking into account their specific features. The specifics of a given subject area require serious adaptation of the standard computer technologies commonly used in the scientific and educational departments of universities to its particular conditions and challenges. After years of implementing IT in the educational and scientific activity of the NRNU MEPhI DEF, an open modular informational scientific-educational resource (INOR) "Electrophysics" has been developed, intended for information support in solving scientific-practical tasks and training specialists in the field of the physics of charged particle beams and accelerator technology. The INOR "Electrophysics" supports several levels of access (Fig. 1): * Standalone (or local) access from the LAN of the DEF computer classes; * Corporate access from the classroom, campus, and corporate networks of NRNU MEPhI; * Global access through the Internet. The local computer network of the Computing Laboratory of the DEF includes a computer classroom and a server with a network switch and a gateway to the Internet. Windows and GNU/Linux are used as operating systems. The Apache Web Server supports the operation of the information website (www.accel.ru) and the e-teaching portal (edu.accel.ru). The information website provides remote access to the thematic content and to specialized cross-platform web applications, the simulation subsystems of the DEF used during the laboratory workshops. The e-teaching portal was created on the basis of the virtual teaching system Moodle. Protection of the information resources from unauthorized access is provided through access control, separately for students, teachers, and researchers.

§ THE CONCEPT OF INOR "ELECTROPHYSICS" The concept of INOR "Electrophysics" is based on the principles and technologies of e-teaching and open education, with the aim of providing information support for the educational and scientific activities of the DEF in the training of specialists in the field of the physics of charged particle beams and accelerator technology. It can be considered in two aspects: * Object-oriented hierarchical (associated with the hierarchy of the thematic structuring of information resources); * Functional (associated with the information processes in the applications implementing the functions of the information system). The levels of the hierarchy of the object-oriented aspect are shown in Fig. 2.
The first level of the hierarchy of the thematic structure of INOR "Electrophysics" is associated with the distribution of academic disciplines within the educational and scientific cycles that define the main directions of training specialists in the physics of charged particle beams and accelerator technology:

* Accelerators of charged particles;
* Microwave engineering;
* Physical electronics;
* Electronic systems of accelerators;
* Information systems of accelerators.

The second level is associated with the description of the structure and technologies of the educational process for an individual training module of each training cycle, including training programs, semester schedules, and content associated with lectures, practical laboratory classes and self-study. The educational content consists of thematically structured blocks, including e-teaching materials in text and graphical form (electronic books, textbooks and lecture notes in PDF and DJVU formats, PowerPoint presentations, etc.), as well as video and audio recordings of lectures and practical classes (including webinars) and educational videos.

The third level describes the structure of the tools for monitoring learning progress in the educational formats described at the second level. This structure includes various forms of control, such as tests and evaluation funds for monitoring academic progress, as well as content containing control questions and tasks, both in standard text formats and in the form of specialized knowledge-testing applications.

The functional, or technological, aspect is connected with the formalization of the algorithms, models and data structures used in the educational process, and with the inclusion of these formal models in the information environment, the core of which is a common database with associated applications implementing specific functions. Thus, for each functional application, several types of interfaces are developed for the different categories of users, according to their roles and functions in the information system (Fig. 3). We can distinguish three main categories (roles) of users in INOR "Electrophysics":

* Administrators monitor the relevance of curricula and schedules, as well as reports on current and final certification and on access levels. They manage individual user accounts, form student groups and assign teachers to them, and control the level of user access to information resources.
* Teachers manage the learning process: they define learning objectives in accordance with the curricula, set implementation criteria, form task lists for their student groups and reading lists for the teaching content, and monitor the completion of assignments and the final certification of students.
* Students perform training tasks, with the ability to save the results for further work or to send them for automated testing or for checking by the tutor, prior to current and, ultimately, final certification.

§ VIRTUAL LABORATORIES OF ELECTROPHYSICS

The core of INOR "Electrophysics" is formed by the virtual laboratories based on computer simulation of the subsystems of charged particle accelerators. The following virtual laboratories have currently been developed and are in use:

* "The channels of transportation of high-energy particles";
* "Electronic systems of accelerators";
* "Vacuum technique";
* "High-power pulse technology".
Computer models of the subsystems of charged particle accelerators have been used at the MEPhI DEF since 1975 for training students specializing in the physics of charged particle beams and accelerator technology. This was forced by circumstances related to the difficulty of implementing full-scale experiments on such subsystems in the teaching laboratories:

* Laboratory research requires unique and expensive equipment;
* Real subsystems of electrophysical facilities are of considerable size and impose high requirements on electrical, electromagnetic and radiation safety;
* Real experiments take quite a long time and are often incompatible with the planned schedule of classes.

In this regard, computer modeling and simulation of the operation of the elements and subsystems of electrophysical facilities proved promising, and in some cases is the only possible way to study, configure and design such devices, partially compensating for the lack of real facilities.

Although the mathematical models developed for the virtual laboratories were successfully used in the creation of a specialized CAD system for designing subsystems of high-current charged particle accelerators, the concept of a virtual laboratory differs from that of CAD. The virtual laboratory is not a direct design tool; it is aimed at studying the physical processes in the devices under study for a specified scheme.

The virtual laboratory allows diagrams of the studied devices to be assembled on the computer monitor with mouse clicks, but the user deals with an already assembled circuit under investigation (Fig. 4), in which only certain elements at fixed positions can be changed. Otherwise, it works like CAD software: it is used to configure the scheme settings, and the result is visualized in graphs of the studied processes (Fig. 5).

Currently, the virtual laboratories are implemented as cross-platform web applications (HTML, CSS, and JavaScript) using AJAX and CGI technologies. In the future, it is planned to implement the concept of cloud computing, with computing resources allocated dynamically to each running application.

§ CONCLUSION

The development of INOR "Electrophysics" has made it possible to conduct a technological audit of the software and to develop a common conceptual framework for the development and use of information systems in the training of specialists in the area of the physics of charged particle beams and accelerator technology.

The high pace of development of modern IT extends the functionality available to developers and increases their productivity thanks to new instrumental software. On the other hand, for the same reasons, developed software quickly becomes outdated and in fact requires constant modernization. The way out of this situation is to create universal cross-platform web applications that remain relevant when the hardware platform changes.

Distance teaching serves as an excellent additional tool for empowering the independent work of students. The introduction of remote training systems improves the manageability and efficiency of the educational process.
Performing laboratory workshops in the virtual labs of INOR "Electrophysics", students have the opportunity to gain significant practical experience required for the future design of subsystems of charged particle accelerators.

Prospects for the development of INOR "Electrophysics" are connected, on the one hand, with the increasing performance of modern computers, which allows the accuracy of the modeling of physical processes to be improved. On the other hand, the growing power of web-development tools will make it possible to build universal cross-platform applications for distance teaching with adaptive interfaces. This creates the possibility of remote access to the teaching tools not only from desktops with widescreen monitors or from laptops, but also from mobile devices, supporting self-study everywhere (with Internet access) at any time.
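To illustrate the AJAX/CGI pattern on which the virtual laboratories are built, the following is a minimal server-side sketch. It is our illustration only: the parameter names, the placeholder simulate routine and its "physics" are hypothetical, not the actual INOR implementation, whose clients are written in HTML/CSS/JavaScript with CGI back ends.

```python
#!/usr/bin/env python3
# Minimal CGI sketch of a virtual-laboratory back end: browser-side
# JavaScript posts circuit parameters via AJAX; the script returns the
# simulated waveform as JSON for plotting. All names are hypothetical.
import cgi
import json
import math

def simulate(capacitance_uf: float, voltage_kv: float):
    # Placeholder physics: a simple RC discharge curve (assumed model,
    # assumed 50-ohm load), standing in for the real circuit simulation.
    tau = capacitance_uf * 1e-6 * 50.0
    times = [i * tau / 10 for i in range(50)]
    return [voltage_kv * math.exp(-t / tau) for t in times]

form = cgi.FieldStorage()
c = float(form.getfirst("capacitance_uf", "1.0"))
u = float(form.getfirst("voltage_kv", "10.0"))
print("Content-Type: application/json")
print()  # blank line ends the HTTP headers
print(json.dumps({"waveform": simulate(c, u)}))
```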
http://arxiv.org/abs/1704.07894v1
{ "authors": [ "G. P. Averyanov", "V. V. Dmitrieva", "N. P. Kornev", "A. A. Fadeev" ], "categories": [ "cs.CY", "physics.acc-ph" ], "primary_category": "cs.CY", "published": "20170426063727", "title": "Information Scientific and Educational Resource \"Electrophysics\"" }
Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose. Linguistic features, mainly from parsers, have been used to detect MCI, but this is not suitable for large-scale assessments. MCI disfluencies produce non-grammatical speech that requires manual or high-precision automatic correction of transcripts. In this paper, we modeled transcripts into complex networks and enriched them with word embeddings (CNE) to better represent the short texts produced in neuropsychological assessments. The network measurements were applied with well-known classifiers to automatically identify MCI in transcripts, in a binary classification task. A comparison was made with the performance of traditional approaches using Bag of Words (BoW) and linguistic features for three datasets: DementiaBank in English, and Cinderella and Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using only complex networks, while Support Vector Machines were superior to the other classifiers. CNE provided the highest accuracies for DementiaBank and Cinderella, but BoW was more efficient for the Arizona-Battery dataset, probably owing to its short narratives. The approach using linguistic features yielded higher accuracy when the transcriptions of the Cinderella dataset were manually revised. Taken together, the results indicate that complex networks enriched with embeddings are promising for detecting MCI in large-scale assessments.

§ INTRODUCTION

Mild Cognitive Impairment (MCI) can affect one or multiple cognitive domains (e.g. memory, language, visuospatial skills and executive functions), and may represent a pre-clinical stage of Alzheimer's disease (AD). The impairment that affects memory, referred to as amnestic MCI, is the most frequent, with the highest conversion rate to AD, at 15% per year versus 1 to 2% for the general population. Since dementias are chronic and progressive diseases, early diagnosis ensures a greater chance of successfully engaging patients in non-pharmacological treatment strategies such as cognitive training, physical activity and socialization <cit.>.

Language is one of the most efficient information sources for assessing cognitive functions. Changes in language usage are frequent in patients with dementia and are normally first recognized by the patients themselves or by their family members. Therefore, the automatic analysis of discourse production is promising for diagnosing MCI at early stages, which may address potentially reversible factors <cit.>. Proposals to detect language-related impairment in dementias include machine learning <cit.>, magnetic resonance imaging <cit.>, and data screening tests combined with demographic information <cit.>.
Discourse production (mainly narratives) is attractive because it allows the analysis of linguistic microstructures, including phonetic-phonological, morphosyntactic and semantic-lexical components, as well as semantic-pragmatic macrostructures.Automated discourse analysis based on Natural Language Processing (NLP) resources and tools to diagnose dementias via machine learning methods has been used for English language <cit.> and for Brazilian Portuguese <cit.>. A variety of features are required for this analysis, including Part-of-Speech (PoS), syntactic complexity, lexical diversity and acoustic features. Producing robust tools to extract these features is extremely difficult because speech transcripts used in neuropsychological evaluations contain disfluencies (repetitions, revisions, paraphasias) and patient's comments about the task being evaluated. Another problem in using linguistic knowledge is the high dependence on manually created resources, such as hand-crafted linguistic rules and/or annotated corpora. Even when traditional statistical techniques (Bag of Words or ngrams) are applied, problems still appear in dealing with disfluencies, because mispronounced words will not be counted together. Indeed, other types of disfluencies (repetition, amendments, patient's comments about the task) will be counted, thus increasing the vocabulary.An approach applied successfully to several areas of NLP <cit.>, which may suffer less from the problems mentioned above, relies on the use of complex networks and graph theory.The word adjacency network model <cit.> has provided good results in text classification <cit.> and related tasks, namely author detection <cit.>, identification of literary movements <cit.>, authenticity verification <cit.> and word sense discrimination <cit.>.In this paper, we show that speech transcripts (narratives or descriptions) can be modeled into complex networks that are enriched with word embedding in order to better represent short texts produced in these assessments. When applied to a machine learning classifier, the complex network features were able to distinguish between control participants and mild cognitive impairment participants. Discrimination of the two classes could be improved by combining complex networks with linguistic and traditional statistical features.With regard to the task of detecting MCI from transcripts, this paper is, to the best of our knowledge, the first to: a) show that classifiers using features extracted from transcripts modeled into complex networks enriched with word embedding present higher accuracy than using only complex networks for 3 datasets; and b) show that for languages that do not have competitive dependency and constituency parsers to exploit syntactic features, e.g. Brazilian Portuguese, complex networks enriched withwordembedding constitute a source to extract new, language independent features from transcripts. § RELATED WORKDetection of memory impairment has been based on linguistic, acoustic, and demographic features, in addition to scores of neuropsychological tests. Linguistic and acoustic features were used to automatically detect aphasia <cit.>; and AD <cit.> or dementia <cit.> in the public corpora of DementiaBank[<talkbank.org/DementiaBank/>]. Other studies distinguished different types of dementia <cit.>, in which speech samples were elicited using the Picnic picture of the Western Aphasia Battery <cit.>. 
<cit.> also used the Picnic scene to detect MCI, where the subjects were asked to write (by hand) a detailed description of the scene.As for automatic detection of MCI in narrative speech, <cit.> extracted speech features and linguistic complexity measures of speech samples obtained with the Wechsler Logical Memory (WLM) subtest <cit.>, and <cit.> fully automatized the WLM subtest. In this test, the examiner tells a short narrative to a subject, who then retells the story to the examiner, immediately and after a 30-minute delay. WLM scores are obtained by counting the number of story elements recalled.<cit.> and <cit.> used short animated films to evaluate immediate and delayed recalls in MCI patients who were asked to talk about the first film shown, then about their previous day, and finally about another film shown last. <cit.> adopted automatic speech recognition (ASR) to extract a phonetic level segmentation, which was used to calculate acoustic features. <cit.> used speech, morphological, semantic, and demographic features collected from their speech transcripts to automatically identify patients suffering from MCI. For the Portuguese language, machine learning algorithms were used to identify subjects with AD and MCI. <cit.> used a variety of linguistic metrics, such as syntactic complexity, idea density <cit.>, and text cohesion through latent semantics. NLP tools with high precision are needed to compute these metrics, which is a problem for Portuguese since no robust dependency or constituency parsers exist. Therefore, the transcriptions had to be manually revised; they were segmented into sentences, following a semantic-structural criterion and capitalization was applied. The authors also removed disfluencies and inserted omitted subjects when they were hidden, in order to reduce parsing errors. This process is obviously expensive, which has motivated us to use complex networks in the present study to model transcriptions and avoid a manual preprocessing step. § MODELING AND CHARACTERIZING TEXTS AS COMPLEX NETWORKS The theory and concepts of complex networks have been used in several NLP tasks <cit.>, such as text classification <cit.>, summarization <cit.> and word sense disambiguation <cit.>. In this study, we used the word co-occurrence model (also called word adjacency model) because most of the syntactical relations occur among neighboring words <cit.>.Each distinct word becomes a node and words that are adjacent in the text are connected by an edge. Mathematically, a network is defined as an undirected graph G = {V, E}, formed by a set V = {v_1, v_2, ..., v_n} of nodes (words) and a set E = {e_1, e_2, ..., e_m} of edges (co-occurrence) that are represented by an adjacency matrix A, whose elements A_ij are equal to 1 whenever there is an edge connecting nodes (words) i and j, and equal to 0 otherwise.Before modeling texts into complex networks, it is often necessary to do some preprocessing in the raw text. Preprocessing starts with tokenization where each document/text is divided into tokens (meaningful elements, e.g., words and punctuation marks) and then stopwords and punctuation marks are removed, since they have little semantic meaning. One last step we decided to eliminate from the preprocessing pipeline is lemmatization, which transforms each word into its canonical form. This decision was made based on two factors. First, a recent work has shown that lemmatization has little or no influence when network modeling is adopted in related tasks <cit.>. 
Second, the lemmatization process requires part-of-speech (POS) tagging that may introduce undesirable noises/errors in the text, since the transcriptions in our work contain disfluencies. Another problem with transcriptions in our work is their size. As demonstrated by <cit.>, classification of small texts using networks can be impaired, since short texts have almost linear networks, and the topological measures of these networks have little or no information relevant to classification. To solve this problem, we adapted the approach of inducing language networks from word embeddings, proposed by <cit.> to enrich the networks with semantic information. In their work, language networks were generated from continuous word representations, in which each word is represented by a dense, real-valued vector obtained by training neural networks in the language model task (or variations, such as context prediction) <cit.>. This structure is known to capture syntactic and semantic information. <cit.>, in particular, take advantage of word embeddings to build networks where each word is a vertex and edges are defined by similarity between words established by the proximity of the word vectors.Following this methodology, in our model we added new edges to the co-occurrence networks considering similarities between words, that is, for all pairs of words in the text that were not connected, an edge was created if their vectors (from word embedding) had a cosine similarity higher than a given threshold.Figure <ref> shows an example of a co-occurrence network enriched by similarity links (the dotted edges).The gain in information by enriching a co-occurrence network with semantic information is readily apparent in Figure  <ref>.§ DATASETS, FEATURES AND METHODS§.§ DatasetsThe datasets[All datasets are made available in the same representations used in this work, upon request to the authors.] used in our study consisted of: (i) manually segmented and transcribed samples from the DementiaBank and Cinderella story and (ii) transcribed samples of Arizona Battery for Communication Disorders of Dementia (ABCD) automatically segmented into sentences, since we are working towards a fully automated system to detect MCI in transcripts and would like to evaluate a dataset which was automatically processed. The DementiaBank dataset is composed of short English descriptions, while the Cinderella dataset contains longer Brazilian Portuguese narratives. ABCD dataset is composed of very short narratives, also in Portuguese. Below, we describe in further detail the datasets, participants, and the task in which they were used.§.§.§ The Cookie Theft Picture Description Dataset The clinical dataset used for the English language was created during a longitudinal study conducted by the University of Pittsburgh School of Medicine on Alzheimer’s and related dementia, funded by the National Institute of Aging. To be eligible for inclusion in the study, all participants were required to be above 44 years of age, have at least 7 years of education, no history of nervous system disorders nor be taking neuroleptic medication, have an initial Mini-Mental State Exam (MMSE) score of 10 or greater, and be able to give informed consent. The dataset contains transcripts of verbal interviews with AD and related Dementia patients, including those with MCI (for further details see <cit.>).We used 43 transcriptions with MCI in addition to another 43 transcriptions sampled from 242 healthy elderly people to be used as the control group. 
Table <ref> shows the demographic information for the two diagnostic groups. For this dataset, interviews were conducted in English and narrative speech was elicited using the Cookie Theft picture <cit.> (Figure <ref> from <cit.> in Section <ref>). During the interview, patients were given the picture and were told to discuss everything they could see happening in the picture. The patients' verbal utterances were recorded and then transcribed into the CHAT (Codes for the Human Analysis of Transcripts) transcription format <cit.>.

We extracted the word-level patient sentences from the CHAT files and discarded the annotations, as our goal was to create a fully automated system that does not require the input of a human annotator. We automatically removed filled pauses such as uh, um, er, and ah (e.g. "uh it seems to be summer out"), short false starts (e.g. "just t the ones"), and repetitions (e.g. "mother's finished certain of the the dishes"), as in <cit.>. The control group had an average of 9.58 sentences per narrative, with each sentence having an average of 9.18 words, while the MCI group had an average of 10.97 sentences per narrative, with 10.33 words per sentence on average.

§.§.§ The Cinderella Narrative Dataset

The dataset examined in this study included 20 subjects with MCI and 20 normal elderly control subjects, as diagnosed at the Medical School of the University of São Paulo (FMUSP). Table <ref> shows the demographic information for the two diagnostic groups, which were also used in <cit.>.

The criteria used to diagnose MCI came from <cit.>. Diagnoses were carried out by a multidisciplinary team consisting of psychiatrists, geriatricians, neurologists, neuropsychologists, speech pathologists, and occupational therapists, by consensus. The inclusion criterion for the control group was elderly people with no cognitive deficits and with preserved functional capacity in everyday life. The exclusion criteria for the control group were: poorly controlled clinical diseases, sensory deficits that were not compensated for and interfered with test performance, and other neurological or psychiatric diagnoses associated with dementia or cognitive deficits, as well as the use of medications in doses affecting cognition.

Speech narrative samples were elicited by having participants tell the Cinderella story; participants were given as much time as they needed to examine a picture book illustrating the story (Figure <ref> in Section <ref>). When each participant had finished looking at the pictures, the examiner asked the subject to tell the story in their own words, as in <cit.>. The time was recorded, but no limit was imposed on the narrative length. If the participant had difficulty initiating or continuing speech, or took a long pause, the evaluator used the stimulus question "What happens next?", seeking to encourage the participant to continue the narrative. When the subject was unable to proceed with the narrative, the examiner asked whether he/she had finished the story and had something to add. Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N.
338 EF and 331 D2 transcription norms[<albertofedel.blogspot.com.br/2010_11_01_archive.html>]. Other tests were applied after the narrative, in the following sequence: the phonemic verbal fluency test, action verbal fluency, the Camel and Cactus test <cit.>, and the Boston Naming test <cit.>, in order to diagnose the groups.

Since our ultimate goal is to create a fully automated system that does not require the input of a human annotator, we manually segmented the sentences to simulate a high-quality ASR transcript with sentence segmentation, and we automatically removed disfluencies following the same guidelines as the TalkBank project. Other disfluencies (revisions, elaborations, paraphasias and comments about the task) were kept. The control group had an average of 30.80 sentences per narrative, with each sentence averaging 12.17 words; the MCI group had an average of 29.90 sentences per narrative, with each sentence averaging 13.03 words.

We also evaluated a different version of the dataset, used in <cit.>, in which the narratives were manually annotated and revised to improve parsing results. The revision process was the following: (i) in the original transcript, segments with hesitations or with repetitions of more than one word or of a segment of a single word were annotated to become a feature and then removed from the narrative, to allow the extraction of features from parsing; (ii) empty emissions, i.e. comments unrelated to the topic of narration or confirmations such as "né" (alright), were also annotated and removed; (iii) prolongations of vowels and short and long pauses were annotated and removed; and (iv) omitted subjects were inserted into the sentences where they were hidden. In this revised dataset, the control group had an average of 45.10 sentences per narrative, with each sentence averaging 8.17 words; the MCI group had an average of 31.40 sentences per narrative, with each sentence averaging 10.91 words.

§.§.§ The ABCD Dataset

The subtest of immediate/delayed recall of narratives of the ABCD battery was administered to 23 participants with a diagnosis of MCI and to 20 normal elderly control participants, as diagnosed at the Medical School of the University of São Paulo (FMUSP). The MCI subjects produced 46 narratives, while the control group produced 39. In order to carry out experiments with a balanced corpus, as with the previous two datasets, we excluded seven transcriptions from the MCI group. We applied the automatic sentence segmentation method referred to as DeepBond <cit.> to the transcripts. Table <ref> shows the demographic information. The control group had an average of 5.23 sentences per narrative, with 11 words per sentence on average, and the MCI group had an average of 4.95 sentences per narrative, with an average of 12.04 words per sentence.

Interviews were conducted in Portuguese: the subject listened to the examiner read a short narrative and then retold the narrative to the examiner twice, once immediately upon hearing it and again after a 30-minute delay <cit.>. Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N. 338 EF and 331 D2 transcription norms.

§.§ Features

Features of three distinct natures were used to classify the transcribed texts: topological metrics of co-occurrence networks, linguistic features, and bag-of-words representations.

§.§.§ Topological Characterization of Networks

Each transcription was mapped into a co-occurrence network and then enriched via word embeddings, using the cosine similarity between words.
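To make this construction concrete, the following is a minimal sketch, our own and not the authors' code, of how a tokenized transcript can be turned into a co-occurrence network and then enriched with similarity edges. It assumes networkx and a dict-like `vectors` mapping words to pre-trained embedding vectors (e.g. from a fastText model); the function names are ours.

```python
import itertools
import networkx as nx
import numpy as np

def cooccurrence_network(tokens):
    """Word adjacency network: one node per distinct word, one edge for
    each pair of words that occur side by side in the transcript."""
    g = nx.Graph()
    g.add_nodes_from(tokens)
    g.add_edges_from(zip(tokens, tokens[1:]))
    g.remove_edges_from(list(nx.selfloop_edges(g)))  # e.g. "the the"
    return g

def enrich_with_embeddings(g, vectors, threshold=0.4):
    """Add an edge between every unconnected pair of words whose embedding
    cosine similarity exceeds the threshold (0.4 or 0.7 in the paper)."""
    known = [w for w in g.nodes if w in vectors]
    for u, v in itertools.combinations(known, 2):
        if not g.has_edge(u, v):
            a, b = vectors[u], vectors[v]
            sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            if sim > threshold:
                g.add_edge(u, v)
    return g
```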
Since the occurrence of out-of-vocabulary words is common in the texts of neuropsychological assessments, we used the method proposed by <cit.> to generate the word embeddings. This method extends the skip-gram model with character-level information, each word being represented as a bag of character n-grams. It provides some improvement over the traditional skip-gram model in terms of syntactic evaluation <cit.>, but not in semantic evaluation.

Once the network has been enriched, we characterize its topology using the following ten measurements:

* PageRank: a centrality measurement that reflects the relevance of a node based on its connections to other relevant nodes <cit.>;
* Betweenness: a centrality measurement that considers a node relevant if it is highly accessed via shortest paths. The betweenness of a node v is defined as the fraction of shortest paths going through v;
* Eccentricity: calculated by measuring the shortest distance from the node to every other vertex in the graph and taking the maximum;
* Eigenvector centrality: a measurement that defines the importance of a node based on its connectivity to high-rank nodes;
* Average degree of the neighbors of a node: the average of the degrees of all its direct neighbors;
* Average shortest path length of a node: the average distance between this node and all other nodes of the network;
* Degree: the number of edges connected to the node;
* Assortativity degree: or degree correlation, measures the tendency of nodes to connect to other nodes of similar degree;
* Diameter: defined as the maximum shortest path;
* Clustering coefficient: measures the probability that two neighbors of a node are connected.

Most of the measurements described above are local, i.e. each node i possesses a value X_i, so we calculated the average μ(X), standard deviation σ(X) and skewness γ(X) of each measurement.

§.§.§ Linguistic Features

Linguistic features have been used for the classification of neuropsychological assessments in several studies <cit.>. We used the Coh-Metrix[<cohmetrix.com>] tool <cit.> to extract features from the English transcripts, resulting in 106 features. The metrics are divided into eleven categories: Descriptive, Text Easability Principal Component, Referential Cohesion, Latent Semantic Analysis (LSA), Lexical Diversity, Connectives, Situation Model, Syntactic Complexity, Syntactic Pattern Density, Word Information, and Readability (Flesch Reading Ease, Flesch-Kincaid Grade Level, Coh-Metrix L2 Readability).

For Portuguese, Coh-Metrix-Dementia <cit.> was used. The metrics affected by constituency and dependency parsing were not used, because they are not robust to disfluencies. Metrics based on manual annotation (such as the proportion of short pauses, mean pause duration, mean number of empty words, and others) were also discarded. The metrics of Coh-Metrix-Dementia are divided into twelve categories: Ambiguity, Anaphoras, Basic Counts, Connectives, Co-reference Measures, Content Word Frequencies, Hypernyms, Logic Operators, Latent Semantic Analysis, Semantic Density, Syntactical Complexity, and Tokens. The metrics used are shown in detail in Section <ref>. In total, 58 metrics were used, out of the 73 available on the website[<http://143.107.183.175:22380>].

§.§.§ Bag of Words

The representation of text collections under the BoW assumption (i.e., with no information about word order) has been a robust solution for text classification.
In this methodology, transcripts are represented by a table in which the columns represent the terms (words) occurring in the transcripts and the values represent the frequency of a term in a document.

§.§ Classification Algorithms

In order to quantify the ability of the topological characterization of networks, the linguistic metrics and the BoW features to distinguish subjects with MCI from healthy controls, we employed four machine learning algorithms to induce classifiers from a training set: Gaussian Naive Bayes (G-NB), k-Nearest Neighbors (k-NN), Support Vector Machines (SVM) with linear and radial basis function (RBF) kernels, and Random Forest (RF). We also combined these classifiers through ensemble and multi-view learning. In ensemble learning, multiple models/classifiers are generated and combined, using a majority vote or the average of class probabilities, to produce a single result <cit.>. In multi-view learning, multiple classifiers are trained in different feature spaces and then combined to produce a single result. This approach is an elegant alternative to combining all features in the same vector or space, for two main reasons. First, such a combination is not straightforward and may introduce noise, since the data have different natures. Second, using a different classifier for each feature space allows different weights to be given to each type of feature, and these weights can be learned by a regression method to improve the model. In this work, we used majority voting to combine the different feature spaces; a sketch of this evaluation setup is given below.

§ EXPERIMENTS AND RESULTS

All experiments were conducted using Scikit-learn[<http://scikit-learn.org>] <cit.>, with the classifiers evaluated on the basis of classification accuracy, i.e. the total proportion of narratives that were correctly classified. The evaluation was performed using 5-fold cross-validation instead of the more common 10-fold cross-validation because the datasets in our study were small, so the test sets would have shrunk, leading to less precise measurements of accuracy. The threshold parameter was optimized, with the best values being 0.7 for the Cookie Theft dataset and 0.4 for both the Cinderella and ABCD datasets. We used the model proposed by <cit.> with default parameters (100-dimensional embeddings, a context window equal to 5, and 5 epochs) to generate the word embeddings, training the models on Portuguese and English Wikipedia dumps from October and November 2016, respectively.

The classification accuracies are given in Tables <ref> through <ref>. CN, CNE, LM, and BoW denote, respectively, complex networks, complex networks enriched with embeddings, linguistic metrics, and Bag of Words, while CNE-LM, CNE-BoW, LM-BoW and CNE-LM-BoW refer to combinations of the feature spaces (multi-view learning) using majority voting. Cells with the "–" sign mean that it was not possible to apply majority voting because there were only two classifiers. The last line represents the use of an ensemble of machine learning algorithms, in which the combination used was majority voting, in both ensemble and multi-view learning.

In general, CNE outperforms the approach using only complex networks (CN), while SVM (linear or RBF kernel) provides higher accuracy than the other machine learning algorithms.
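The sketch below illustrates the 5-fold evaluation with majority-vote multi-view combination described above. It is a simplified illustration under our own assumptions (binary 0/1 labels, an odd number of views to avoid ties, a fixed SVM configuration), not the authors' exact experimental code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def multiview_majority_vote(views, y, n_splits=5, seed=42):
    """Train one classifier per feature space (e.g. CNE, LM, BoW) and
    combine the per-view test predictions by majority vote.

    views: list of feature matrices with aligned rows (one per transcript)
    y: binary labels in {0, 1}
    """
    y = np.asarray(y)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_scores = []
    for train, test in cv.split(views[0], y):
        votes = [SVC(kernel="rbf", gamma="scale").fit(X[train], y[train])
                     .predict(X[test]) for X in views]
        # With an odd number of binary votes, the mean > 0.5 rule is a
        # majority vote.
        majority = (np.mean(votes, axis=0) > 0.5).astype(int)
        fold_scores.append(accuracy_score(y[test], majority))
    return float(np.mean(fold_scores))
```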
The results for the three datasets show that characterizing transcriptions as complex networks is competitive with other traditional methods, such as the use of linguistic metrics. In fact, among the three types of features, enriched networks (CNE) provided the highest accuracies for two datasets (Cookie Theft and the original Cinderella). For the ABCD dataset, which contains short narratives, the small length of the transcriptions may have had an effect, since BoW features led to the highest accuracy. In the case of the revised Cinderella dataset, segmented into sentences and capitalized as reported in <cit.>, Table <ref> shows that the manual revision was an important factor, since the highest accuracies were obtained with the approach based on linguistic metrics (LM). However, this process of manually removing disfluencies demands time and is therefore not practical for large-scale assessments.

Ensemble and multi-view learning were helpful for the Cookie Theft dataset, for which multi-view learning achieved the highest accuracy (65% for narrative texts, a 3% improvement over the best individual classifier). However, neither multi-view nor ensemble learning enhanced accuracy on the Cinderella dataset, where SVM-RBF in the CNE feature space achieved the highest accuracy (65%). For the ABCD dataset, multi-view CNE-LM-BoW with the SVM-RBF and KNN classifiers improved the accuracy by 4% and 2%, respectively. Somewhat surprising were the results of SVM with a linear kernel in the BoW feature space (75% accuracy).

§ CONCLUSIONS AND FUTURE WORK

In this study, we employed metrics of the topological properties of CN in a machine learning classification approach to distinguish between healthy subjects and patients with MCI. To the best of our knowledge, these metrics had never been used to detect MCI in speech transcripts; here, the CN were enriched with word embeddings to better represent the short texts produced in neuropsychological assessments. The topological properties of CN outperform traditional linguistic metrics in the individual classifiers' results. Linguistic features depend on grammatical texts to yield good results, as can be seen in the results for the manually processed Cinderella dataset (Table <ref>). Furthermore, we found that combining classifiers through ensemble and multi-view learning can improve accuracy. The accuracies found here are comparable to the values reported by other authors, ranging from 60% to 85% <cit.>, which shows that it is not easy to distinguish between healthy subjects and those with cognitive impairments. A direct comparison with our results is not straightforward, though, because the databases used in those studies are different. There is a clear need for publicly available datasets for comparing different methods, which would optimize the detection of MCI in elderly people.

In future work, we intend to explore other methods to enrich CN, such as recurrent language models, and to use other metrics to characterize adjacency networks. The pursuit of these strategies is relevant because language is one of the most efficient information sources for evaluating cognitive functions and is commonly used in neuropsychological assessments. As this work is ongoing, we will keep collecting new transcriptions of the ABCD retelling subtest to increase the corpus size and obtain more reliable results. Our final goal is to port neuropsychological assessment batteries, such as the ABCD retelling subtest, to mobile devices, specifically tablets.
This adaptation will enable large-scale applications in hospitals and facilitate the maintenance of application histories in longitudinal studies, by storing the results in databases immediately after each test application.

§ ACKNOWLEDGMENTS

This work was supported by CAPES, CNPq, FAPESP, and a Google Research Award in Latin America. We would like to thank NVIDIA for the donation of a GPU.

§ SUPPLEMENTARY MATERIAL

Figure <ref> is the Cookie Theft picture, which was used in the DementiaBank project. Figure <ref> is a sequence of pictures from the Cinderella story, which were used to elicit the speech narratives.

§.§ Examples of transcriptions

Below follows an example of a transcript from the Cookie Theft dataset:

You just want me to start talking ? Well the little girl is asking her brother we 'll say for a cookie . Now he 's getting the cookie one for him and one for her . He unbalances the step the little stool and he 's about to fall . And the lid 's off the cookie jar . And the mother is drying the dishes abstractly so she 's left the water running in the sink and it is spilling onto the floor . And there are two there 's look like two cups and a plate on the sink and board . And that boy 's wearing shorts and the little girl is in a short skirt . And the mother has an apron on . And she 's standing at the window . The window 's opened . It must be summer or spring . And the curtains are pulled back . And they have a nice walk around their house . And there 's this nice shrubbery it appears and grass . And there 's a big picture window in the background that has the drapes pulled off . There 's a not pulled off but pulled aside . And there 's a tree in the background . And the house with the kitchen has a lot of cupboard space under the sink board and under the cabinet from which the cookie you know cookies are being removed .

Below follows an excerpt of a transcript from the Cinderella dataset.

Original transcript in Portuguese:

ela morava com a madrasta as irmã né e ela era diferenciada das três era maltratada ela tinha que fazer limpeza na casa toda no castelo alias e as irmãs não faziam nada até que um dia chegou um convite do rei ele ia fazer um baile e a madrasta então é colocou que todas as filhas elas iam menos a cinderela bom como ela não tinha o vestido sapato as coisas tudo então ela mesmo teve que fazer a roupa dela começou a fazer ...

Translation of the transcript in English:

she lived with the stepmother the sister right and she was differentiated from the three was mistreated she had to do the cleaning in the entire house actually in the castle and the sisters didn’t do anything until one day the king’s invitation arrived he would invite everyone to a ball and then the stepmother is said that all the daughters they would go except for cinderella well since she didn’t have a dress shoes all the things she had to make her own clothes she started to make them ...
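As a companion to the examples above, here is a rough sketch of the kind of automatic cleanup applied to the Cookie Theft transcripts in Section 4.1.1 (filled pauses, short false starts, immediate repetitions). The patterns are our simplified approximations of the TalkBank-style guidelines, not the exact procedure used in the paper.

```python
import re

FILLED_PAUSE = re.compile(r"\b(?:uh|um|er|ah)\b", re.IGNORECASE)
FALSE_START  = re.compile(r"\b(\w)\s+(\1\w+)\b")   # "just t the" -> "just the"
REPETITION   = re.compile(r"\b(\w+)(?:\s+\1\b)+")  # "the the" -> "the"

def clean_transcript(text):
    """Very rough approximation of the automatic disfluency removal."""
    text = FILLED_PAUSE.sub(" ", text)
    text = FALSE_START.sub(r"\2", text)
    text = REPETITION.sub(r"\1", text)
    return re.sub(r"\s{2,}", " ", text).strip()

# e.g. clean_transcript("uh it seems to be summer out")
#      -> "it seems to be summer out"
```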
§.§ Coh-Metrix-Dementia metrics

* Ambiguity: verb ambiguity, noun ambiguity, adjective ambiguity, adverb ambiguity;
* Anaphoras: adjacent anaphoric references, anaphoric references;
* Basic Counts: Flesch index, number of words, number of sentences, number of paragraphs, words per sentence, sentences per paragraph, syllables per content word, verb incidence, noun incidence, adjective incidence, adverb incidence, pronoun incidence, content word incidence, function word incidence;
* Connectives: connectives incidence, additive positive connectives incidence, additive negative connectives incidence, temporal positive connectives incidence, temporal negative connectives incidence, causal positive connectives incidence, causal negative connectives incidence, logical positive connectives incidence, logical negative connectives incidence;
* Co-reference Measures: adjacent argument overlap, argument overlap, adjacent stem overlap, stem overlap, adjacent content word overlap;
* Content Word Frequencies: content word frequency, minimum among content word frequencies;
* Hypernyms: mean hypernyms per verb;
* Logic Operators: logic operators incidence, and incidence, or incidence, if incidence, negation incidence;
* Latent Semantic Analysis (LSA): average and standard deviation of the similarity between pairs of adjacent sentences in the text; average and standard deviation of the similarity between all sentence pairs in the text; average and standard deviation of the similarity between pairs of adjacent paragraphs in the text; givenness average and standard deviation of each sentence in the text;
* Semantic Density: content density;
* Syntactical Complexity: only cross entropy;
* Tokens: personal pronoun incidence, type-token ratio, Brunet index, Honoré statistics.
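Finally, to complement the network measurements of Section 4.2.1, here is a minimal sketch of how the topological feature vector can be computed with networkx. This is our illustration; it assumes a connected graph (eccentricity and diameter are undefined otherwise), so in practice one might restrict to the largest connected component.

```python
import networkx as nx
import numpy as np
from scipy.stats import skew

def topological_features(g):
    """Mean, standard deviation and skewness of each local measurement,
    plus the global assortativity and diameter (cf. Section 4.2.1)."""
    avg_spl = {v: np.mean(list(nx.shortest_path_length(g, v).values()))
               for v in g}  # average distance from v to all nodes
    local_measures = [
        nx.pagerank(g),
        nx.betweenness_centrality(g),
        nx.eccentricity(g),
        nx.eigenvector_centrality(g, max_iter=1000),
        nx.average_neighbor_degree(g),
        avg_spl,
        dict(g.degree()),
        nx.clustering(g),
    ]
    feats = []
    for m in local_measures:
        vals = np.fromiter(m.values(), dtype=float)
        feats.extend([vals.mean(), vals.std(), skew(vals)])
    feats.append(nx.degree_assortativity_coefficient(g))
    feats.append(nx.diameter(g))
    return np.asarray(feats)
```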
http://arxiv.org/abs/1704.08088v1
{ "authors": [ "Leandro B. dos Santos", "Edilson A. Corrêa Jr", "Osvaldo N. Oliveira Jr", "Diego R. Amancio", "Letícia L. Mansur", "Sandra M. Aluísio" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170426130625", "title": "Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts" }
The orbit method for the Baum-Connes Conjecture for algebraic groups over local function fields

Mathematisches Institut der WWU Münster, Einsteinstrasse 62, 48149 Münster, Germany [email protected]
Mathematisches Institut der WWU Münster, Einsteinstrasse 62, 48149 Münster, Germany [email protected]
Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark [email protected]

^1 Supported by Deutsche Forschungsgemeinschaft (SFB 878, Groups, Geometry and Actions).
^2 Supported by the Danish Council for Independent Research (DFF-5051-00037) and partially supported by the DFG (SFB 878).
^3 Supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92).

The main purpose of this paper is to modify the orbit method for the Baum-Connes conjecture, as developed by Chabert, Echterhoff and Nest in their proof of the Connes-Kasparov conjecture for almost connected groups <cit.>, in order to deal with linear algebraic groups over local function fields (i.e., non-archimedean local fields of positive characteristic). As a consequence, we verify the Baum-Connes conjecture for certain Levi-decomposable linear algebraic groups over local function fields. One of these is the Jacobi group, which is the semidirect product of the symplectic group and the Heisenberg group.

Ryszard Nest^3

§ INTRODUCTION

The Baum-Connes conjecture was first introduced by Paul Baum and Alain Connes in 1982 <cit.>. However, the paper was published 18 years later, and the current formulation of the conjecture was given in <cit.>. The origin of the conjecture goes back to Connes' foliation theory <cit.> and Baum's geometric description of K-homology theory <cit.>.

Consider a locally compact second countable group G and a separable G-C^*-algebra A. Let E(G) denote a locally compact universal proper G-space. Such a space always exists and is unique up to G-homotopy equivalence (see <cit.> and <cit.> for details). The topological K-theory of G with coefficient A is defined as

K_*^top(G;A) := lim_X KK^G_*(C_0(X),A),

where X runs through all G-invariant subspaces of E(G) such that X/G is compact, and KK^G_*(C_0(X),A) denotes Kasparov's equivariant KK-theory. Let A⋊_rG denote the reduced crossed product of A by G. Then the Baum-Connes conjecture with coefficient A states that the assembly map

μ_A: K_*^top(G;A) → K_*(A⋊_rG),

as constructed by Baum, Connes and Higson in <cit.>, is an isomorphism of abelian groups. If this holds, we say that G satisfies BC (the Baum-Connes conjecture) for A.

The Baum-Connes conjecture has been verified for some large families of groups. In particular, Higson and Kasparov in <cit.> proved the Baum-Connes conjecture with arbitrary coefficients for all locally compact second countable groups having the Haagerup property (e.g. amenable groups). In <cit.> Lafforgue proved the Baum-Connes conjecture for ℂ for all reductive Lie groups whose semi-simple part has finite center and for all reductive linear algebraic groups over non-archimedean local fields. Later, Chabert, Echterhoff and Nest in <cit.> used these results, together with the permanence properties of the conjecture studied in <cit.>, to verify the Connes-Kasparov conjecture, i.e., the Baum-Connes conjecture with trivial coefficients, for all almost connected second countable groups.
Using similar methods, they also settled BC with trivial coefficient ℂ for all linear algebraic groups over local fields of characteristic zero. However, the conjecture (even for ℂ) is still open for linear algebraic groups over local fields of positive characteristic.

A central step in the proof of the Connes-Kasparov conjecture in <cit.> was the development of an orbit method for proving the Baum-Connes conjecture with coefficients, built after the Mackey-Rieffel-Green orbit method for computing irreducible representations of crossed products via the orbit structure for the action of G on Prim(A). In Section 2 we shall prove a slightly more general version of the orbit method for the Baum-Connes conjecture with coefficient A, which works particularly well if A is a type I C*-algebra. This result, together with the permanence results for group extensions obtained in <cit.>, then allows us to formulate an orbit method for proving the Baum-Connes conjecture with trivial coefficients for a group G which is an extension 1→ N→ G→ G/N→ 1 such that N and G/N satisfy suitable properties.

The idea of using the orbit method for the Baum-Connes conjecture first appeared in <cit.>, where Chabert and Echterhoff proved the conjecture for k^n⋊GL_n(k) for an archimedean local field k. In order to apply the orbit method to linear algebraic p-adic groups we need Kirillov's orbit method for p-adic unipotent groups, which was established by Moore in <cit.>. Recently, based on Howe's early work in <cit.>, Echterhoff and Klüver obtained a version of Kirillov's orbit method for unipotent groups defined over local fields of positive characteristic p, provided that the nilpotence length is less than p (see <cit.>). Combining this with the general results obtained in Section 2, we shall prove the following results in Section 3:

Theorem [see Corollary <ref>]. Let k be a non-archimedean local field of positive characteristic p and let 𝔫 be a finite-dimensional nilpotent k-Lie algebra with nilpotence length l<p. Let N:=exp(𝔫) be the corresponding unipotent k-group and let G be a locally compact second countable exact group. Suppose that G acts continuously on N(k) such that all G-orbits in the unitary dual N(k)^∧ are locally closed. Then the semidirect product group N(k)⋊ G satisfies BC for ℂ if the stabilizer G_π satisfies BC for 𝒦(H_π) for all [π]∈N(k)^∧.

Notice that we do not require G to be a linear algebraic group in Theorem <ref>. If we only consider linear algebraic groups which admit a Levi decomposition over local function fields, we obtain the following simplified version:

Theorem [see Theorem <ref>]. Let G be a linear algebraic group with a Levi decomposition G=N⋊ R over a non-archimedean local field k of positive characteristic p. Assume that N=exp(𝔫) for a finite-dimensional nilpotent k-Lie algebra 𝔫 which has nilpotence length l<p. Then G(k)=N(k)⋊ R(k) satisfies BC for ℂ if the stabilizer R(k)_π satisfies BC for 𝒦(H_π) for all [π]∈N(k)^∧.

As a consequence, we obtain the following corollary:

Corollary [see Corollary <ref> and Corollary <ref>]. Let k be a non-archimedean local field of positive characteristic p>2 and let n∈ℕ.
Then the following groups satisfy BC for ℂ:

(1) k^n⋊GL_n(k), where GL_n(k) acts on k^n by matrix multiplication.
(2) k⋊GL_n(k), where GL_n(k) acts on k by g.v=det(g)^p·v for g∈GL_n(k) and v∈k.
(3) H_2n+1(k)⋊Sp_2n(k), where H_2n+1(k) denotes the (2n+1)-dimensional Heisenberg group over k and the symplectic group Sp_2n(k) acts on k^2n≅H_2n+1(k)/Z(H_2n+1(k)) by matrix multiplication and trivially on the center Z(H_2n+1(k))≅k.
(4) k^2n⋊Sp_2n(k), where Sp_2n(k) acts on k^2n by matrix multiplication.
(5) H_2n+1(k)⋊Mp_2n(k), where the metaplectic group Mp_2n(k) acts on H_2n+1(k) through the action of Sp_2n(k).
(6) k^2n⋊Mp_2n(k), where Mp_2n(k) acts on k^2n through the action of Sp_2n(k).

In Corollary <ref> the condition p>2 is not necessary for (1) and (2).

Acknowledgments. The second-named author would like to thank Roger Howe, Vincent Lafforgue, George McNinch and Maarten Solleveld for helpful and enlightening discussions on a wide variety of topics.

§ THE ORBIT METHOD FOR THE BAUM-CONNES CONJECTURE WITH COEFFICIENTS

In this section we want to discuss a general orbit method, inspired by Mackey's theory of induced representations, for proving the Baum-Connes conjecture for type I coefficients A, depending on the conjecture for the stabilisers G_π of the action of G on the space Â of equivalence classes of irreducible representations of A. The results of this section slightly extend similar results obtained in <cit.>.

We need to recall a classical theorem of Glimm. Recall that a (not necessarily Hausdorff) topological space is called

* locally compact if every point has a neighborhood basis consisting of compact sets, and
* almost Hausdorff if every non-empty closed subset contains a non-empty relatively open Hausdorff subset.

Moreover, a subset Y⊆X is called locally closed in X if Y is relatively open in its closure Y̅⊆X. Equivalently, Y is an intersection of an open set and a closed set in X. It is well-known that every locally closed subset of a locally compact space is locally compact. Conversely, every locally compact subset of an almost Hausdorff locally compact space is necessarily locally closed.

Theorem (Glimm). Let G be a locally compact second countable group and let X be a locally compact second countable almost Hausdorff G-space. Then the following are equivalent:

(1) Each orbit G·x is locally closed.
(2) For each x∈X, the map gG_x↦g·x is a homeomorphism from G/G_x onto G·x.
(3) The orbit space X/G is almost Hausdorff.
(4) There exists a sequence of G-invariant open subsets {U_ν}_ν of X, where ν runs through the ordinal numbers, such that
(a) U_ν⊆U_ν+1 for each ν and (U_ν+1∖U_ν)/G is Hausdorff;
(b) if ν is a limit ordinal, then U_ν=⋃_μ<ν U_μ;
(c) there exists an ordinal number ν_0 such that X=U_ν_0.

If X is a second countable locally compact almost Hausdorff G-space which satisfies the equivalent conditions (1),…,(4) of Glimm's theorem, we say that the action of G on X is smooth (or that X is a smooth G-space).

We want to use Glimm's theorem to formulate a general orbit method for the Baum-Connes conjecture. For maximal flexibility we want to consider C*-algebras which can be thought of as section algebras of fields of C*-algebras over an almost Hausdorff space X, in the sense that there exists a continuous map φ: Prim(A)→X. In this situation, if Y⊆X is any locally closed subset, we may identify the closed subset φ^-1(Y̅) with the primitive ideal space of the quotient A/I_Y, with I_Y:=⋂{P: P∈φ^-1(Y̅)}. In a second step, we can identify the open subset φ^-1(Y) of φ^-1(Y̅)=Prim(A/I_Y) with the primitive ideal space Prim(J_Y/I_Y), where J_Y:=⋂{Q: Q∈φ^-1(Y̅∖Y)}.
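As a simple sanity check for this construction (our own example, not taken from the paper), consider the commutative case:

```latex
% For A = C_0(X) with X locally compact Hausdorff, phi = id on
% Prim(C_0(X)) = X, and Y a locally closed subset of X, one gets
\[
  I_Y = C_0\bigl(X \smallsetminus \overline{Y}\bigr), \qquad
  J_Y = C_0\bigl(X \smallsetminus (\overline{Y} \smallsetminus Y)\bigr), \qquad
  J_Y/I_Y \;\cong\; C_0(Y),
\]
% so the subquotient attached to Y consists exactly of the functions
% on Y, as one would expect.
```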
Hence for every locally closed subset Y of X we obtain a canonical subquotient A_Y:=J_Y/I_Y of A such that

φ^-1(Y) ≅ Prim(A_Y).

In particular, since every one-point set {x} of an almost Hausdorff space X is locally closed, we obtain a canonical subquotient A_x:=J_x/I_x such that φ^-1({x})≅Prim(A_x). We call A_x the fibre of A over x.

We should note that if X is Hausdorff, then it follows from the Dauns-Hofmann theorem (see e.g. <cit.>) that the continuous function φ: Prim(A)→X induces a unique structure Φ: C_0(X)→𝒵ℳ(A) of A as a C_0(X)-algebra (here Φ is a nondegenerate *-homomorphism into the centre 𝒵ℳ(A) of the multiplier algebra of A) such that

I_x=⋂{P: P∈φ^-1({x})}=Φ(C_0(X∖{x}))A.

It follows that the fibre A_x constructed above coincides with the usually defined fibre A_x=A/(Φ(C_0(X∖{x}))A) of the C_0(X)-algebra A. Moreover, if φ: Prim(A)→X is continuous and open, then the corresponding C_0(X)-algebra is the section algebra of a continuous field of C*-algebras over X with fibres A_x. For a reference see <cit.>.

We are now ready for the formulation of the general orbit method for BC with coefficients:

Theorem. Let α: G→Aut(A) be an action of an exact second countable group G on a separable C*-algebra A. Suppose that X is a smooth G-space as in Definition <ref> and that φ: Prim(A)→X is a continuous and open G-equivariant map. Then for each x∈X the action α of G on A factors through a well-defined action α_x of the stabiliser G_x on the fibre A_x over x. Suppose further that the following two conditions are satisfied:

* For each x∈X the stabiliser G_x satisfies BC for A_x.
* The sequence of G-invariant open subsets {U_ν}_ν of X as in item (4) of Glimm's theorem can be chosen such that (U_ν+1∖U_ν)/G is either totally disconnected or homeomorphic to the geometric realisation of a finite-dimensional simplicial complex.

Then G satisfies BC for A.

For the proof we need the following slight generalisation of <cit.>:

Proposition. Suppose that α: G→Aut(A) is an action of the second countable exact group G on the separable C*-algebra A and let X be a second countable locally compact Hausdorff space equipped with the trivial G-action. Suppose further that φ: Prim(A)→X is a continuous open G-invariant map such that the following two conditions are satisfied:

* For all x∈X the group G satisfies BC for A_x.
* X is either totally disconnected or homeomorphic to the geometric realisation of a finite-dimensional simplicial complex.

Then G satisfies BC for A.

Remark. It is really annoying that we are not able to remove the condition that X is totally disconnected or the geometric realisation of a finite-dimensional simplicial complex. Although these assumptions are satisfied in many important situations, they result in the corresponding technical formulations for the ascending series of sets {U_ν}_ν in the formulation of the orbit method for BC. It would therefore be most desirable to give a proof of the above proposition without these assumptions! We should also note that the statement in Proposition <ref> implies that A is the section algebra of a continuous field of C*-algebras over X with fibres A_x such that the action of G on A is C_0(X)-linear (i.e., α_g(Φ(f)a)=Φ(f)α_g(a) for all g∈G, f∈C_0(X) and a∈A).

Proof. Note first that the only difference to the statement of <cit.> is the fact that we do not require that G has a γ-element. To show that the proof given in <cit.> still holds without the assumption of a γ-element, we need a substitute of <cit.>.
For this we use a recent result of <cit.>, extending an earlier result by Ozawa <cit.>, showing that a locally compact second countable group G is exact if and only if it is amenable at infinity, which means that there exists a topologically amenable action of G on a second countable compact Hausdorff space Z, say. It has been shown by Higson <cit.> (see <cit.> for the non-discrete case) that this implies split injectivity of the Baum-Connes assembly map for every coefficient algebra B. We need to recall the arguments for this fact: if Y denotes the space of probability measures on Z, then Y is K-equivariantly contractible for any compact subgroup K of G. Following the arguments of Higson in <cit.> (see also <cit.>), we see that for each G-C*-algebra B we obtain a commutative diagram

K_*^top(G;B)          --μ_B-->          K_*(B⋊_r G)
      |(ι_B)_*                                |(ι_B⋊_r G)_*
      ↓                                       ↓
K_*^top(G;C(Y)⊗B)   --μ_{C(Y)⊗B}-->   K_*((C(Y)⊗B)⋊_r G)

where ι_B:B→C(Y)⊗B is given by ι_B(b)=1_Y⊗b. In this diagram the left vertical map is an isomorphism by an application of the Going-Down principle <cit.> (see the proof of <cit.>). The lower horizontal map is an isomorphism by a result of Tu (combine the main result of <cit.> applied to the amenable groupoid Y⋊G with <cit.>). We therefore obtain a splitting homomorphism σ_B:K_*(B⋊_r G)→K_*^top(G;B) for the assembly map μ_B given by σ_B=(ι_B)_*^-1∘μ_{C(Y)⊗B}^-1∘(ι_B⋊_r G)_*. In particular, we see that γ_B:=μ_B∘σ_B∈End(K_*(B⋊_r G)) is an idempotent with γ_B(K_*(B⋊_r G))=μ_B(K_*^top(G;B)) for all B, and it follows that G satisfies the Baum-Connes conjecture for B if and only if (1-γ_B)·K_*(B⋊_r G)={0}. Now, since G is exact, we know that given a short exact sequence of G-algebras 0→J→B→B/J→0, all maps in diagram (<ref>) are natural with respect to the various six-term exact sequences in K-theory and topological K-theory which are attached to the sequence 0→J→B→B/J→0 (see <cit.> for the case of the assembly maps). It follows from this that we obtain an exact sequence of the form

(1-γ_J)·K_0(J⋊_r G) → (1-γ_B)·K_0(B⋊_r G) → (1-γ_{B/J})·K_0(B/J⋊_r G)
        ↑∂                                                        ↓∂
(1-γ_{B/J})·K_1(B/J⋊_r G) ← (1-γ_B)·K_1(B⋊_r G) ← (1-γ_J)·K_1(J⋊_r G).

This is now a complete analogue of <cit.>, and the proof of the proposition follows exactly as the proof of <cit.>.

We first need to verify that for all x∈X the action of G on A factors through an action of G_x on the fibre A_x. To see this observe that A_x=J_x/I_x with J_x=∩{P: P∈φ^-1({x}^-∖{x})} and I_x=∩{P: P∈φ^-1({x}^-)}, where {x}^- denotes the closure of the one-point set {x} in X. Since φ is G-equivariant and both {x} and {x}^- are G_x-invariant subsets of X, both ideals are G_x-invariant. But this implies that the action of G_x on A factors through a well-defined action on A_x.

Let {U_ν}_ν be a sequence of G-invariant open sets as in the theorem and for each index ν let X_ν:=U_ν+1∖U_ν. Then X_ν⊆X is a locally closed G-invariant subset of X and therefore the action of G on A factors through an action α^ν of G on A_ν:=A_X_ν for each ordinal ν. We first show that G satisfies BC for A_ν for all ν. For this recall that by construction of A_ν we have Prim(A_ν)≅φ^-1(X_ν). Let Y_ν:=X_ν/G and let ψ:Prim(A_ν)→Y_ν denote the composition of φ:Prim(A_ν)→X_ν with the quotient map X_ν→Y_ν. Then ψ is a continuous open G-invariant map and, by the assumptions made in the theorem, it follows from Proposition <ref> that G satisfies BC for A_ν if we can show that G satisfies BC for A_y for all y∈Y_ν, where here A_y is the fibre over y∈Y_ν with respect to the map ψ.
By construction we have Prim(A_y)=ψ^-1({y})=φ^-1(G·x) if y=G·x, and therefore A_y=A_{G·x}, where A_{G·x} is the subquotient of A corresponding to the locally closed subset G·x={gx: g∈G} of X with respect to φ. It follows from item (2) in Glimm's theorem that G/G_x is G-homeomorphic to G·x via gG_x↦gx. Via the identification G/G_x≅G·x we may view φ as a continuous and G-equivariant map φ:Prim(A_y)→G/G_x. It follows then from <cit.> that A_y is isomorphic to the induced algebra Ind_G_x^G A_x. Since, by assumption, G_x satisfies BC for A_x, it follows from <cit.> that G satisfies BC for A_y=Ind_G_x^G A_x.

In order to complete the proof, let A_U_ν denote the ideal of A corresponding to the open subset U_ν⊆X. We now want to show that G satisfies BC for A_U_ν for all ν. We do this by transfinite induction. For ν=1 we have A_U_1=A_0 (if we put U_0:=∅, and hence X_0=U_1), and hence G satisfies BC for A_U_1 by the arguments in the above paragraph. Assume now that we know that G satisfies BC for A_U_ν for some ν. We have a short exact sequence 0→A_U_ν→A_U_ν+1→A_ν→0 of G-C*-algebras. By the above paragraph we then know that G satisfies BC for A_U_ν and for A_ν. Since G is exact, it follows from <cit.> that G satisfies BC for A_U_ν+1 as well. Finally, if ν is a limit ordinal, then U_ν=∪_μ<ν U_μ and therefore A_U_ν=∪_μ<ν A_U_μ=lim_μ<ν A_U_μ, an inductive limit. It follows then from <cit.> that G satisfies BC for A_U_ν. Since X=U_ν_0 for some ordinal number ν_0, we may now conclude that G satisfies BC for A.

The spectrum Â of a C^*-algebra A is the set of all unitary equivalence classes of irreducible representations of A. If A is a separable type I C^*-algebra, then Â equipped with the Jacobson topology is a locally compact second countable almost Hausdorff space and the map [π]↦ker π is a homeomorphism between Â and Prim(A). Thus in the case of type I coefficient algebras A one could choose X=Â and φ:Prim(A)→X the inverse of this homeomorphism. In this case the fibre A_[π] for a class [π]∈Â is isomorphic to 𝒦(H_π), the algebra of compact operators on the Hilbert space H_π of π. To see this recall that by the type I condition we have 𝒦(H_π)⊆π(A). Let J=π^-1(𝒦(H_π)). Then one easily checks that J=∩{ker σ: [σ]∈{[π]}^-∖{[π]}}=J_[π] and hence A_[π]=J_[π]/I_[π]=J/ker π≅𝒦(H_π).

Hence, as a corollary of Theorem <ref> we get:

Suppose that G is an exact group acting on the separable type I C*-algebra A such that the action on Â is smooth. Suppose further that there is an ascending sequence {U_ν}_ν of open G-invariant subsets of Â as in Glimm's theorem such that all difference sets (U_ν+1∖U_ν)/G are either totally disconnected or the geometric realisations of finite-dimensional simplicial complexes. Then G satisfies BC for A if each stabiliser G_π satisfies BC for 𝒦(H_π).

If G is a locally compact group, then we may identify its unitary dual Ĝ with the spectrum of the full group C^*-algebra C^*(G) as topological spaces, and we write Ĝ_r for the subspace of Ĝ corresponding to the quotient C^*_r(G) of C^*(G). If N is a closed normal subgroup of G, then we may decompose C_r^*(G) as a reduced twisted crossed product C_r^*(N)⋊_(α,τ),r(G,N) in the sense of Philip Green <cit.>.
It follows from <cit.> that there exists an ordinary action β:G/N→Aut(B) on a C*-algebra B which is Morita equivalent to the decomposition twisted action (α,τ) of (G,N) on C_r^*(N). If C_r^*(N) is type I and if G_π⊆G denotes the stabiliser of [π]∈N̂_r under the conjugation action, then N⊆G_π and, similarly to the case of ordinary actions as explained above, we obtain a twisted action of (G_π,N) on 𝒦(H_π) for all [π]∈N̂_r. Since (G,N)-equivariant Morita equivalences induce G/N-equivariant homeomorphisms between the dual spaces and preserve the type I condition (e.g., see <cit.>), the given class [π]∈N̂_r corresponds to a class [π̃]∈B̂ such that G_π/N=(G/N)_π̃ and the corresponding action of (G/N)_π̃ on 𝒦(H_π̃) is Morita equivalent to the twisted action of (G_π,N) on 𝒦(H_π). In <cit.> the authors extended the Baum-Connes conjecture to the case of twisted actions and showed in particular that the conjecture is invariant under Morita equivalence of twisted actions. Therefore (G,N) satisfies BC for C_r^*(N) if and only if G/N satisfies BC for B, and (G_π,N) satisfies (twisted) BC for 𝒦(H_π) if and only if (G/N)_π̃ satisfies BC for 𝒦(H_π̃). Using this, we now get:

Suppose that N has the Haagerup property and is a closed normal subgroup of a second countable locally compact group G such that the following conditions are satisfied:
* G/N is an exact group.
* C_r^*(N) is type I and the action of G/N on N̂_r is smooth.
* For all π∈N̂_r the pair (G_π,N) satisfies (twisted) BC for 𝒦(H_π).
* There exists a sequence of G-invariant open subsets {U_ν}_ν of N̂_r as in item (4) of Glimm's theorem such that (U_ν+1∖U_ν)/G is either totally disconnected or homeomorphic to the geometric realisation of a finite-dimensional simplicial complex.
Then G satisfies the Baum-Connes conjecture for ℂ.

Since N is assumed to have the Haagerup property, it follows from <cit.> that G satisfies BC for ℂ if and only if (G,N) satisfies BC for C_r^*(N) with respect to the twisted decomposition action (α,τ). Let β:G/N→Aut(B) be an ordinary action of G/N which is Morita equivalent to the twisted action (α,τ). Then all assumptions of the corollary carry over to the action β; in particular, each stabiliser (G/N)_π̃ satisfies BC for 𝒦(H_π̃) for all π̃∈B̂. Thus it follows from Corollary <ref> that G/N satisfies BC for B. But by Morita equivalence we then conclude that (G,N) satisfies BC for C_r^*(N).

If in the above corollary the group G is a semidirect product N⋊G/N (i.e., the extension 1→N→G→G/N→1 is split), then there exists an ordinary action β:G/N→Aut(C_r^*(N)) such that C_r^*(G)≅C_r^*(N)⋊_β,r G/N. In this case it is not necessary to use twisted actions and we can work directly with the action of G/N on C_r^*(N) in the above corollary. This will be the situation in most of our examples below.

§ BC FOR CERTAIN ALGEBRAIC GROUPS OVER LOCAL FUNCTION FIELDS

In this section we will use the orbit method for group extensions to verify the Baum-Connes conjecture for groups of k-rational points of some Levi-decomposable linear algebraic groups over a local function field k. We begin with a recapitulation of Kirillov's orbit method for unipotent groups N, which describes the unitary dual N̂ in terms of the co-adjoint orbits in 𝔫^*, where 𝔫 denotes a suitable version of a Lie algebra for N. Let k be a (non-archimedean) local field of positive characteristic p and let 𝔫 be any finite-dimensional nilpotent Lie algebra over k with nilpotence length l<p.
It is well-known from <cit.> that 𝔫 admits a faithful finite-dimensional linear representation by nilpotent matrices, which we can simultaneously triangularise. Hence, the Campbell-Hausdorff formula implies that N:=exp(𝔫) is a linear algebraic unipotent group defined over k and its Lie algebra is 𝔫. Here we write N for its k-rational points N(k) for simplicity. The adjoint action Ad:N→GL(𝔫) is defined by Ad(n)(X):=log(n exp(X) n^-1) for n∈N and X∈𝔫, where log:N→𝔫 denotes the inverse of exp. Let 𝔫^* denote the space of k-linear functionals on 𝔫 and let Ad^*:N→GL(𝔫^*) be the co-adjoint action given by Ad^*(n)(f)=f∘Ad(n^-1) for n∈N and f∈𝔫^*; then Ad^* is a k-rational action as p>l. Hence, all Ad^*(N)-orbits are locally closed in the Hausdorff topology of 𝔫^* (see <cit.> or Theorem <ref>). It turns out that the quotient space 𝔫^*/Ad^*(N) is homeomorphic to the unitary dual N̂ of N.

For a description of this homeomorphism, let Λ_l denote the ring ℤ[1/l!] for l the nilpotence length of 𝔫. Then 𝔫 becomes a Lie algebra over the ring Λ_l as considered in <cit.>. Now we fix a character ε∈k̂ of order zero (see <cit.>). For f∈𝔫^*, let 𝔪_f be a maximal closed Λ_l-Lie subring of 𝔫 which is subordinate to f in the sense that ε∘f([𝔪_f,𝔪_f])={0}, where [·,·] denotes the Lie bracket. If M_f:=exp(𝔪_f)⊆N, then the map m↦χ_f(m):=ε(f(log(m))) defines a character of M_f. The following theorem then follows from <cit.> or <cit.> (observe that in this situation we have 𝔫^*≅𝔫̂ via f↦ε∘f by <cit.>).

In the above situation the induced representation π_f:=Ind_M_f^N χ_f is an irreducible unitary representation of N and its equivalence class does not depend on the choice of 𝔪_f. The resulting map 𝔫^*→N̂, given by f↦π_f, is constant on Ad^*(N)-orbits and induces a homeomorphism between 𝔫^*/Ad^*(N) and N̂. Moreover, the group C^*-algebra C^*(N) is type I.

Let N=exp 𝔫 be as in the above discussion. Let R be any locally compact (not necessarily algebraic) group acting continuously on N and let G:=N⋊R. As before, we define the adjoint action Ad of G on 𝔫 by the formula Ad(g)(X):=log(g exp(X) g^-1) for g∈G and X∈𝔫. Let Ad^*:G→GL(𝔫^*) be its co-adjoint action given by Ad^*(g)(f)=f∘Ad(g^-1) for g∈G and f∈𝔫^*. Let 𝔪_f be a maximal closed Λ_l-Lie subring of 𝔫 subordinate to f. It is straightforward to see that Ad(g)(𝔪_f) is a maximal Λ_l-Lie subring of 𝔫 subordinate to Ad^*(g)(f), hence we may choose 𝔪_Ad^*(g)(f)=Ad(g)(𝔪_f) for g∈G and f∈𝔫^*. Hence, M_Ad^*(g)(f)=exp(𝔪_Ad^*(g)(f))=exp(Ad(g)(𝔪_f))=gM_f g^-1. Let χ^g_f:gM_f g^-1→𝕋 be the character of gM_f g^-1 given by χ^g_f(gmg^-1):=χ_f(m), m∈M_f. Then a short computation shows that χ_f^g=χ_Ad^*(g)(f), which implies that π_Ad^*(g)(f):=Ind^N_M_Ad^*(g)(f) χ_Ad^*(g)(f)=Ind^N_gM_f g^-1 χ_f^g≅(Ind^N_M_f χ_f)^g=π_f^g, where π_f^g(n):=π_f(g^-1 n g) denotes the g-conjugate of π_f in N (see e.g. <cit.>). If g=(n,r)∈N⋊R and f∈𝔫^*, then π_Ad^*(g)(f)≅π_f^g=(π_f^r)^n≅π_f^r. It follows that the Kirillov-orbit homeomorphism 𝔫^*/Ad^*(N)→N̂ is Ad^*(R)-R equivariant if we define the action of R on N̂ by r·π:=π^r. In particular, the Kirillov map induces a homeomorphism 𝔫^*/Ad^*(G)=(𝔫^*/Ad^*(N))/Ad^*(R)≅N̂/R.

Let k be a local field of positive characteristic p. Then we have (x+y)^p=x^p+y^p for all x,y∈k. The fake Heisenberg group is defined as follows:

N:={ [ 1 a b; 0 1 a^p; 0 0 1 ] : a,b∈k }.

It is a non-abelian k-split connected unipotent linear algebraic group with nilpotence length two. Consider its Lie ring

𝔫:=log(N)={ [ 0 x y; 0 0 x^p; 0 0 0 ] : x,y∈k }.

Since k is not perfect, 𝔫 is not stable under scalar multiplication. In particular, 𝔫 is not a vector subspace of M_3(k) and is different from Lie(N), the Lie algebra associated to N in the sense of algebraic geometry. Hence, 𝔫^* does not make sense as a set of linear functionals from 𝔫 to k. But the orbit method still makes sense if we replace 𝔫^* by the Pontryagin dual 𝔫̂ of the additive group 𝔫 (see <cit.>). However, we do not know whether all Ad^*(N)-orbits are locally closed in the Hausdorff topology of 𝔫̂.
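For orientation, it may help to contrast this with the genuine three-dimensional Heisenberg group, where the co-adjoint orbit picture is classical. The following is our own sketch, under the standard basis convention [X,Y]=Z (an assumption not fixed in the text above), and it matches the Stone-von Neumann description of Ĥ_3(k) used later:

% Sketch: co-adjoint orbits of the 3-dimensional Heisenberg algebra
% \mathfrak{h}_3 = \operatorname{span}_k\{X,Y,Z\}, [X,Y]=Z, Z central (l=2<p).
% For g = \exp(xX+yY+zZ) and f = \alpha X^* + \beta Y^* + \gamma Z^*:
\mathrm{Ad}^*(g)f \;=\; (\alpha+\gamma y)\,X^* + (\beta-\gamma x)\,Y^* + \gamma\,Z^*.
% Orbits: for \gamma \neq 0, the affine plane \{f : f(Z)=\gamma\}
%   (Kirillov: the infinite-dimensional representation \pi_\gamma);
% for \gamma = 0, the singletons \{\alpha X^* + \beta Y^*\}
%   (the unitary characters of N).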
Let k be a non-archimedean local field of positive characteristic p and let 𝔫 be a finite-dimensional nilpotent k-Lie algebra with nilpotence length l<p. Let N:=exp(𝔫) be the corresponding unipotent k-group and let R be a locally compact second countable exact group. Suppose that R acts continuously on N such that all R-orbits in N̂ are locally closed and that for all [π]∈N̂ the stabilisers R_π={g∈R: g·[π]=[π]} satisfy BC for 𝒦(H_π). Then the semidirect product group G=N⋊R satisfies BC for ℂ.

Since nilpotent groups are amenable, we have C^*(N)=C_r^*(N). According to Theorem <ref>, C^*(N) is type I and the Kirillov-orbit map 𝔫^*/Ad^*(N)→N̂ is a homeomorphism. Since all R-orbits in N̂ are locally closed, it follows from Glimm's Theorem that there exists a sequence of G-invariant open subsets {U_ν}_ν of N̂ such that (U_ν+1∖U_ν)/G is Hausdorff. By Corollary <ref> it suffices to show that the Hausdorff space X_ν:=(U_ν+1∖U_ν)/G is totally disconnected for every ν. By Remark <ref> we may identify N̂/G≅N̂/R with 𝔫^*/Ad^*(G) as topological spaces. If p:𝔫^*→𝔫^*/Ad^*(G) denotes the orbit map, then p^-1(X_ν) is locally closed in the locally compact Hausdorff totally disconnected space 𝔫^*. Hence, p^-1(X_ν) is a locally compact Hausdorff totally disconnected Ad^*(G)-invariant space, and p restricts to the orbit map from p^-1(X_ν) onto X_ν=p^-1(X_ν)/Ad^*(G). We conclude that X_ν is totally disconnected.

Recall that the group G=𝐆(k) of k-rational points of a linear algebraic group 𝐆 is isomorphic to a Zariski closed (in particular, Hausdorff closed) subgroup of GL_n(k) for some n∈ℕ. The unipotent radical 𝐍 of 𝐆 is the largest connected unipotent normal subgroup of 𝐆(k̅), where k̅ is an algebraic closure of k. A linear algebraic group is called reductive if its unipotent radical is trivial. One says that a linear algebraic group 𝐆 defined over a field k has a Levi decomposition over k if its unipotent radical 𝐍 is defined over k and there exists a Zariski closed reductive k-subgroup 𝐑, called a Levi factor, of 𝐆 such that the product mapping 𝐍⋊𝐑→𝐆 given by (x,y)↦xy is a k-isomorphism of algebraic groups. In particular, G≅N⋊R if N=𝐍(k) and R=𝐑(k) denote the k-rational points of 𝐍 and 𝐑, respectively. It is well-known that linear algebraic groups in characteristic zero always have a Levi decomposition (see e.g. <cit.>). When the field has positive characteristic, 𝐆 need not have a Levi factor and the unipotent radical of 𝐆 is in general not defined over the field (see <cit.> for more details about Levi decompositions over fields of positive characteristic). Therefore, we will only consider linear algebraic groups which have a Levi decomposition over a local function field.

The orbit method for the Baum-Connes conjecture for the group G=𝐆(k) of k-rational points of a linear algebraic group 𝐆 over a local function field k can be simplified: since GL_n(k) admits a solvable cocompact closed subgroup, it follows from <cit.> that GL_n(k) is an exact group.
Hence, the closed subgroup G⊆GL_n(k) is also exact by <cit.>. Moreover, we have the following well-known theorem about locally closed orbits:

Let k be a non-archimedean local field and let 𝐗 be an algebraic k-variety. If 𝐆 is a linear algebraic k-group which acts k-rationally on 𝐗, then the induced action of G=𝐆(k) on X=𝐗(k) is continuous and all G-orbits in X are locally closed.

The following theorem is now almost an immediate consequence of Corollary <ref>:

Let 𝐆 be a linear algebraic group with a Levi decomposition 𝐆=𝐍⋊𝐑 over a non-archimedean local field k of positive characteristic p. Let N=𝐍(k), R=𝐑(k) and G=N⋊R (=𝐆(k)). Assume that N=exp(𝔫) for a finite-dimensional nilpotent k-Lie algebra 𝔫 with nilpotence length l<p. Then G=N⋊R satisfies BC for ℂ if all stabilisers R_π satisfy BC for 𝒦(H_π) for all [π]∈N̂.

By Corollary <ref> it suffices to show that R acts continuously on N and all R-orbits in N̂ are locally closed. The continuity of the action follows directly from Theorem <ref>. By Glimm's Theorem and Remark <ref>, we only have to show that N̂/R≅𝔫^*/Ad^*(G) is almost Hausdorff. Using Glimm's theorem again, this follows if the Ad^*(G)-orbits in 𝔫^* are locally closed. Since the co-adjoint action Ad^* of G on 𝔫^* is k-rational when l<p, this follows from Theorem <ref>.

In order to give some examples for which the above theorem applies, we use the following proposition, which is a slight extension of <cit.>.

Let k be a non-archimedean local field and let G=𝐆(k) be the k-rational points of a reductive linear algebraic group 𝐆. If 1→C→G̃→G→1 is a (not necessarily algebraic) topological group extension with a compact second countable normal subgroup C, then G̃ satisfies BC for the compact operators 𝒦 on all separable Hilbert spaces and with respect to any actions.

It is well-known that G acts continuously and properly isometrically on its affine Bruhat-Tits building ℬ(G) (see e.g. <cit.>). Since C is compact, it follows that the action of G on ℬ(G) inflates to a proper action of G̃. It is shown in <cit.> that G has the property (HC) in the sense of <cit.>. It follows then from <cit.> that G̃ also has the property (HC). In order to show that G̃ satisfies BC for 𝒦 with respect to any actions, it suffices by <cit.> to show that every central extension G̅ of G̃ by 𝕋 satisfies BC for ℂ. By the same arguments as before, we conclude that G̅ also acts continuously and properly isometrically on ℬ(G) and that G̅ has the property (HC). Hence, G̅ satisfies BC for ℂ by <cit.>.

Proposition <ref> implies that the Baum-Connes conjecture holds for all Levi-decomposable linear algebraic groups whose unipotent radical is trivial, i.e., has nilpotence length zero. In the following corollary we deal with vector groups, which have nilpotence length one.

Let k be a non-archimedean local field of positive characteristic p and let n∈ℕ. Then the following groups satisfy BC for ℂ: (1) k^n⋊GL_n(k), where GL_n(k) acts on k^n by matrix multiplication. (2) k⋊GL_n(k), where GL_n(k) acts on k by g·v=det(g)^p·v for g∈GL_n(k) and v∈k.

Since GL_n(k) acts on k^d by k-linear endomorphisms for d∈{1,n}, it follows that k̂^d is GL_n(k)-equivariantly topologically isomorphic to k^d with the action of GL_n(k) given by (g,v)↦(g^-1)^t·v for g∈GL_n(k) and v∈k^d. By Theorem <ref> it suffices to show that the stabiliser GL_n(k)_v satisfies BC for ℂ for all v∈k^d. (1): Since GL_n(k) acts transitively on k^n∖{0}, there are only two orbits: {0} and k^n∖{0}. The following are then all stabilisers up to conjugacy: GL_n(k)_{0}=GL_n(k) and GL_n(k)_{e_1}=k^n-1⋊GL_n-1(k), where e_1=(1,0,…,0)^t∈k^n, as the block-matrix sketch below makes explicit.
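The following display is our own sketch of this identification, using the dual action (g,v)↦(g^-1)^t·v fixed above:

% Sketch: g fixes e_1 under v \mapsto (g^{-1})^t v \iff g^t e_1 = e_1,
% i.e. the first row of g is (1,0,\dots,0):
\mathrm{GL}_n(k)_{e_1}
  \;=\; \Big\{ \begin{pmatrix} 1 & 0 \\ c & h \end{pmatrix}
        \,:\, c\in k^{n-1},\; h\in \mathrm{GL}_{n-1}(k) \Big\}
  \;\cong\; k^{n-1} \rtimes \mathrm{GL}_{n-1}(k),
% since (c,h)(c',h') = (c + hc',\, hh'), so k^{n-1} is the normal vector
% subgroup and h \in \mathrm{GL}_{n-1}(k) acts on it by matrix multiplication.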
Since GL_n(k) is reductive, it satisfies BC for ℂ by Proposition <ref>. Moreover, k^n-1⋊GL_n-1(k) satisfies BC for ℂ by an induction argument, as long as the conjecture holds for k⋊GL_1(k). Since k⋊GL_1(k) is amenable, the claim follows from <cit.>, which completes the proof of (1).

(2): We have to show that GL_n(k)_v satisfies BC for ℂ for every v∈k. It is clear that GL_n(k)_{0}=GL_n(k). Let v∈k∖{0}. Since k has characteristic p, it follows that

GL_n(k)_v = {g∈GL_n(k): det(g)^p-1=0} = {g∈GL_n(k): (det(g)-1)^p=0} = {g∈GL_n(k): det(g)-1=0} = SL_n(k).

Since both GL_n(k) and SL_n(k) are reductive, we are done by Proposition <ref>.

By a proof almost identical to that of Corollary <ref> (1) we see that k^n⋊SL_n(k) satisfies BC for ℂ for a local (function) field k. This is already known from <cit.>.

We end this paper with some more advanced examples. One of these is the Jacobi group, which is the semidirect product of the symplectic group and the Heisenberg group. Its unipotent radical is the Heisenberg group, which has nilpotence length two. Let k be a non-archimedean local field. We will assume that k is not of characteristic 2. Recall that the symplectic group Sp_2n(k) is the closed subgroup of GL_2n(k) consisting of all matrices g with g^t J g=J, where

J = [ 0 I_n; -I_n 0 ]

and I_n is the n×n identity matrix. Observe that Sp_2(k)=SL_2(k). The symplectic group Sp_2n(k) has a unique perfect two-fold central extension Mp_2n(k), which is called the metaplectic group (see <cit.>):

1→ℤ_2→Mp_2n(k)→Sp_2n(k)→1.

However, Mp_2n(k) is not a linear group. Since exactness is stable under extensions (see <cit.>), we see that Mp_2n(k) is an exact group. Moreover, Mp_2n(k) satisfies BC for the compact operators 𝒦 with respect to any actions by Proposition <ref>. Let Sp_2n(k) act on k^2n by matrix multiplication and let Mp_2n(k) act on k^2n through the action of Sp_2n(k). Now consider the symplectic form ω on k^2n given by ω(x,y)=x^t J y for x,y∈k^2n. The (2n+1)-dimensional Heisenberg group over k is the group H_2n+1(k) with underlying set k^2n×k and group multiplication given by (x,λ)(y,μ)=(x+y, λ+μ+ω(x,y)/2) for x,y∈k^2n and λ,μ∈k. The symplectic group Sp_2n(k) acts on H_2n+1(k) by automorphisms: g·(x,λ)=(gx,λ) for g∈Sp_2n(k), x∈k^2n and λ∈k. Let Mp_2n(k) act on H_2n+1(k) through the action of Sp_2n(k).

Let k be a non-archimedean local field of positive characteristic p>2 and let n∈ℕ. Then the following groups satisfy BC for ℂ: (1) H_2n+1(k)⋊Sp_2n(k), (2) k^2n⋊Sp_2n(k), (3) H_2n+1(k)⋊Mp_2n(k), (4) k^2n⋊Mp_2n(k).

(1): It follows from Theorem <ref> that it suffices to show that Sp_2n(k)_π satisfies BC for 𝒦(H_π) for all [π]∈Ĥ_2n+1(k). It is well-known from the Stone-von Neumann theorem that Ĥ_2n+1(k)={[χ_v,w]}_v,w∈k^n ∪ {[π_λ]}_λ∈k∖{0}, where χ_v,w is a unitary character and π_λ is an infinite-dimensional irreducible unitary representation of H_2n+1(k) on L^2(k^n). If g∈Sp_2n(k), then the action is given by g·χ_v,w=χ_(g^-1)^t·(v,w)^t for all v,w∈k^n and g·π_λ≅π_λ for all λ∈k∖{0}. We refer to <cit.> and <cit.> for more details about the Stone-von Neumann theorem. Since Sp_2n(k)_π_λ=Sp_2n(k) is reductive, it satisfies BC for 𝒦(H_π_λ) by Proposition <ref>. Since {[χ_v,w]}_v,w∈k^n≅k̂^2n is Sp_2n(k)-equivariantly topologically isomorphic to k^2n, it is equivalent to show that Sp_2n(k)_(v,w)^t satisfies BC for ℂ for all v,w∈k^n. Since Sp_2n(k) is stable under transposition, it acts transitively on k^2n∖{0}. Hence, the following are all stabilisers up to conjugacy: Sp_2n(k)_{0}=Sp_2n(k) and Sp_2n(k)_{e_1}=H_2(n-1)+1(k)⋊Sp_2(n-1)(k) for n≥2, where e_1=(1,0,…,0)^t∈k^2n. Since Sp_2n(k) is reductive, it satisfies BC for ℂ by Proposition <ref>.
Moreover, H_2(n-1)+1(k)⋊Sp_2(n-1)(k) satisfies BC for ℂ by an induction argument, as long as the conjecture holds for H_3(k)⋊Sp_2(k). Since Sp_2(k)=SL_2(k) has the Haagerup property, it follows from <cit.> and <cit.> that H_3(k)⋊Sp_2(k) satisfies BC for ℂ. This completes the proof of (1).

(2): Since Sp_2(k)=SL_2(k) has the Haagerup property, it follows from <cit.> and <cit.> that k^2⋊Sp_2(k) satisfies BC for ℂ. We may therefore assume that n≥2. It follows from Theorem <ref> that it suffices to show that Sp_2n(k)_v satisfies BC for ℂ for all v∈k^2n, with the action of Sp_2n(k) given by (g,v)↦(g^-1)^t·v for g∈Sp_2n(k) and v∈k^2n. Since Sp_2n(k) is stable under transposition, it acts transitively on k^2n∖{0}. Hence, the following are all stabilisers up to conjugacy: Sp_2n(k)_{0}=Sp_2n(k) and Sp_2n(k)_{e_1}=H_2(n-1)+1(k)⋊Sp_2(n-1)(k) for n≥2. All of them satisfy BC for ℂ as we have already shown in (1). This completes the proof of (2).

Since Mp_2n(k) is not a linear group, we have to apply Corollary <ref> for (3) and (4). The metaplectic group Mp_2n(k) is a locally compact second countable exact group. Moreover, it acts continuously on H_2n+1(k) and k^2n such that all Mp_2n(k)-orbits in Ĥ_2n+1(k) and k̂^2n are locally closed, as it acts through the action of Sp_2n(k). The proofs for (3) and (4) are almost identical to the proofs for (1) and (2) as long as we notice the following facts: Mp_2n(k)_π_λ=Mp_2n(k) satisfies BC for the compact operators 𝒦 with respect to any actions, and Mp_2n(k)_{e_1}=H_2(n-1)+1(k)⋊Mp_2(n-1)(k) for n≥2. Finally, we notice that Mp_2(k) has the Haagerup property by <cit.>. We leave the details to the reader.

We hope that the methods described in this paper will eventually help to find a proof of the Baum-Connes conjecture with trivial coefficient for all (groups of k-rational points of) linear algebraic groups over a local field k of positive characteristic. The case of local fields of characteristic zero has been solved in <cit.> using similar methods.
http://arxiv.org/abs/1704.08548v1
{ "authors": [ "Siegfried Echterhoff", "Kang Li", "Ryszard Nest" ], "categories": [ "math.KT" ], "primary_category": "math.KT", "published": "20170427131633", "title": "The orbit method for the Baum-Connes Conjecture for algebraic groups over local function fields" }
Research

Daniel K. L. Oi^1,2 (corresponding author, [email protected]), Alex Ling^3,4 ([email protected]), Giuseppe Vallone^5 ([email protected]), Paolo Villoresi^5 ([email protected]), Steve Greenland^6 ([email protected]), Emma Kerr^6 ([email protected]), Malcolm Macdonald^7 ([email protected]), Harald Weinfurter^8 ([email protected]), Hans Kuiper^9 ([email protected]), Edoardo Charbon^10,11 ([email protected]), Rupert Ursin^12 ([email protected])

Affiliations:
1. SUPA Department of Physics, University of Strathclyde, John Anderson Building, 107 Rottenrow East, G4 0NG Glasgow, UK
2. Strathclyde Space Institute, James Weir Building, University of Strathclyde, 75 Montrose Street, G1 1XJ Glasgow, United Kingdom
3. Centre for Quantum Technologies, National University of Singapore, Block S15, Science Drive 2, 117543 Singapore
4. Dept. of Physics, National University of Singapore, 2 Science Drive 3, 117542, Singapore
5. Dipartimento di Ingegneria dell'Informazione, Università degli Studi di Padova, Via Giovanni Gradenigo, 6, 35131 Padova, Italy
6. Advanced Space Concepts Laboratory, Mechanical and Aerospace Engineering, University of Strathclyde, James Weir Building, 75 Montrose Street, G1 1XJ Glasgow, United Kingdom
7. Scottish Centre of Excellence in Satellite Applications, Technology and Innovation Centre, 99 George Street, G1 1RD Glasgow, United Kingdom
8. Department für Physik, Ludwig-Maximilians-Universität, Schellingstr. 4/III, D-80799 Munich, Germany
9. Space Systems Engineering, Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 Delft, The Netherlands
10. AQUA, EPFL, Rue de la Maladière 71b, Case postale 526, CH-2002 Neuchâtel 2, Switzerland
11. Delft University of Technology, Mekelweg 4, 2628 Delft, The Netherlands
12. Institute for Quantum Optics and Quantum Information - Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria

Abstract: Quantum communication is a prime space technology application and offers near-term possibilities for long-distance quantum key distribution (QKD) and experimental tests of quantum entanglement. However, there exist considerable development risks and subsequent costs and time required to raise the technological readiness level of terrestrial quantum technologies and adapt them for space operations. The small-space revolution is a promising route by which synergistic advances in the miniaturization of both satellite systems and quantum technologies can be combined to leap-frog conventional space systems development. Here, we outline a recent proposal to perform orbit-to-ground transmission of entanglement and QKD using a CubeSat platform deployed from the International Space Station (ISS). This ambitious mission exploits advances in nanosatellite attitude determination and control systems (ADCS), miniaturised target acquisition and tracking sensors, compact and robust sources of single and entangled photons, and high-speed classical communications systems, all to be incorporated within a 10kg, 6-litre mass-volume envelope. The CubeSat Quantum Communications Mission (CQuCoM) would be a pathfinder for advanced nanosatellite payloads and operations, and would establish the basis for a constellation of low-Earth-orbit trusted nodes for QKD service provision.
Keywords: CubeSat, quantum entanglement, cryptography

§ INTRODUCTION

Quantum technologies are advancing at a rapid rate, with quantum key distribution (QKD) for secure communication being the most mature. Current fibre-based systems are best suited for short-range (a few 100km) applications due to fibre attenuation restricting the maximum practical distance [The development of quantum memories for quantum repeaters is a long-term solution to this short range but is far from maturity <cit.>.]. Free-space optical transmission is another option, but limited sight lines and horizontal atmospheric density again restrict its range. Satellite-based QKD systems have been proposed for establishing inter-continental QKD links <cit.>. The feasibility of different aspects of the concept has been demonstrated by Earth-based experiments such as the transmission of quantum entanglement over 144km <cit.>, performing QKD from an aircraft to ground <cit.>, ground to air <cit.>, and receiving single photons from retroreflectors in orbit <cit.> and on other moving platforms <cit.>. Various groups around the world are working towards space-based demonstrations of quantum communication <cit.>, but most have not been successfully launched. Only recently, the 600kg Quantum Experiments at Space Scale (QUESS) satellite was launched on 17 August 2016, at 17:40 UTC, by the China National Space Agency <cit.>.

A barrier to experimental progress in this area has been the challenge of translating terrestrial quantum technology to the space environment, particularly in the context of the traditional "big-space" paradigm of satellite development and operations. This is characterized by large, long-term, high-performance spacecraft with redundant systems, following conservative design practice driven in part by the high cost of launch and satellite operations [For example, Gravity Probe B cost USD750M and took over 50 years of development <cit.>, whilst the Hubble Space Telescope cost USD4.7B to launch <cit.> and took 20 years of development, though these represent extreme examples of large space missions.]. A new paradigm has arisen, "Micro-Space", as embodied in the CubeSat standard <cit.>, that upturns the satellite development process. This approach exploits contemporary developments in the miniaturization of electronics and other satellite systems to allow the construction and operation of highly capable spacecraft massing in the kilogram range, so-called nanosats [The zoology of satellite classes includes mediumsats (500-1000kg), minisats (100-500kg), microsats (10-100kg), nanosats (1-10kg), picosats (0.1-1kg), and femtosats (<0.1kg), as well as large sats (>1000kg).]. In contrast, a geostationary communication satellite is typically 1000 times greater in mass. As the cost of development, launch, and operations scales with mass, nanosatellites offer access to space at a vastly reduced cost that is affordable by small companies and research groups <cit.>. The CubeSat standard was itself originally designed with undergraduate engineering educational projects in mind. Since the establishment of the CubeSat standard in 2000, it has become a very popular class of satellite, ranging from hobbyists <cit.> and some countries' first spacecraft <cit.>, to basic space science <cit.> and commercial services such as Earth imaging <cit.> and asset tracking <cit.>. The standardized nature of the CubeSat platform has attracted commercial support for components and subsystems.
It is possible to order online all the parts needed to assemble a fully functional CubeSat, including structures, power systems, communications, ADCS, and on-board control, as well as basic payloads such as imagers. CubeSats are being launched in great numbers: over 120 were launched in 2015 and 118 in 2014, with commercial, scientific, and governmental use now making up the majority, showing the transition from a purely educational tool to a valid applications platform <cit.> (Fig. <ref>).

The role of CubeSats [For the purposes of this article, we will use the terms CubeSat and nanosat interchangeably.] for space quantum technologies is two-fold <cit.>: firstly, in the short term, for pathfinder, technology demonstration, and derisking missions; secondly, in the long term, for service provision for certain applications. CubeSats are not a panacea, but their advantages of lower cost, shorter development times, and rapid and multiple deployment opportunities may be valuable for making more rapid progress in space quantum technologies. The CubeSat Quantum Communications Mission (CQuCoM) has been proposed to achieve, at low cost and with a short development time, the key milestone of transmission of quantum signals from an orbiting source to a ground receiver. The goals are to perform quantum key distribution and to establish entanglement between space and ground. The mission would also represent a leap in capability for nanosatellites, especially for pointing and for carrying fundamental physics experiments. It is an extremely challenging project that brings together a number of critical systems engineering fields, in particular extremely high pointing accuracy and the attendant ADCS requirements and interactions. We present the mission concept and the key challenges, and outline the systems to be developed to overcome them.

§ MISSION CONOPS

The concept of operations (CONOPS) is presented in Fig. <ref>. The basic task is to send quantum signals at the single-photon level from an orbiting platform to a ground receiver. This paradigm was selected as it typically results in an approximately 10 dB improvement in link loss compared to the ground-to-space scenario. Two quantum sources are envisaged: a weak coherent pulse (WCP) source for performing a BB84-type QKD protocol, and an entangled photon pair source that would send one half of each entangled photon pair to the ground receiver and retain the other half for analysis. The low-Earth orbit (LEO) reduces the losses due to range and simplifies space deployment, but introduces other challenges such as residual atmospheric disturbance. The major hurdle to overcome is the extremely high pointing accuracy required to minimize the link loss associated with free-space transmission over several hundred to a thousand kilometres.

The preliminary mission design calls for the launch of a 6U CubeSat [A 1-unit (1U) CubeSat is nominally a 10cm cube of mass 1kg. Several units can be combined to create CubeSats of greater mass, volume, and capability. Extensions to the standard allow for higher densities, up to 2kg per U <cit.>.] to the International Space Station (ISS). An advantage of CubeSats (shared by other smallsats) is that the satellite is delivered to the launch provider in a standardized container (deployer) format, such as PPOD or IPOD, which greatly simplifies the integration of the smallsat with the launch vehicle <cit.>. Regular resupply launches to the ISS give greater mission flexibility for satellite development and operation. Commercial launch brokers provide streamlined access to space: a 6U CubeSat can be launched within 6 months of contract signing and for USD545K <cit.>. Baselining the ISS as a deployment platform removes uncertainty about orbital parameters and eases mission planning.
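To make the assumed ISS-like deployment orbit concrete, the following minimal sketch computes the circular orbital period and speed at 400km altitude, assuming simple two-body dynamics and a spherical Earth (both idealizations for illustration only):

import math

MU_EARTH = 3.986e14   # Earth gravitational parameter, m^3 s^-2
R_EARTH = 6371e3      # mean Earth radius, m (spherical-Earth assumption)

def circular_orbit(alt_m):
    """Return (period_s, speed_m_s) for a circular two-body orbit."""
    a = R_EARTH + alt_m                                # semi-major axis
    period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law
    speed = math.sqrt(MU_EARTH / a)                    # circular orbital speed
    return period, speed

T, v = circular_orbit(400e3)
print(f"period {T/60:.1f} min, speed {v/1000:.2f} km/s")  # ~92 min, ~7.67 km/s

The resulting ~7.67 km/s orbital speed is the figure used later when discussing velocity aberration and Doppler shift.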
§ CUBESAT PLATFORM

The 6U platform was selected as it is the largest commonly handled CubeSat size whose cost/capability trade-off is favourable for many high-performance nanosatellite missions <cit.>. Several design studies have used 6U CubeSats for Earth observation, as the format can accommodate a reasonably large optical assembly together with ancillary payloads <cit.>. Flown 6U missions include Perseus-M 1 & 2 (19th June 2014, DNEPR), VELOX-II (6th December 2015, PSLV) and ^3CAT-2 (15th August 2016), demonstrating system qualification compliance. There are approximately 65 6U missions under development. The use of CubeSats is not restricted to Earth orbit. A pair of 6U satellites, Mars Cube One, are to be used as interplanetary relay stations for the Mars lander InSight, originally due for launch in 2016 (now scheduled for 2018 due to problems unrelated to the CubeSats) <cit.>, demonstrating the capability that can be packed into this format.

An advantage of the CubeSat approach is the availability of commercial off-the-shelf (COTS) components that reduce costs and development time. The CQuCoM CubeSat will be based upon the PICosatellite for Atmospheric and Space Science Observations (PICASSO) platform developed by Clyde Space Ltd <cit.>. Though PICASSO is a 3U CubeSat, its systems can be used in a 6U structure with little modification. The platform provides an electrical power system (EPS), communications (COMMS), attitude determination and control systems (ADCS), and an on-board computer (OBC). Integration of the payload with the platform would be performed using the NANOBED facility at the University of Strathclyde. We outline the key specifications of the CQuCoM platform below.

§.§ Structure

These systems would be placed into a 6U (nominal 12cm×24cm×36cm) structure <cit.>. The CubeSat volumetric breakdown consists of 2U allocated to the platform systems mentioned above, 1U to the quantum source, and 3U for the transmission optics (Fig. <ref>). Suitable 6U structures are available from a variety of vendors such as Innovative Solutions in Space <cit.> and Pumpkin <cit.>.

§.§ EPS

The satellite is powered by body-mounted solar panels that feed the EPS for storage and distribution of power. The low duty cycle of the transmission experiment eliminates the need for a deployable solar array, reducing cost and complexity whilst increasing reliability. The lack of extraneous projected area also reduces the possibility of atmospheric buffeting. Orbit-averaged power is 11W assuming 80% sun-tracking efficiency. As transmission experiments are performed during eclipse, the EPS must be able to support the payload power draw using battery reserves alone. A 30WHr lithium-ion battery pack has been sized to support mission operations with sufficient depth-of-discharge margin to prevent cell degradation from repeated experimental runs.

§.§ COMMS

Several radio systems are employed for (classical) communications. A UHF dipole array is used for tracking, telemetry, and control (TT&C) and provides redundancy for low-speed data transmission (100kb/s). An S-band patch antenna is used for high-speed uplink (nominally 1Mb/s). For high-speed downlink of mission data, X-band CubeSat transmitters are commercially available and provide up to 100Mb/s data rate <cit.>. A GPS patch antenna is also incorporated into a face of the CubeSat. Space-rated GPS systems enable tracking of position and velocity to metre and sub-m.s^-1 accuracy respectively <cit.>. Onboard GPS enables precise orbital determination and calibration of two-line element measurements, necessary for the OGS to initially acquire the satellite and also for the ADCS to point the transmitter telescope towards the OGS so that the optical beacon tracker (OBT) can lock onto the beacon sent up by the OGS. A rough data-budget sketch for the X-band downlink is given below.
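The following minimal sketch illustrates the classical downlink capacity per pass, useful for sizing QKD sifting data and telemetry; the 20% protocol/margin overhead is our assumption, while the 100Mb/s rate and 6-minute average pass come from the figures quoted in this proposal:

X_BAND_RATE = 100e6   # b/s, nominal X-band downlink rate

def pass_data_volume(duration_s, rate_bps=X_BAND_RATE, overhead=0.2):
    """Raw data volume (bytes) downlinked in one ground-station pass,
    derated by an assumed protocol/margin overhead fraction."""
    return duration_s * rate_bps * (1.0 - overhead) / 8.0

gb = pass_data_volume(6 * 60) / 1e9
print(f"~{gb:.1f} GB per average 6-minute pass")  # ~3.6 GB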
§.§ OBC

The on-board computer is responsible for routine operations of the spacecraft. Low-power, space-qualified processors and memory are available for CubeSats from a variety of vendors, typically based upon ARM devices and flash storage. The OBC will support different mission modes including initial switch-on and detumbling, charging, RAM attitude keeping, experiment, and data download modes. Failsafe modes, including in-orbit reset, will be included. A facility to update operational software is desirable, as this allows experiments to be performed that were not envisaged prior to launch.

§.§ ADCS

The ADCS is used to provide coarse pointing by rotating the CubeSat body to align the transmitting telescope with the optical ground station during quantum transmission. The required level of ADCS accuracy has previously been challenging to achieve in nanosatellites due to a lack of high-performance star trackers suitable for CubeSat applications. Only recently have such systems become commercially available, for example the Blue Canyon Technologies XACT ADCS <cit.>, with similar systems available from Maryland Aerospace <cit.> and Berlin Space Technologies <cit.>. In particular, the BCT XACT system has demonstrated in-orbit pointing performance of 8 arcseconds (1-σ) on the MinXSS 3U CubeSat, independently verified by scientific instruments onboard. This level of pointing accuracy indicates that CubeSats can now seriously be considered for missions requiring precision pointing.

The PICASSO ADCS system upon which the CQuCoM satellite is based provides <1^∘ pointing accuracy. A full systems engineering analysis will determine whether this baseline level of pointing is sufficient for the CQuCoM mission; the BCT XACT platform is a viable alternative should higher-accuracy coarse pointing be required. The ADCS utilizes a combination of sensors such as a 3-axis magnetometer to detect the strength and orientation of the Earth's magnetic field, and angular rate sensors to measure the rotational velocity of the satellite. To establish absolute attitude, coarse and fine Sun sensors are used when sunlit, but during eclipse, when experimental transmission occurs, these sensors are ineffective. Instead, a high-precision star tracker is used to provide accurate 3-axis pointing knowledge. Attitude control is through a combination of magnetic torque actuators (magnetorquers or MTQs) interacting with the Earth's magnetic field, and reaction wheels. The MTQs are used for detumbling and for desaturating the reaction wheels.

§ QUANTUM SOURCES AND DETECTORS

The CQuCoM proposal involves two missions with different quantum sources. The first mission will validate the transmission system.
Numerical studies of the optical channel between space and ground predict a link loss of 30 to 40 dB for a spacecraft with a 10cm aperture at 500km altitude and a 1m aperture at the optical ground station <cit.>. As CQuCoM will be at a lower altitude, it is imperative to establish first that the fine-pointing mechanism can overcome any residual atmospheric buffeting and the greater traversal speed. The second mission would incorporate lessons learned from the first in performing the more challenging task of entanglement distribution.

Currently, the CQuCoM proposal calls for two sequential missions. It is possible, however, to consider combining both missions into a single spacecraft. This would require the spacecraft to supply more resources. For one, an increased volume for accommodating both types of light sources must be available. It also makes the optical interfaces more challenging.

§.§ Weak Coherent Pulse Source

To conduct the space-to-ground test, the first mission will use a modulated laser transmitter whose intensity can be tuned to act either as a strong optical beacon, or as a weak coherent pulse (WCP) source, where the average number of photons per pulse is much less than one. When acting as the strong optical beacon, this light source can be used to characterize the space-to-ground optical channel and to commission the fine-pointing mechanism <cit.>. When this is completed, the light source can be adjusted to become a polarisation-encoded WCP source that can carry out quantum key distribution using conventional prepare-and-send methods, including decoy-state protocols to prevent photon number splitting attacks.

WCP sources are well developed and have been miniaturized to fit within hand-held devices (35×20×8mm^3 <cit.>), and so represent a low-risk quantum signal source for the first mission. A true random number generator (RNG) would be required to guarantee security, but the 1U set aside for the source should give ample payload margin [High-speed quantum RNGs have been demonstrated with suitable SWaP characteristics. For example, in <cit.> random number generation at 480Mb/s was shown in a 0.1U package consuming a few watts, easily scalable to 846Mb/s or even higher. A chip-scale QRNG component operating at 1Gb/s has been reported <cit.>. For testing purposes, a pseudo-RNG could be used, or else random settings could be pre-computed.]. A baseline transmission rate of 100MHz with 0.5 photons/pulse should allow the generation of secure keys during a ground pass, with the option of increasing the rate to overcome additional link losses <cit.>. A rough count-rate sketch for this baseline is given below.
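To indicate the scale of the raw count rates implied by the baseline source and the predicted 30-40 dB channel, the following minimal sketch may help; the overall receiver/detector efficiency of 0.5 is our assumption, not a figure from the proposal:

SOURCE_RATE = 100e6   # pulses/s, baseline WCP clock rate
MEAN_PHOTON = 0.5     # mean photons per pulse

def ogs_count_rate(link_loss_db, det_eff=0.5):
    """Expected single-photon detection rate (counts/s) at the OGS,
    given the channel loss in dB and an assumed receiver efficiency."""
    transmission = 10.0 ** (-link_loss_db / 10.0)
    return SOURCE_RATE * MEAN_PHOTON * transmission * det_eff

for loss_db in (30, 40):
    print(f"{loss_db} dB -> {ogs_count_rate(loss_db):8.0f} counts/s")
    # 30 dB -> ~25,000 counts/s; 40 dB -> ~2,500 counts/s

These raw rates are then further reduced by sifting and error correction in the full QKD post-processing chain.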
§.§ Entangled Source SPEQS

The second CQuCoM mission will attempt entanglement-based QKD. The use of quantum entangled photon pairs has certain technical advantages over the more conventional prepare-and-send schemes. For example, a true random number generator is not required for the source, as the measurement of entangled photons generates intrinsic randomness. Another interesting advantage is that the photon pairs, generated in a nonlinear optical process, are created within femtoseconds of each other, and it is possible to carry out time-stamping and correlation matching without the use of atomic clocks or GPS-type signals <cit.>. Thus, entanglement-based systems in space have other interesting technology applications besides QKD.

The polarization-entangled source for CQuCoM is based on the Small Photon-Entangling Quantum System (SPEQS) currently designed and built at the National University of Singapore. The SPEQS devices, for the generation and detection of entangled photon pairs, are designed to be rugged and compact, as they have to be contained within the size, weight and power (SWaP) constraints of nanosatellites <cit.>. A notable feature of SPEQS devices is that they appear to be incredibly rugged, with one copy surviving the explosion of a space launch vehicle intact and in good working order <cit.>. The first-generation SPEQS devices have been space qualified, first through demonstration in near-space <cit.>, then formal testing after integration into nanosatellites, and finally through successful operation in orbit on the Galassia 3U CubeSat <cit.>.

The polarization-entangled photon pairs are generated via spontaneous parametric downconversion (SPDC). The source geometry is based on collinear, Type-I, non-degenerate SPDC using bulk β-Barium Borate (BBO) crystals for downconversion. The advantages of using BBO are that it is uniaxial and its optical properties (birefringence) are very temperature tolerant. The single photons are currently detected by silicon Geiger-mode avalanche photodiodes (Si-APDs). Careful characterization studies show that the Type-I geometry enables a very robust set of pump and collection conditions that simultaneously achieve a high pair rate (brightness) and a high pair-to-singles ratio. The length of the crystals is an important consideration. With fixed pump and collection beam parameters, the dependence of brightness on crystal length falls into two different regimes (see Fig. <ref>). A trade-off between the target brightness and the size of the source needs to be made <cit.>.

The entangled photon source being proposed for CQuCoM, called SPEQS-2, is currently being built at the NUS and is expected to consume about 10W of continuous power and to have a mass of about 500g <cit.>. A separate qualification mission is being planned; the satellite mission and the SPEQS-2 detailed design specifications are described in an accompanying article <cit.>.

§.§ Single Photon Detectors

Due to the large downlink transmission losses, achieving a high enough entangled-pair coincidence rate between the OGS and the CubeSat requires a high pair-production rate onboard CQuCoM; consequently, high-speed single-photon counters are needed. Si-APDs are baselined for the second mission, but we would also investigate the use of more advanced solutions to allow for faster pair generation that could not be easily handled by conventional Si-APDs due to timing resolution, jitter, dead time, or power limitations.

Geiger-mode APDs or single-photon avalanche diodes (SPADs) can also be implemented in complementary metal-oxide semiconductor (CMOS) technologies, where the detectors are replicated in very large numbers on a single CMOS chip <cit.> and even in stacked CMOS chips <cit.>. The advantage is the ability to detect single photons with very high time resolution in multiple locations, so as to minimize the dead time of the measurement. Another advantage of parallel detection is the capability of implementing multiple channels and thus increasing the throughput of free-space quantum communication channels using space-division multiple access (SDMA) mechanisms. Thanks to Moore's Law, it becomes possible to create complex digital signal processing on chip, side-by-side with, or under, the detectors, thus minimizing noise and jitter. Proximity of detection and processing maximizes compactness, while reducing power dissipation due to the lack of expensive and power-hungry drivers.
This feature may be of significant value whenever power and space are in high demand, such as in satellites <cit.>. CMOS SPADs have also shown resilience to gamma radiation and proton bombardment at several energies and doses, thus proving their suitability for space applications <cit.>.

We have developed large linear arrays of SPADs, with diameters of several microns, that exhibit a single-photon timing resolution better than 100ps and a dead time, individually, of several tens of nanoseconds <cit.>. The arrays are coupled with digital hardware including time-to-digital converters (TDCs) capable of resolutions better than 25ps and recharge periods shorter than 7.5ns. With these devices, it is possible to achieve overall dead times of several tens of picoseconds, while dissipating less than 100mW. Thanks to the parallelism of SPADs and TDCs, large throughputs of up to 34Gb/s can thus be achieved, while generally only several Mb/s are exploited in single-photon communication.

§ GROUND SEGMENT AND OPTICAL GROUND STATION

Command and control of CQuCoM will be performed by a network of RF ground stations located in Glasgow (University of Strathclyde, mission control), Singapore (National University of Singapore), and Delft (TU Delft). The diversity of ground stations allows more frequent contact and greater opportunities for downlink of data. Mission control will also link with the OGS to co-ordinate experimental passes.

The CQuCoM satellite will transmit quantum signals (WCP or single photons) to an optical ground station located at the Matera Laser Ranging Observatory (MLRO), Italy. This facility has already conducted proof-of-principle quantum communication experiments utilizing laser signals bounced off retroreflectors mounted on existing satellites <cit.>. Essentially the same experimental setup will be used for the CQuCoM mission, with the addition of a 532nm optical beacon [MLRO has a two-colour laser rangefinding system at 532nm and 355nm. This suggests utilising the existing 532nm laser systems for the beacon and the 355nm laser for rangefinding to avoid interference.]. A radio link at the OGS will be used to communicate with and monitor the satellite during the experiments. The ADCS telemetry will be used to align the measurement bases by polarization control at the OGS.

§.§ Pass analysis

The baseline deployment from the ISS allows a preliminary determination of the orbital pass parameters for the selected OGS location at the MLRO. This is summarized in Fig. <ref>: in a 12-month period, there are approximately 150 opportunities to conduct experimental operations between a satellite in the orbit of the ISS and the OGS, with an average pass duration of 6 minutes. We restrict transmission to night time, when the satellite is also in eclipse, to reduce background light entering the OGS receiver either from scattering of sunlight off the atmosphere or from reflected light off the satellite itself.

As the CubeSat has a lower ballistic coefficient and does not carry any propellant to maintain altitude, its orbit will change and diverge from that of the ISS (which performs periodic orbit-raising burns). At the initial deployment altitude of 400km, a slant range of 1000km corresponds to a minimum 23^∘ elevation, so we restrict ourselves to passes that rise to at least 30^∘ to allow sufficient time to perform initial acquisition and tracking. Passes that rise higher, and consequently last longer, will be used for transmission experiments. The relation between elevation and slant range is sketched below.
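The following minimal sketch computes the station-to-satellite slant range as a function of elevation for a circular 400km orbit; it assumes a spherical Earth of mean radius 6371km, so the exact elevation corresponding to a given slant range will differ slightly from figures derived with a more detailed Earth and orbit model:

import math

R_EARTH = 6371.0   # mean Earth radius, km (spherical-Earth assumption)

def slant_range_km(alt_km, elev_deg):
    """Slant range (km) from the OGS to a satellite at a given elevation
    angle, for a circular orbit over a spherical Earth (law of cosines)."""
    r = R_EARTH + alt_km
    e = math.radians(elev_deg)
    return math.sqrt(r**2 - (R_EARTH * math.cos(e))**2) - R_EARTH * math.sin(e)

for elev in (20, 30, 45, 90):
    print(f"elev {elev:2d} deg: {slant_range_km(400.0, elev):6.0f} km")
    # roughly 985, 740, 550, and 400 km respectively

Since link loss grows with range, the higher-elevation portions of a pass dominate the secure key yield.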
As the orbit of the CubeSat decays, pass opportunities and durations will reduce, though this will be partially compensated by the reduction in range leading to higher count rates at the OGS. We aim to perform experimental operations down to at least 300km altitude, below which atmospheric drag will quickly deorbit the spacecraft. A minimum experimental lifetime of 12 months should be achievable based on the deorbit analysis in Section <ref>.

§.§ OGS Operations

At the beginning of the transmission pass, the OGS would use orbital data, either two-line elements or GPS tracking data from onboard the CubeSat, to initialize the lock-on phase of the experiment. The OGS then sends rangefinding pulses that are returned by retroreflectors mounted on the CubeSat, allowing both accurate distance determination (at the centimetre level) and tracking of the precise direction of the CubeSat. Once the OGS has found the CubeSat in its field of view, it can baffle the region of sky seen by the detector to reduce background stray light. The OGS will also transmit a laser beacon towards the CubeSat to guide its fine-pointing system.

The range information is used for time-of-flight corrections between the transmitted pulses and the measurements on the ground. For the WCP source, its pulsed nature allows windowing of the detection periods to reduce extraneous counts. This is not possible for the continuously pumped entangled photon source, so coincidence matching will be used to precisely align the time-base of the CubeSat with that of the OGS.

The 800nm wavelength of the quantum downlink allows the use of easily available Si-APDs for the OGS detectors. Moderate cooling is sufficient to reduce dark counts to negligible levels.

§ OPTICS AND FINE POINTING

The main challenge of CQuCoM is the transmission of single photons from an orbital platform travelling at nearly 8kms^-1 to the OGS. The CubeSat dimensions restrict the size of the transmission optics, and the low mass constrains the pointing stability of the craft [The MinXSS satellite has demonstrated 40μrad (1-σ) coarse pointing performance after being deployed from the ISS <cit.>.]. The transmission telescope diameter of 90mm leads to a different beam divergence depending on the source <cit.>. A WCP source allows a nearly flat wavefront to be transmitted, leading to a divergence of 4μrad (HWHM), whilst the entangled photon pair source requires a 65mm Gaussian beam waist to balance diffraction against truncation loss, and this leads to a divergence of 7.8μrad; a sketch of the divergence calculation is given below. The ground spot size varies from 3.2m (WCP source, zenith) to 18m (entangled source, 20 degrees above the horizon) for an orbital altitude of 400km, leading to different geometric losses due to the finite collection aperture of the OGS. A fine-pointing specification of 3μrad has been chosen to balance the pointing losses against developmental cost and effort. The gains from a smaller pointing error diminish as the inherent divergence of the beam and other effects dominate.

The required pointing accuracy will be achieved by combining coarse (ADCS) and fine (OBT/BSM) pointing stages. The CubeSat will use 3-axis ADCS via reaction wheels for coarse pointing, to aim its telescope at the OGS to within the acquisition FoV of the OBT, which then locks onto the OGS 532nm beacon laser. After initial lock, ADCS excursions up to the BSM FoV limit of several degrees can be accommodated.
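The entangled-source divergence quoted above follows from standard Gaussian beam optics; the following sketch reproduces the 7.8μrad figure, assuming the 65mm waist refers to the 1/e^2 beam diameter (waist radius 32.5mm):

import math

WL = 800e-9   # m, quantum downlink wavelength

def gaussian_divergence(waist_radius_m, wl=WL):
    """Far-field half-angle divergence of a Gaussian beam: theta = wl/(pi*w0)."""
    return wl / (math.pi * waist_radius_m)

def spot_diameter(range_m, waist_radius_m, wl=WL):
    """Approximate 1/e^2 ground-spot diameter in the far field."""
    return 2.0 * range_m * gaussian_divergence(waist_radius_m, wl)

theta = gaussian_divergence(32.5e-3)
print(f"divergence {theta*1e6:.1f} urad")                  # ~7.8 urad, as quoted
print(f"spot at 400 km: {spot_diameter(400e3, 32.5e-3):.1f} m")  # ~6.3 m at zenith

The quoted 3.2m and 18m ground spots correspond to the tighter WCP divergence at zenith and the entangled-source divergence at the longer low-elevation slant range, respectively.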
§.§ Transmission Optics

The restricted size of a 6U CubeSat structure constrains the maximum optical aperture that can be easily employed. The use of deployable optics is being investigated by several groups <cit.>, including at TU Delft with the Deployable Space Telescope project <cit.> together with TNO, ADS Leiden and ESA. However, a fixed optical system is attractive to minimize development risk. Planet employs 90mm Cassegrain-type reflector telescopes on its Dove 3U CubeSat constellation, of which 133 had been launched as of May 2016 <cit.>, demonstrating considerable flight heritage for this type of CubeSat optical system [The optical system of the Planet “flocks” of “doves” has been refined over several generations: PS0 features a 2 element Maksutov Cassegrain optical system paired with an 11MP CCD detector. Optical elements are mounted relative to the structure of the spacecraft. PS1 features the same optical system as PS0, aligned and mounted in an isolated carbon fibre/titanium telescope. This telescope is matched with an 11MP CCD detector. PS2 features a five element optical system that provides a wider field of view and superior image quality. This optical system is paired with a 29MP CCD detector. <cit.>]. As a baseline, we allocate 2U to the transmission telescope; its basic specifications are Cassegrain-type, 90mm diameter primary mirror, and f=1400mm focal length. An athermal design can be used to minimize distortions due to temperature variations as the CubeSat moves into eclipse prior to any transmission experiment. The optical configuration will depend on the results of a trade-off study between manufacturing complexity/cost, optical performance, and compactness. Optical performance will depend on the ADCS coarse pointing accuracy that can be achieved, as this drives the off-axis performance of the design needed to accommodate large BSM excursions. The combination of the optics, fine pointing and ADCS is an example of system-of-systems engineering, and this research would be an integral part of CQuCoM mission research and design.

§.§ Beacon Tracker and Beam Steering

Incoming 532nm beacon light sent from the OGS is separated from the outgoing beampath using a dichroic mirror, sent through an insertable narrow bandpass filter to reduce stray light, and onward to the beacon tracker consisting of a modified star tracker [A quadrant photodiode has typically been used in other beam steering experiments, or alternatively a 2-D tetra-lateral Position Sensitive Device (PSD). A modified startracker approach was chosen to allow for lock-on capture over a large field-of-view to mitigate against ADCS coarse pointing performance shortfalls. This also gives the possibility of obtaining imagery from the CubeSat for independent testing of pointing performance.]. During a frame, the defocussed image of the beacon is imaged onto a pixel array. The integration time is chosen to be short enough that the image is not smeared. The deliberately defocussed point is spread across several pixels and the Gaussian intensity profile is determined from measurements of neighbouring pixels; a centroiding algorithm is then used to estimate the centre position of the beacon to sub-pixel accuracy (see the sketch below). The accuracy with which this can be performed depends on the image signal to noise ratio (SNR), but better than 1/40-pixel precision is achievable for moderate levels of noise and 1/20-pixel for high levels of noise <cit.>. We will drive the OBT at a high frame rate (∼ 300Hz full array readout, ∼ kHz with region-of-interest readout) in order to reduce the beacon frame interval and the possibility of image smear. To achieve sufficient SNR, the beacon power can be increased.
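A minimal illustration of sub-pixel centroiding on a defocussed spot is given below. It uses a simple intensity-weighted centroid on a synthetic frame; the flight OBT would rather fit the known Gaussian profile, as in the cited centroiding work, and the frame size, spot width and noise level here are all hypothetical.

```python
import numpy as np

# Minimal sub-pixel centroiding sketch: intensity-weighted centroid on a
# synthetic defocussed beacon spot (all parameters are illustrative).

rng = np.random.default_rng(0)

y, x = np.mgrid[0:9, 0:9]                      # 9x9 region of interest
true_cx, true_cy, sigma = 4.3, 3.7, 1.5        # assumed spot parameters
frame = 1000.0 * np.exp(-((x - true_cx) ** 2 + (y - true_cy) ** 2)
                        / (2.0 * sigma ** 2))
frame += rng.normal(0.0, 5.0, frame.shape)     # additive read noise

w = np.clip(frame - np.median(frame), 0.0, None)   # crude background removal
cx = (w * x).sum() / w.sum()
cy = (w * y).sum() / w.sum()
print(f"estimated centre ({cx:.2f}, {cy:.2f}) vs true ({true_cx}, {true_cy})")
```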
An attitude model for the satellite, with input from the OBT and high bandwidth inertial measurement units (IMUs), drives a beam steering mirror (BSM) for fine-pointing. Depending on the pass geometry and position of the satellite, the outgoing 800nm beam needs to be sent in a slightly different direction to the apparent position of the beacon due to velocity aberration [The Doppler shift does not pose a problem for this mission. At the ISS orbital speed of 7.67km s^-1 the maximum wavelength shift is 0.02nm, much smaller than the 0.1nm bandpass of ultra-narrow interference filters used for straylight rejection.]. The magnitude of the point-ahead correction can reach up to 54μrad when passing over zenith. The ADCS is also sent the OBT/BSM offset so that the coarse pointing error can be closed, bringing the telescope boresight towards the beacon direction and reducing the possibility of the BSM exceeding its excursion limits.

§.§ Pointing Errors

Considering the interaction of the various sub-systems in determining pointing performance is a significant systems engineering challenge spanning all parties and disciplines. There are several potential sources of pointing error, leading either to low frequency biases or to high frequency noise in the transmitted beam direction. Low frequency drift will misalign the telescope bore axis from the OGS direction and, if left unchecked, could bring the deviation outside the angular limits of the BSM. As long as the coarse pointing system can keep the optical boresight within these limits, the final pointing performance will be determined mainly by the fine pointing mechanism. This will be mainly impacted by high frequency noise leading to jitter or beam wandering. A high OBT optical detection bandwidth is essential for rejecting this source of noise. Noise with higher frequency components than the OBT frame rate can be tackled by the IMUs and blended rate sensor fusion to compensate for any motion occurring in-between frames of the OBT <cit.>. Quantum communication experiments have achieved a few μ rad accuracy under demanding conditions such as in a propeller driven airplane <cit.> or lofted on a hot air balloon <cit.>. The more benign microgravity environment and lower vibrational background of a space-based experiment should allow at least as good performance; below we consider residual effects that may affect pointing performance.

§.§.§ Solar Pressure, Residual Magnetic Moment, and Gravity Gradient

Even though the CubeSat is nominally in freefall and in a vacuum, it will be subject to external perturbations that can cause the beam to wander <cit.>. The relative magnitude of these forces depends on the orbital altitude. In LEO, the main effects will be due to residual atmospheric density, gravity gradient, and magnetic interactions. We may ignore the effect of solar radiation pressure as transmission experiments will be conducted in eclipse. The interaction of any residual magnetic moment of the satellite with the Earth's field will cause a bias torque. The gravity gradient will produce a tidal force leading to a restoring torque aligning the satellite with its long axis in the nadir direction. Both magnetic dipole and gravity gradient effects can be minimized by careful design of the CubeSat.
These quasi-static influences are easily compensated by the ADCS system and should have minimal effect on the fine-pointing mechanism.

§.§.§ Atmospheric Buffeting

A source of random torque will be the effect of residual atmospheric density in low Earth orbit <cit.>. A CubeSat at this altitude experiences free-molecular flow and is potentially subject to buffeting, especially from cross-track winds at high latitudes <cit.>. The induced torque due to imbalanced forces can be minimized by locating the centre of gravity close to the centre of pressure when in the relevant orientation, to reduce the moment arm. During the quantum transmission phase, the satellite is oriented to present the minimal projected area, i.e. the 2U-3U faces. The lack of deployable solar arrays is advantageous in this respect. Data from the MinXSS mission deployed from the ISS constrains the effect to below 50μ rad for a 3U CubeSat with deployables <cit.>.

§.§.§ Vibration

The momentum wheels are a potential source of vibration that could affect the pointing accuracy of the beam steering system. A key development goal would be to characterize ADCS hardware bias offsets and noise spectra to assist in performance modelling <cit.>, e.g. using coloured noise instead of the white noise normally assumed in most simulations, and incorporating any spin-axis instabilities that the reaction wheels may exhibit. TU Delft have experience with these challenges through their CubeSat projects Delfi-C3 and DelfiN3Xt. The BRITE CubeSat missions for photometry also require highly accurate and stable precision pointing systems and have studied the effect of ADCS vibration <cit.>. Through careful component selection and modelling, the effect of wheel imbalances can be minimized, and in this way TUGSat-1 (BRITE-Austria) has achieved an in-orbit demonstration of 50μ rad using only body pointing and without beam steering <cit.>. To minimise vibration and enhance spacecraft agility, the ADCS can be operated in a zero momentum mode where the speed of the wheels is low <cit.>. The operational procedure would be to dump excess momentum using the magnetorquers (MTQs) prior to the transmission phase, where the wheels are used to provide attitude control. This requires the use of micro-reaction wheels that can support this mode of operation, especially repeated zero-crossings.

§.§.§ Atmospheric Scattering, Absorption, and Distortion

The passage of light through the atmosphere is subject to various effects that will reduce the intensity of the received signal. The main sources of error are scattering and absorption of light from the beam and beam wander due to turbulence. Scattering and absorption can be minimized by choice of wavelength and operating conditions. Light at 800nm is transmitted through clear air with moderate absorption or scattering [For example, 70% of light will be transmitted from space to sea level at 20^∘ from zenith <cit.>]. Cloud or other particulates will degrade the channel, so clear conditions will be necessary for transmission experiments. Wavefront distortion due to the spatio-temporal variation of the refractive index caused by turbulence leads to beam wander [As the beam is small, we are mainly concerned with wavefront tilt rather than higher order perturbations, so more complex adaptive optics is not required.], the same effect that limits astronomical seeing. The shower curtain effect <cit.> means that the beam wander for an orbit-to-ground transmission will be smaller than for a ground-to-space transmission through the same atmospheric turbulence <cit.>.
Since the optical beacon and downlink photons take similar paths, separated by the velocity aberration angle, this will partially cancel out the effect of beam wander as long as the OBT detection and BSM bandwidth exceeds the characteristic rate of the turbulence. The magnitude of this nearly-common-path rejection will depend on the size of the turbulent cells compared with the displacement between the up- and down-going beams, which, at the top of the stratosphere, is a maximum of 3m at zenith and reduces to zero as the satellite approaches the horizon. An additional effect is dispersion of the different wavelengths of the beacon and downlink photons, leading to angular differences as they pass through the atmosphere. This will lead to a quasi-static correction to the computed velocity aberration point-ahead of the downlink from the observed OBT position, but also to variation in the respective deflections due to turbulence that will be more difficult to compensate. The static dispersion of ∼ 4μ rad displacement between the upgoing 532nm and downcoming 800nm beams is greatest at low elevations <cit.>. A correction can be included with the point-ahead compensation [The velocity aberration is maximal when the dispersion displacement is minimal, at zenith, and vice versa near the horizon].

§ MISSIONS

CQuCoM calls for two missions: the first to derisk the pointing mechanism with a high brightness transmission source that can also be used for WCP QKD, and a second mission to distribute entanglement between space and ground. The mission profiles for both are broadly similar. A launch broker such as Nanoracks <cit.> will be contracted to handle orbital deployment [Spaceflight Industries <cit.> can broker deployment on a variety of launchers and in different orbits, allowing for some flexibility in mission planning should the ISS orbit not be suitable.]. The CQuCoM satellites will first be carried up to the ISS on a regular resupply mission (Dragon, Cygnus, HTV, ATV, Progress and Soyuz) and then deployed into orbit using the NanoRacks CubeSat Deployer (NRCSD) mounted upon the Japanese Experimental Module Remote Manipulator System (JEMRMS).

§.§ In-Orbit Operations

After switch-on and detumbling, the satellite will initiate basic housekeeping procedures such as charging the batteries, establishing contact with ground control, and monitoring onboard systems. The performance of the ADCS will be verified, and tests of ground target tracking can be performed in daylight using the OBT imager with the narrow bandpass filter removed. An option for an adjustable defocus for the OBT will be investigated for imaging purposes, as opposed to centroiding. Initial night passes over the OGS will verify both satellite and beacon acquisition and tracking, as well as the operation of the realtime telemetry downlink. The first mission, with a tunable WCP source, will allow sighting-in of the OBT/BSM, in particular to check that the alignment of the incoming and outgoing beam paths has not deviated from that determined by pre-flight ground tests, e.g. by using a spiral search pattern of the BSM. For the second mission, with the entangled source, it may still be possible to pick out the single photon flux from the satellite during a slow spiral pattern, assuming small shifts in the boresight alignment. The results of the first mission will be vital in determining the effect of launch and orbital environmental conditions on the alignment. Once the in-orbit optical system parameters have been calibrated, quantum transmission tests can begin.
These will be conducted in eclipse (local night) when weather conditions are clear and the orbital track passes close enough to the OGS, rising at least 30^∘ above the horizon. As the satellite begins to rise above the horizon, it will use the ADCS to point the telescope towards the expected position of the OGS. In turn, the OGS will track the satellite as it appears. Laser rangefinding pulses will provide precise position information to the OGS, which can then begin transmitting the laser beacon. The satellite uses the beacon to operate the fine-pointing system. Once the OBT is locked onto the beacon, the source can start transmitting quantum signals to the OGS. Telemetry from the satellite to the OGS will carry orientation information from the ADCS system, allowing the alignment of the OGS polarization measurement bases with those being transmitted. The entanglement source has the option of actively adjusting its own polariser analysis settings based on onboard orientation information, leaving the OGS settings fixed. Synchronisation of CubeSat source and OGS receiver events can be performed via GPS timing signals and post-transmission processing using the ranging information determined from the retroreflected laser pulses. Synchronisation can also be performed through modulation of the beacon signal and a separate photodiode. To reduce the amount of information that needs to be stored and transmitted by the CubeSat, the OGS can communicate its detection events so that only the corresponding onboard data (WCP random signal settings or detection events for the entangled source) in the temporal vicinity need be retained. The OGS detection rate (signal plus background and dark counts) will be in the range 10^3 s^-1 to 10^4 s^-1 due to channel losses and, in principle, only the coincident events need be processed or downloaded. If the notification of OGS events is done in realtime to the CubeSat, either through the S-band uplink or the laser beacon, this minimizes the total amount of onboard storage that needs to be provided [E.g. a ring buffer could be used to temporarily store onboard signals or timing data and only coincident events would be copied out to main storage. In practice, a range of data around the OGS events would be copied out to guard against timing inaccuracies and to assist post-transmission analysis and synchronization.]. However, for scientific purposes it would be beneficial to store the entire onboard record during a pass and download it for ground analysis. Due to the high source rate, this will result in several GB of data that needs to be downlinked [For a 400s quantum transmission pass (which is optimistic), a 100MHz WCP source will require ∼ 10^11 bits (4× 10^10 signals and 4 bits/signal if using decoy states). For a 5Mpcs continuously pumped entangled photon source, we require 20 bits of timing information per detection event, leading to 4× 10^10 bits per pass. We thus assume 20GB of onboard signal or timestamp data per pass.]. High speed X-band CubeSat transmitters are now commercially available and in use, allowing large amounts of data to be downloaded from orbit. The company Planet reports 4.2GB downloaded during a typical groundstation pass from 3U CubeSats using COTS communications equipment <cit.>. With 3 groundstations and several passes per station per day, the data generated from a single quantum transmission experiment should be downloadable within a day. The S-band uplink can be used for post-quantum-transmission reconciliation and processing of the coincident event data, e.g. sifting, error correction, and privacy amplification, if required for a QKD demonstration.
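The data volumes estimated in the footnote above follow from a few lines of arithmetic; the sketch below assumes the same 400s pass, 100MHz WCP pulse rate with roughly 4 stored bits per signal (decoy-state settings), and 20 bits of timing per entangled-source detection event.

```python
# Hedged bookkeeping for the footnoted onboard data volumes per pass.

PASS_S = 400

wcp_bits = 100e6 * PASS_S * 4        # ~1.6e11 bits (4e10 signals x 4 bits)
ent_bits = 5e6 * PASS_S * 20         # ~4e10 bits (2e9 events x 20 bits)

for name, bits in (("WCP source", wcp_bits), ("entangled source", ent_bits)):
    print(f"{name}: {bits:.1e} bits ~ {bits / 8 / 1e9:.0f} GB per pass")
```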
§.§ Decommissioning

Space debris is a major issue for any satellite mission; satellites should be designed to de-orbit within 25 years of launch <cit.>, and by design CQuCoM should meet this directive. If an orbital altitude beyond 500km is chosen, either due to launch opportunity or to reduce atmospheric buffeting, then meeting the 25 year de-orbit directive may require additional mechanisms, increasing mass, developmental effort, and cost. A deployment below 500km simplifies the decommissioning task as the satellite will passively de-orbit in a relatively short period. A typical CubeSat deployed at the altitude of the International Space Station will have an orbital lifetime ranging from months to a few years. In order to estimate the potential de-orbit period of a 10kg 6U CubeSat, it is assumed that the CubeSat would be in minimum drag configuration (i.e. minimum projected area) and that it would be launched from the ISS in Q1 of 2018. The method developed by Kerr and Macdonald <cit.> was used to calculate the de-orbit period and the results are presented in Fig. <ref>. It can be seen that if the CubeSat were deployed at the maximum ISS altitude of 450km, even in the case where solar cycles 25 and 26 are of very low intensity, the de-orbit period is approximately 11 years. However, in the case of an extended period of zero solar activity and deployment from 450km, the CubeSat lifetime would exceed the 25 year best practice rule. In periods of low solar activity the ISS can maintain a lower altitude, but in periods of high solar activity a higher altitude is chosen to reduce drag; an upper limit on the orbit exists due to the operating limits of the spacecraft which rendezvous with the ISS. In practice, we would expect that during periods of low or no solar activity the ISS would be at the lower end of its altitude range and the 25 year de-orbit limit can be met.
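For orientation only, a crude decay estimate is sketched below using a static exponential atmosphere. This is not the Kerr and Macdonald general perturbations method used for the figure: the density, drag area and scale height are rough assumptions, and the dominant uncertainty, solar-cycle variation, is ignored entirely.

```python
import numpy as np

# Crude decay estimate for a ~10 kg 6U CubeSat in minimum drag
# configuration, with a static exponential atmosphere (all values assumed).

MU = 3.986e14                          # Earth gravitational parameter [m^3 s^-2]
R_E = 6371e3                           # Earth radius [m]
CD, AREA, MASS = 2.2, 0.02, 10.0       # drag coeff., min. face area [m^2], mass [kg]
RHO0, H0, HSCALE = 5e-12, 400e3, 60e3  # density at 400 km [kg m^-3], scale height [m]

def density(h):
    return RHO0 * np.exp(-(h - H0) / HSCALE)

h, t, dt = 450e3, 0.0, 86400.0         # start at 450 km, one-day steps
while h > 300e3:
    a = R_E + h
    h += -np.sqrt(MU * a) * density(h) * CD * AREA / MASS * dt   # da/dt * dt
    t += dt
print(f"~{t / 86400 / 365.25:.1f} years from 450 km down to 300 km")
```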
§ CONCLUSION AND OUTLOOK

CubeSats offer the potential to accelerate the development of quantum technologies in space by offering reliable and cost-effective platforms for conducting in-orbit technology demonstrations. The cost-effectiveness of CubeSats derives from the standard containers used to ship and deploy them, which has led to the ability to share launch costs between a large number of users. At the same time, advances in micro-electronics and RF communication have enabled many advanced experiments to be operated remotely using only COTS components. Together, these advances have made in-orbit experiments accessible to university groups and consortia that were not space users even a decade previously. Some physical parameters associated with optical systems, such as aperture size and diffraction losses, are expected to become relatively more important requirement drivers of the experiment system design. However, this is also an advantage, as it means that from a systems engineering perspective there is now greater flexibility in how to put together a space-based quantum experiment. With these positive developments, we can look forward to more nanosatellite sized experiments that act either as path-finders for more advanced experiments or that execute the scientific experiments themselves. The CQuCoM proposal combines the aforementioned advantages for advanced missions that are at the leading edge of small satellite capabilities.

§ CQUCOM CONSORTIUM

The CQuCoM consortium consists of:
University of Strathclyde: Co-ordination, Mission Operations
Austrian Academy of Sciences: Mission Planning, Scientific Oversight
Clyde Space Ltd: Platform Engineering and Testing
Technical University of Delft: Optical Design, ADCS Design and Algorithms
Ludwig-Maximilian University: Fine-pointing System and WCP Source
University of Padua: Optical Ground Station (MLRO), in collaboration with ASI - Italian Space Agency
National University of Singapore: Entanglement Source and Data Handling

§ COMPETING INTERESTS

The authors declare that they have no competing interests.

§ AUTHOR'S CONTRIBUTIONS

The original CQuCoM concept was conceived jointly by DO, AL, and SG. The other authors are involved either in the development of the mission concept, subsystems, or procedures. All authors read and approved the final manuscript.

§ ACKNOWLEDGEMENTS

DKLO acknowledges QUISCO (Quantum Information Scotland Network). DKLO and AL received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No. 611014 (CONNECT2SEA-4). AL acknowledges support from the National Research Foundation Singapore (NRF-CRP12-2013-02). EC acknowledges support from the Swiss National Science Foundation.

boone2015entanglement Boone, K., Bourgoin, J.-P., Meyer-Scott, E., Heshami, K., Jennewein, T., Simon, C.: Entanglement over global distances via quantum repeaters with satellite links. Physical Review A 91(5), 052325 (2015)
bacsardi2013way Bacsardi, L.: On the way to quantum-based satellite communication. Communications Magazine, IEEE 51(8), 50–55 (2013)
ursin2007entanglement Ursin, R., Tiefenbacher, F., Schmitt-Manderbach, T., Weier, H., Scheidl, T., Lindenthal, M., Blauensteiner, B., Jennewein, T., Perdigues, J., Trojek, P., et al.: Entanglement-based quantum communication over 144 km. Nature Physics 3(7), 481–486 (2007)
nauerth2013air Nauerth, S., Moll, F., Rau, M., Fuchs, C., Horwath, J., Frick, S., Weinfurter, H.: Air-to-ground quantum communication. Nature Photonics 7(5), 382–386 (2013)
pugh2016airborne Pugh, C.J., Kaiser, S., Bourgoin, J.-P., Jin, J., Sultana, N., Agne, S., Anisimova, E., Makarov, V., Choi, E., Higgins, B.L., et al.: Airborne demonstration of a quantum key distribution receiver payload. arXiv preprint arXiv:1612.06396 (2016)
villoresi2008experimental Villoresi, P., Jennewein, T., Tamburini, F., Aspelmeyer, M., Bonato, C., Ursin, R., Pernechele, C., Luceri, V., Bianco, G., Zeilinger, A., et al.: Experimental verification of the feasibility of a quantum channel between space and earth. New Journal of Physics 10(3), 033038 (2008)
yin2013experimental Yin, J., Cao, Y., Liu, S.-B., Pan, G.-S., Wang, J.-H., Yang, T., Zhang, Z.-P., Yang, F.-M., Chen, Y.-A., Peng, C.-Z., et al.: Experimental quasi-single-photon transmission from satellite to earth. Optics Express 21(17), 20032–20040 (2013)
vallone2014experimental Vallone, G., Bacco, D., Dequal, D., Gaiarin, S., Luceri, V., Bianco, G., Villoresi, P.: Experimental satellite quantum communications. Physical Review Letters 115(4), 040502 (2015)
wang2013direct Wang, J.-Y., Yang, B., Liao, S.-K., Zhang, L., Shen, Q., Hu, X.-F., Wu, J.-C., Yang, S.-J., Jiang, H., Tang, Y.-L., et al.: Direct and full-scale experimental verifications towards ground-satellite quantum key distribution. Nature Photonics 7(5), 387–393 (2013)
bourgoin2015free Bourgoin, J.-P., Higgins, B.L., Gigov, N., Holloway, C., Pugh, C.J., Kaiser, S., Cranmer, M., Jennewein, T.: Free-space quantum key distribution to a moving receiver. Optics Express 23(26), 33437–33447 (2015)
Morong2012 Morong, W., Ling, A., Oi, D.K.L.: Quantum optics for space platforms. Optics & Photonics News 23, 42–49 (2012)
NanoQEY2014 Jennewein, T., Grant, C., Choi, E., Pugh, C., Holloway, C., Bourgoin, J., Hakima, H., Higgins, B., Zee, R.: The NanoQEY mission: ground to space quantum key and entanglement distribution using a nanosatellite. In: SPIE Security+ Defence, p. 925402 (2014). International Society for Optics and Photonics
scheidl2013quantum Scheidl, T., Wille, E., Ursin, R.: Quantum optics experiments using the international space station: a proposal. New Journal of Physics 15(4), 043008 (2013)
scheidl2012space Scheidl, T., Ursin, R.: Space-quest: quantum communication using satellites. In: Proceedings of the International Conference on Space Optical Systems and Applications (ICSOS) (2012)
elser2015satellite Elser, D., Gunthner, K., Khan, I., Stiller, B., Marquardt, C., Leuchs, G., Saucke, K., Trondle, D., Heine, F., Seel, S., et al.: Satellite quantum communication via the Alphasat laser communication terminal: quantum signals from 36 thousand kilometers above earth. In: 2015 IEEE International Conference on Space Optical Systems and Applications (ICSOS), pp. 1–4 (2015). IEEE
wu2014strategi Wu, J., Sun, L.: Strategic priority program on space science. Space Science Activities in China 5, 001 (2014)
quesslaunch China launches first-ever quantum communication satellite. http://news.xinhuanet.com/english/2016-08/16/c_135601026.htm (2016)
reich2011troubled Reich, E.S.: Troubled probe upholds Einstein. Journal of Modern Physics 2(4), 210–218 (2011)
JWTICRP NASA: James Webb Space Telescope Independent Comprehensive Review Panel Final Report. (2010). NASA
heidt2000cubesat Heidt, H., Puig-Suari, J., Moore, A., Nakasuka, S., Twiggs, R.: CubeSat: A new generation of picosatellite for education and industry low-cost space experimentation. 14th Annual AIAA/USU Conference on Small Satellites (SSC00-V-5) (2000)
shao2013performance Shao, A., Koltz, E.A., Wertz, J.R.: Performance based cost modeling: Quantifying the cost reduction potential of small observation satellites. In: AIAA Reinventing Space Conference, AIAA-RS-2013-1003, Los Angeles, CA, Oct, pp. 14–17 (2013)
wuerl2015lessons Wuerl, A., Wuerl, M.: Lessons learned for deploying a microsatellite from the international space station. In: Aerospace Conference, 2015 IEEE, pp. 1–12 (2015). IEEE
latt2014estcube Lätt, S., Slavinskis, A., Ilbis, E., Kvell, U., Voormansik, K., Kulu, E., Pajusalu, M., Kuuste, H., Sünter, I., Eenmäe, T., et al.: ESTCube-1 nanosatellite for electric solar wind sail in-orbit technology demonstration. Proceedings of the Estonian Academy of Sciences 63(2), 200 (2014)
muylaert2009qb50 Muylaert, J., Reinhard, R., Asma, C., Buchlin, J., Rambaud, P., Vetrano, M.: QB50: an international network of 50 cubesats for multi-point, in-situ measurements in the lower thermosphere and for re-entry research. In: ESA Atmospheric Science Conference, Barcelona, Spain, pp. 7–11 (2009)
foster2015orbit Foster, C., Hallam, H., Mason, J.: Orbit determination and differential-drag control of planet labs cubesat constellations. arXiv preprint arXiv:1509.03270 (2015)
sarda2010canadian Sarda, K., Grant, C., Eagleson, S., Kekez, D.D., Zee, R.E.: Canadian advanced nanospace experiment 2 orbit operations: two years of pushing the nanosatellite performance envelope. In: ESA Small Satellites, Services and Systems Symposium (2010)
Swartout2015 Swartwout, M.: CubeSat Database. https://sites.google.com/a/slu.edu/swartwout/home/cubesat-database (2015)
CPCubeSats2016 Oi, D.K.L., Ling, A., Grieve, J.A., Jennewein, T., Dinkelaker, A.N., Krutzik, M.: Nanosatellites for quantum science and technology. Contemporary Physics 58(1), 25–52 (2017)
Hevner2011 Hevner, R., Holemans, W., Puig-Suari, J., Twiggs, R.: An Advanced Standard for CubeSats. DigitalCommons@USU (2011)
swartwout2014first Swartwout, M.: The first one hundred cubesats: A statistical look. Journal of Small Satellites 2(2), 213–233 (2013)
spaceflight Spaceflight Industries. <http://www.spaceflight.com/>
TsitasKingston2012 Tsitas, S.R., Kingston, J.: 6U CubeSat commercial applications. Aeronautical Journal 116(1176), 189–198 (2012)
turner2010nps Turner, C.G.: NPS TINYSCOPE program management. Technical report, DTIC Document (2010)
agasid2010study Agasid, E., Rademacher, A., McCullar, M., Gilstrap, R.: Study to Determine the Feasibility of an Earth Observing Telescope Payload for a 6U Nano Satellite. http://www.nsbe-hsc.org/cdst_feasability_report.pdf (2010)
straub2012smallsat Straub, J., Fevig, R., Borzych, T., Church, C., Holmer, C., Hynes, M., Komus, A.: From smallsat to 6U cubesat: A case study in size and mass reduction. In: ACSER 6U CubeSat Low Cost Space Missions Workshop (2012)
skrobot2016cubesat Skrobot, G.: CubeSat Missions to LEO and Beyond. InSight (AV) 2, 2 (2016)
mero2015picasso Mero, B., Quillien, K.A., McRobb, M., Chesi, S., Marshall, R., Gow, A., Clark, C., Anciaux, M., Cardoen, P., De Keyser, J., et al.: Picasso: A state of the art cubesat. 29th Annual AIAA/USU Conference on Small Satellites (SSC15-III-2) (2015)
ISIS6U ISIS: 6U Structure. http://www.isispace.nl/cms/index.php/news/latest-news/120-introducing-the-isis-6u-structure (2015)
Pumpkin6U Pumpkin: Supernova 6U Structure. http://space.skyrocket.de/doc_sdat/supernova-beta.htm (2015)
syrlinks Syrlinks: Very High Data Rate Transmitter in X-Band for CubeSat and NanoSatellites. http://www.syrlinks.com/en/products/cubesats/hdr-x-band-transmitter.html (2015)
kahr2011gps Kahr, E., Montenbruck, O., O'Keefe, K., Skone, S., Urbanek, J., Bradbury, L., Fenton, P.: GPS tracking on a nanosatellite: the CanX-2 flight experience. In: 8th International ESA Conference on Guidance, Navigation & Control Systems, Karlovy Vary, Czech Republic, June, pp. 5–10 (2011)
mason2016minxss Mason, J., Baumgart, M., Woods, T., Hegel, D., Rogler, B., Stafford, G., Solomon, S., Chamberlin, P.: MinXSS CubeSat On-Orbit Performance and the First Flight of the Blue Canyon Technologies XACT 3-axis ADCS. 30th Annual AIAA/USU Conference on Small Satellites (2016)
MAI Maryland Aerospace Industries. http://maiaero.com/
BST Berlin Space Technologies. http://www.berlin-space-tech.com/
nauerth2013airthesis Nauerth, S.: Air to ground quantum key distribution. PhD thesis, LMU (2013)
melen2016integrated Mélen, G.: Integrated quantum key distribution sender unit for hand-held platforms. PhD thesis, LMU (2016)
shi2016random Shi, Y., Chng, B., Kurtsiefer, C.: Random numbers from vacuum fluctuations. Applied Physics Letters 109(4), 041101 (2016)
abellan2016quantum Abellan, C., Amaya, W., Domenech, D., Muñoz, P., Capmany, J., Longhi, S., Mitchell, M.W., Pruneri, V.: Quantum entropy source on an InP photonic integrated circuit for random number generation. Optica 3(9), 989–994 (2016)
bourgoin2015experimental Bourgoin, J.-P., Gigov, N., Higgins, B.L., Yan, Z., Meyer-Scott, E., Khandani, A.K., Lütkenhaus, N., Jennewein, T.: Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations. Physical Review A 92(5), 052339 (2015)
ho09 Ho, C., Lamas-Linares, A., Kurtsiefer, C.: Clock synchronization by remote detection of correlated photon pairs. New Journal of Physics 11(4), 045011 (2009)
cheng15 Cheng, C., Chandrasekara, R., Tan, Y.C., Ling, A.: Space qualified nanosatellite electronics platform for photon pair experiments. IEEE Journal of Lightwave Technology (2015)
tang15 Tang, Z., Chandrasekara, R., Tan, Y.C., Cheng, C., Durak, K., Ling, A.: The photon pair source that survived a rocket explosion. Scientific Reports 6 (2016)
tang14 Tang, Z., Chandrasekara, R., Sean, Y.Y., Cheng, C., Wildfeuer, C., Ling, A.: Near-space flight of a correlated photon system. Scientific Reports 4, 6366 (2014)
tang2016generation Tang, Z., Chandrasekara, R., Tan, Y.C., Cheng, C., Sha, L., Hiang, G.C., Oi, D.K., Ling, A.: Generation and analysis of correlated pairs of photons aboard a nanosatellite. Physical Review Applied 5(5), 054022 (2016)
chandrasekara2016correlated Chandrasekara, R., Tang, Z., Tan, Y., Cheng, C., Sha, L., Hiang, G., Oi, D., Ling, A.: Correlated photon pairs in low earth orbit. In: SPIE Security+ Defence, p. 99960 (2016). International Society for Optics and Photonics
septriani2016thick Septriani, B., Grieve, J.A., Durak, K., Ling, A.: Thick-crystal regime in photon pair sources. Optica 3(3), 347–350 (2016)
chandrasekara15_spie2 Chandrasekara, R., Zhongkan, T., Chuan, T.Y., Cheng, C., Septriani, B., Durak, K., Grieve, J.A., Ling, A.: Deploying quantum light sources on nanosatellites I: lessons and perspectives on the optical system. Proc. SPIE 9615, Quantum Communications and Quantum Imaging XIII, 96150S (2015)
bedington2016nanosatellite Bedington, R., Bai, X., Truong-Cao, E., Tan, Y.C., Durak, K., Zafra, A.V., Grieve, J.A., Oi, D.K., Ling, A.: Nanosatellite experiments to enable future space-based QKD missions. EPJ Quantum Technology 3(1), 12 (2016)
veerappan2016low Veerappan, C., Charbon, E.: A Low Dark Count pin Diode Based SPAD in CMOS Technology. IEEE Transactions on Electron Devices 63(1), 65–71 (2016)
pavia20151 Pavia, J.M., Scandini, M., Lindner, S., Wolf, M., Charbon, E.: A 1× 400 backside-illuminated SPAD sensor with 49.7 ps resolution, 30 pJ/sample TDCs fabricated in 3D CMOS technology for near-infrared optical tomography. IEEE Journal of Solid-State Circuits 50(10), 2406–2418 (2015)
charbon2014single Charbon, E.: Single-photon imaging in complementary metal oxide semiconductor processes. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 372(2012), 20130100 (2014)
maruyama20141024 Maruyama, Y., Blacksberg, J., Charbon, E.: A 1024× 8 700-ps time-gated SPAD line sensor for planetary surface exploration with laser Raman spectroscopy and LIBS. IEEE Journal of Solid-State Circuits 49(1), 179–189 (2014)
charbon2010radiation Charbon, E., Carrara, L., Niclass, C., Scheidegger, N., Shea, H.: Radiation-tolerant CMOS single-photon imagers for multiradiation detection. Technical report, CRC Press (2010)
burri2016linospad Burri, S., Homulle, H., Bruschini, C., Charbon, E.: LinoSPAD: a time-resolved 256× 1 CMOS SPAD line sensor system featuring 64 FPGA-based TDC channels running at up to 8.5 giga-events per second. In: SPIE Photonics Europe, p. 98990 (2016). International Society for Optics and Photonics
Vallone2016PRL Vallone, G., Dequal, D., Tomasin, M., Vedovato, F., Schiavon, M., Luceri, V., Bianco, G., Villoresi, P.: Interference at the single photon level along satellite-ground channels. Phys. Rev. Lett. 116, 253601 (2016)
bourgoin2013comprehensive Bourgoin, J., Meyer-Scott, E., Higgins, B.L., Helou, B., Erven, C., Huebel, H., Kumar, B., Hudson, D., D'Souza, I., Girard, R., et al.: A comprehensive design and performance analysis of low earth orbit satellite quantum communication. New Journal of Physics 15(2), 023006 (2013)
schwartz2016segmented Schwartz, N., Pearson, D., Todd, S., Vick, A., Lunney, D., MacLeod, D.: A segmented deployable primary mirror for earth observation from a cubesat platform. 30th Annual AIAA/USU Conference on Small Satellites (SSC16-WK-3) (2016)
andersen2016falconsat Andersen, G., Asmolova, O., McHarg, M.G., Quiller, T., Maldonado, C.: Falconsat-7: a membrane space solar telescope. In: SPIE Astronomical Telescopes+ Instrumentation, p. 99041 (2016). International Society for Optics and Photonics
champagne2014cubesat Champagne, J., Hansen, S., Newswander, T., Crowther, B.: Cubesat image resolution capabilities with deployable optics and current imaging technology. 28th Annual AIAA/USU Conference on Small Satellites (SSC14-VII-2) (2014)
agasid2013collapsible Agasid, E., Ennico-Smith, K., Rademacher, A.: Collapsible space telescope (CST) for nanosatellite imaging and observation. 27th Annual AIAA/USU Conference on Small Satellites (SSC13-III-4) (2013)
dolkens2015deployable Dolkens, D.: A deployable telescope for sub-meter resolutions from microsatellite platforms. PhD thesis, TU Delft, Delft University of Technology (2015)
colton2016supporting Colton, K., Klofas, B.: Supporting the flock: Building a ground station network for autonomy and reliability. 30th Annual AIAA/USU Conference on Small Satellites (SSC16-IX-05) (2016)
planet Planet Spacecraft Operations and Ground Control Ver.1.2, September 2015. <https://www.planet.com/docs/spec-sheets/spacecraft-ops/>
delabie2013accurate Delabie, T., Vandenbussche, B., Schutter, J.: An accurate and efficient gaussian fit centroiding algorithm for star trackers. In: AAS/AIAA Space Flight Mechanics Meeting (475) (2013)
ortiz2001sub Ortiz, G.G., Lee, S., Alexander, J.W.: Sub-microradian pointing for deep space optical telecommunications network. In: 19th AIAA Int. Comms Satellite Systems Conf., Toulouse, France, pp. 1–16 (2001)
gutierrez2011line Gutierrez, H.L., Gaines, J.D., Newman, M.R.: Line-of-sight stabilization and back scanning using a fast steering mirror and blended rate sensors. In: Infotech@ Aerospace 2011, p. 1659 (2011)
Udrea2013 Udrea, B., Nayak, M., F., A.: Analysis of the Pointing Accuracy of a 6U CubeSat for Proximity Operations and RSO Imaging. 5th International Conference on Spacecraft Formation Flying Missions and Technologies, Munich, Germany (2013)
lyle1971spacecraft Lyle, R., Stabekis, P.: Spacecraft aerodynamic torques. NASA SP-8058, January (1971)
garcia2014atmospheric Garcia, R.F., Doornbos, E., Bruinsma, S., Hebert, H.: Atmospheric gravity waves due to the Tohoku-Oki tsunami observed in the thermosphere by GOCE. Journal of Geophysical Research: Atmospheres 119(8), 4498–4506 (2014)
hegel2016flexcore Hegel, D.: Flexcore: Low-cost attitude determination and control enabling high-performance small spacecraft. 30th Annual AIAA/USU Conference on Small Satellites (SSC16-X-7) (2016)
BRITEADCS Sanders, D.S., Heater, D.L., Peeples, S.R., Sykes, J.K.: Pushing the limits of cubesat attitude control: A ground demonstration. 27th Annual AIAA/USU Conference on Small Satellites (SSC13-III-10) (2013)
weiss2014brite Weiss, W., Rucinski, S., Moffat, A., Schwarzenberg-Czerny, A., Koudelka, O., Grant, C., Zee, R., Kuschnig, R., Matthews, J., Orleanski, P., et al.: BRITE-Constellation: Nanosatellites for precision photometry of bright stars. Publications of the Astronomical Society of the Pacific 126(940), 573–585 (2014)
sarda2016three Sarda, K., Grant, C.C., Zee, R.E.: Three stellar years (and counting) of precision photometry by the BRITE astronomy constellation. 30th Annual AIAA/USU Conference on Small Satellites (SSC16-III-07) (2016)
steyn1999attitude Steyn, W., Hashida, Y.: An attitude control system for a low-cost earth observation satellite with orbit maintenance capability. 13th Annual AIAA/USU Conference on Small Satellites (SSC99-XI-04) (1999)
clark1999spectroscopy Clark, R.N.: Spectroscopy of rocks and minerals, and principles of spectroscopy. Manual of Remote Sensing 3, 3–58 (1999)
modtran MODTRAN. <http://modtran.spectral.com/>
yura1979signal Yura, H.: Signal-to-noise ratio of heterodyne lidar systems in the presence of atmospheric turbulence. Journal of Modern Optics 26(5), 627–644 (1979)
dror1998experimental Dror, I., Sandrov, A., Kopeika, N.S.: Experimental investigation of the influence of the relative position of the scattering layer on image quality: the shower curtain effect. Applied Optics 37(27), 6495–6499 (1998)
Aoki2014Manual Aoki, W., Hełminiak, K., Tajitsu, A.: Subaru telescope high dispersion spectrograph user manual v.2.0.0 (2014)
nanoracks Nanoracks. <http://nanoracks.com/>
klofas2016 Klofas, B.: Planet labs ground station network. In: 13th Annual CubeSat Developers Workshop (2016). Cal Poly SLO. http://mstl.atl.calpoly.edu/~bklofas/Presentations/DevelopersWorkshop2016/
deorbit Inter-Agency Space Debris Coordination Committee. http://www.iadc-online.org/ (2015)
kerr2015general Kerr, E., Macdonald, M.: A general perturbations method for spacecraft lifetime analysis. In: 25th AAS/AIAA Space Flight Mechanics Meeting (2015)
http://arxiv.org/abs/1704.08707v1
{ "authors": [ "Daniel KL Oi", "Alex Ling", "Giuseppe Vallone", "Paolo Villoresi", "Steve Greenland", "Emma Kerr", "Malcolm Macdonald", "Harald Weinfurter", "Hans Kuiper", "Edoardo Charbon", "Rupert Ursin" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170427182021", "title": "CubeSat quantum communications mission" }
http://arxiv.org/abs/1704.08238v2
{ "authors": [ "Nina Holden", "Yuval Peres", "Alex Zhai" ], "categories": [ "math.PR", "60A99" ], "primary_category": "math.PR", "published": "20170426174606", "title": "Gravitational allocation for uniform points on the sphere" }
http://arxiv.org/abs/1704.08275v1
{ "authors": [ "Steven P. Lund", "Hari K. Iyer" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20170426181310", "title": "Likelihood Ratio as Weight of Forensic Evidence: A Closer Look" }
G. Saxena [a] and M. Kaushik [b]
[a] Department of Physics, Govt. Women Engineering College, Ajmer-305002, India
[b] Department of Physics, Shankara Institute of Technology, Kukas, Jaipur-302028, India

We have employed the RMF+BCS (relativistic mean-field plus BCS) approach to study the behaviour of the pf shell with the help of the ground state properties of even-even nuclei. Our present investigations include separation energies, deformations, single particle energies, wave functions, potentials and density distributions. In line with recent experiments showing neutron magicity at N = 32 for Ca isotopes, our results with mass dependent pairing indicate a shell closure at N = 32 in Ca isotopes and a stronger shell closure at N = 34 in the proton deficient nucleus ^48Si, because of a reorganization of the neutron pf shell. In a similar manner, the proton pf shell structure is likely to produce a shell closure at Z = 34, with a doubly magic character for ^84,116Se. We have also included N = 40 isotones and Z = 40 isotopes in our study and predict ^60Ca and ^68Ni as doubly magic nuclei, of which ^60Ca is found near the drip-line of Ca and is a potential candidate for future studies in the chain of Ca isotopes next to doubly magic ^52Ca.

Neutron and proton magic nuclei; Relativistic mean-field plus BCS approach; pf shell nuclei; Doubly magic nuclei; Shell closure.

§ INTRODUCTION

The evolution of shell structure has been a prominent topic in theoretical and experimental nuclear physics for the last two decades. Several regions of the periodic chart have been established as showing new magicity, or the disappearance of traditional magicity, through different theoretical approaches, and confirmed by many experiments. As an example, the appearance of the new magic number N = 16 <cit.> has been corroborated, and the neutron drip-line nucleus ^24O, with N = 16, is now considered a doubly-closed-shell nucleus <cit.>. On the other hand, the disappearance of the conventional magic numbers N = 8, 20 and 28 has been demonstrated in several communications <cit.> - <cit.>. In the recent past, experimental studies have strongly indicated N = 32 as a new magic number in Ca isotopes, due to the high energy of the first 2^+ state in ^52Ca <cit.>. In addition, high precision mass measurements were performed for the neutron rich Ca isotopes ^53Ca and ^54Ca employing the ISOLTRAP mass spectrometer at CERN, confirming the magicity of the nucleus ^52Ca <cit.>. Furthermore, a first experimental spectroscopic study of low-lying states was performed with proton knockout reactions at RIKEN, indicating the magic nature of the nucleus ^54Ca <cit.>. More recently, in 2015, the N = 32 shell closure was confirmed for the exotic isotopes ^52,53K in Ref. <cit.>. This recent evidence for emerging sub-shell gaps at the neutron numbers N = 32 and N = 34 has generated considerable interest in the description of pf-shell nuclei <cit.>. The next benchmark in this region of the pf shell after N = 32 and N = 34 is N = 40 <cit.>, which is important in determining nuclear structure and likely also the location of the neutron drip-line in the Ca isotopic chain <cit.>. The theoretical studies of such nuclei can in general be grouped into three different approaches. These are (i) the ab initio methods, (ii) the macroscopic models with shell corrections and (iii) the self-consistent mean-field and shell model theories.
As far as mean field models are concerned, there are mainly three nuclear mean-field models which are widely used in current calculations and are based on (i) the Skyrme zero range interaction initially employed by Vautherin and Veneroni <cit.>, and Vautherin and Brink <cit.>, (ii) the Gogny force <cit.> with finite range, and (iii) the relativistic mean-field model formulated by Walecka <cit.>, and Boguta and Bodmer <cit.>. During the last several years, theoretical studies of neutron and proton rich nuclei away from the valley of β-stability have been mostly accomplished within the framework of mean-field theories <cit.>, and also employing their relativistic counterparts <cit.>. Recently, the interest in the tensor interaction for nuclear structure has also been revived by the study of the evolution of nuclear properties far from the stability line <cit.>. However, in the context of the effective theories that describe medium-heavy nuclei, the role of the tensor force is still debated <cit.>. In this paper, we target pf shell nuclei with N(Z) = 32 and 34, along with nuclei with N(Z) = 40, to study the shell evolution for complete chains of isotones (isotopes) up to the drip-lines (covering the mass region from A = 48 to A = 122). When nuclei move away from the stability line toward their drip-lines, the corresponding Fermi surface moves closer to zero energy at the continuum threshold, and therefore a significant number of the available single-particle states form part of the continuum. Since pairing correlations are very important for drip-line nuclei, the HF+BCS and RMF+BCS calculations, which include them together with a realistic mean-field, have turned out to be very useful and successful tools, as has been demonstrated recently <cit.>. The principal advantage of the RMF approach is that it provides the spin-orbit interaction in the entire mass region in a natural way. Since the single particle properties near the threshold are prone to large changes as compared to the case of deeply bound levels in the nuclear potential, the RMF approach has proved to be crucial for the study of unstable nuclei up to the drip-lines. The results provided by the RMF+BCS scheme <cit.> are indeed in close agreement with the experimental data and other similar mean-field calculations <cit.>. In our investigations of pf shell nuclei, we use the RMF+BCS approach to calculate ground state properties, viz. single particle energies, deformations, separation energies and pairing energies. The results are compared with recent experimental results and with various popular force parameters, viz. TMA <cit.>, NLSH <cit.>, NL3 <cit.>, PK1 <cit.> and NL3* <cit.>, to validate our results and outcomes. These sets of parameters are for the non-linear interaction and have been commonly used for a variety of mass calculations over the whole periodic region <cit.>, with very good agreement with the available experimental data for the whole mass table. Meanwhile, newer parameter sets belonging to density dependent couplings are DD-ME1 <cit.>, DD-ME2 <cit.> and DD-MEδ <cit.>, which have also provided a more realistic description of neutron matter and finite nuclei. However, due to a few limitations of these density dependent couplings in describing several transitional medium-heavy nuclei <cit.>, and to keep the treatment simple and general, we here prefer the non-linear couplings to describe our results for pf shell nuclei, which may possess transitional character <cit.>.
The non-linear interaction in RMF theory has proven to be very effective, consistent and reliable <cit.> and has recently been successfully applied to study deformed nuclei at the proton drip line <cit.>, in the study of neutron stars <cit.>, in describing properties of superheavy nuclei <cit.>, and in determining the hadron-quark phase transition line in the QCD phase diagram <cit.>. Therefore, we strongly believe that the conclusions drawn here using RMF+BCS calculations with the non-linear effective interaction will not be affected very much by other interactions or approaches.

§ RELATIVISTIC MEAN-FIELD MODEL

The calculations have been carried out using the following model Lagrangian density with nonlinear terms both for the σ and ω mesons, as described in detail in Refs. <cit.>.

L = ψ̅ [iγ^μ∂_μ - M]ψ + 1/2 ∂_μσ∂^μσ - 1/2 m_σ^2σ^2 - 1/3 g_2σ^3 - 1/4 g_3σ^4 - g_σψ̅σψ - 1/4 H_μνH^μν + 1/2 m_ω^2ω_μω^μ + 1/4 c_3 (ω_μω^μ)^2 - g_ωψ̅γ^μψω_μ - 1/4 G_μν^a G^aμν + 1/2 m_ρ^2ρ_μ^aρ^aμ - g_ρψ̅γ_μτ^aψρ^μa - 1/4 F_μνF^μν - eψ̅γ_μ(1-τ_3)/2 A^μψ,

In the above Lagrangian the field tensors H, G and F for the vector fields are defined by

H_μν = ∂_μω_ν - ∂_νω_μ
G_μν^a = ∂_μρ_ν^a - ∂_νρ_μ^a - 2 g_ρ ϵ^abcρ_μ^bρ_ν^c
F_μν = ∂_μA_ν - ∂_νA_μ

and other symbols have their usual meaning. Now, one can apply the BCS approximation with a state-independent pairing force, but the high density of single-particle states in the particle continuum immediately results in an unrealistic increase of the BCS pairing correlations <cit.>. To avoid such an increase, one may artificially readjust the pairing strength constant, but then the spatial asymptotic properties of the solutions become incorrect and, due to a nonzero occupation probability of quasibound states, an unphysical gas of neutrons surrounding the nucleus appears <cit.>. These deficiencies can be healed by applying the state-dependent pairing gap version, where the pairing gap is calculated for every single-particle state. However, it is found that even with state dependent BCS calculations a surplus density above the asymptotic limit appears at large distances, resulting in an incorrect behavior of the density <cit.>. Therefore, one may think that excluding the scattering states from the pairing phase space could be a decisive solution to the problem. Indeed, such a scheme has been introduced by Sandulescu et al. <cit.>, in which the effect of the resonant continuum on pairing correlations is introduced through the scattering wave functions located in the region of the resonant states. These states are found by solving the relativistic mean field equations with scattering-type boundary conditions for the continuum spectrum, and this scheme (rBCS) has been found very effective for the description of drip line nuclei <cit.>. In this paper, for the chains of Ca and Ni isotopes, we have found the resonant states 1g_9/2 and 1g_7/2, respectively, but these states start to play an important role through their resonant part only for N > 40 and N > 50, respectively. With this in view, the contributions of the resonant states are less significant for the discussion of the pf shells considered in this paper, where we are concerned only up to N = 40. Moreover, we are mainly dealing with doubly magic nuclei in this paper, and for doubly magic nuclei the results of RMF+BCS and RMF+rBCS (including the resonant part as in Sandulescu et al. <cit.>) are found to be exactly the same, because for a doubly magic nucleus the pairing energy vanishes and the resonant state does not contribute to the pairing energy.
Therefore, in the context of this paper, where we target only pf shell nuclei (N or Z ≤ 40), it is reasonable to perform state dependent BCS calculations <cit.> based on a single-particle spectrum in which the continuum is replaced by a set of positive energy states generated by enclosing the nucleus in a spherical box. We have chosen a box radius R = 30 fm, the same as taken in Ref. <cit.>; the results are not much affected by changing the box radius around R = 30 fm, at least for the doubly magic nuclei in the pf shell considered here. Thus the gap equations have the standard form for all the single particle states, i.e.

Δ_j_1 = -(1/2) (1/√(2j_1+1)) ∑_j_2 [ <(j_1^2) 0^+ |V| (j_2^2) 0^+> / √((ε_j_2 - λ)^2 + Δ_j_2^2) ] √(2j_2+1) Δ_j_2

where ε_j_2 are the single particle energies and λ is the Fermi energy, whereas the particle number condition is given by ∑_j (2j+1) v^2_j = N. In the calculations we use for the pairing interaction a delta force, i.e., V = -V_0 δ(r). Initially, we use for our present study the same value of V_0 (V_0 = 350 MeV fm^3) that was determined in Ref. <cit.> by obtaining a best fit to the binding energy of Ni isotopes. Since we are covering the region from A = 48 to A = 122, we also examine our results using a mass dependent (1/A) pairing strength, which may be an effective way to handle a large mass region. Apart from its simplicity, the applicability and justification of using such a δ-function form of interaction has been discussed in Ref. <cit.>, whereby it has been shown in the context of HFB calculations that the use of a delta force in a finite space simulates the effect of a finite range interaction in a phenomenological manner (see also <cit.> for more details). The pairing matrix element for the δ-function force is given by

<(j_1^2) 0^+ |V| (j_2^2) 0^+> = -(V_0/8π) √((2j_1+1)(2j_2+1)) I_R

Here V_0 is the pairing strength, taken as 350 MeV fm^3 for the initial calculations and then used with the mass dependence for the further calculations, as mentioned above. The mass dependent pairing strength used here is given by

V_0 = 25,500/A MeV fm^3

In equation (3), I_R is the radial integral having the form

I_R = ∫ dr (1/r^2) (G^⋆_j_1 G_j_2 + F^⋆_j_1 F_j_2)^2

Here G_α and F_α denote the radial wave functions for the upper and lower components, respectively, of the nucleon wave function, expressed as

ψ_α = (1/r) ( i G_α 𝒴_j_α l_α m_α ; F_α σ·r̂ 𝒴_j_α l_α m_α ),

and satisfying the normalization condition

∫ dr {|G_α|^2 + |F_α|^2} = 1

In Eq. (6) the symbol 𝒴_jlm has been used for the standard spinor spherical harmonics with the phase i^l. The coupled field equations obtained from the Lagrangian density in (1) are finally reduced to a set of simple radial equations, which are solved self consistently along with the equations for the state dependent pairing gap Δ_j and the total particle number N for a given nucleus. The relativistic mean field description has been extended to deformed nuclei of axially symmetric shapes by Gambhir, Ring and their collaborators <cit.> using an expansion method. The scalar, vector, isovector and charge densities, as in the spherical case, are expressed in terms of the spinors ψ_i, their conjugates ψ_i^+, the operator τ_3, etc. These densities serve as sources for the fields ϕ = σ, ω^0, ρ^0 and A^0, which are determined by the Klein-Gordon equation in cylindrical coordinates. Thus a set of coupled equations, namely the Dirac equation with potential terms for the nucleons and the Klein-Gordon type equations with sources for the mesons and the photon, is obtained. These equations are solved self consistently. For this purpose, as described above, the well-tested basis expansion method has been employed <cit.>. For further details of these formulations we refer the reader to refs. <cit.>.
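To illustrate how the state-dependent gap equations (2)-(5) are iterated in practice, a minimal numerical sketch is given below. It assumes a single constant radial overlap I_R for all pairs of levels and a purely illustrative valence level scheme outside an N = 28 core; in the actual calculation I_R is evaluated from the Dirac radial wave functions G and F of Eq. (7), so the sketch should not be read as reproducing any number in this paper.

```python
import numpy as np

# Toy state-dependent BCS iteration for the gap equation (2) with the
# delta-force matrix element (3), assuming I_R = I0 for every level pair.
# All level energies and I0 are illustrative assumptions.

A = 52                               # mass number, e.g. for 52Ca
V0 = 25500.0 / A                     # mass dependent pairing strength, Eq. (4) [MeV fm^3]
I0 = 0.015                           # assumed constant radial overlap [fm^-3]
N_VAL = 4                            # valence neutrons above the N = 28 core

levels = [("2p3/2", 1.5, -6.0), ("2p1/2", 0.5, -4.5),
          ("1f5/2", 2.5, -3.0), ("1g9/2", 4.5, -1.0)]
eps = np.array([e for _, _, e in levels])
deg = np.array([2.0 * j + 1.0 for _, j, _ in levels])   # (2j+1) degeneracies

def v2(lam, delta):
    """Standard BCS occupation probabilities v_j^2."""
    E = np.sqrt((eps - lam) ** 2 + delta ** 2)
    return 0.5 * (1.0 - (eps - lam) / E)

delta = np.ones_like(eps)            # initial gaps [MeV]
for _ in range(200):
    # fix the Fermi energy lambda by bisection: sum_j (2j+1) v_j^2 = N_VAL
    lo, hi = eps.min() - 20.0, eps.max() + 20.0
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if (deg * v2(lam, delta)).sum() < N_VAL else (lo, lam)
    # gap equation (2): with a constant overlap every Delta_j is the same
    E = np.sqrt((eps - lam) ** 2 + delta ** 2)
    delta = np.full_like(eps, V0 * I0 / (16.0 * np.pi) * (deg * delta / E).sum())

for (name, _, _), d, occ in zip(levels, delta, v2(lam, delta)):
    print(f"{name}: Delta = {d:5.2f} MeV, v^2 = {occ:4.2f}")
print(f"Fermi energy lambda = {lam:6.2f} MeV")
```

With a constant overlap, all the gaps Δ_j collapse to a single value; the state dependence of the realistic calculation enters entirely through I_R.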
§ RESULTS AND DISCUSSION

Magic numbers between N(Z) = 20 and N(Z) = 50 are mainly governed by the position and separation of single particle states in and from the pf shell. For a detailed study in this region on both the neutron and proton sides, we first carry out RMF+BCS calculations including the deformation degree of freedom (referred to throughout as deformed RMF) to search for spherical or nearly spherical isotones/isotopes. It is found that some isotones/isotopes actually exhibit very small or almost no deformation. Therefore, for such cases of negligible deformation, we take advantage of the RMF+BCS approach for spherical shapes (referred to throughout as RMF) for the analysis of the results. This description in terms of spherical single particle wave functions and energy levels treats shell closures and magicity with more convenience and transparency. In addition, within the spherical framework of RMF, the pairing gaps, the total pairing energy, the contributions of neutron and proton single particle states, etc. can be demonstrated with more clarity. To study the pf shell, we first turn to the emergence of the new shell closures (N = 32 and 34) in exotic Ca isotopes, which have been indicated by recent experiments <cit.>. First it was observed that, due to the high excitation energy of the first 2^+ state of ^52Ca as compared to its neighbouring nuclei, an N = 32 shell closure may develop <cit.>. After that, a confirmation was obtained in 2013 from high-precision mass measurements of several isotopes ranging from ^51Ca up to ^54Ca <cit.>. In addition, from the more recent measurement of the 2_1^+ energy in ^54Ca, it is also found that ^54Ca could be magic as well <cit.>. To investigate this possibility for ^52Ca and ^54Ca, we have displayed in Fig. 1 the two neutron shell gap [S_2n(N, Z) - S_2n(N+2, Z)], determined from RMF calculations with different sets of parameters, i.e. TMA <cit.>, NLSH <cit.>, NL3 <cit.>, PK1 <cit.> and NL3* <cit.>. Ca isotopes have been examined in detail with the TMA and NL-SH parameters in Ref. <cit.>, where the constant pairing strength V_0 = 350 MeV fm^3 has been used. This constant pairing strength V_0 = 350 MeV fm^3 has also been used in the description of other magic nuclei <cit.>. We have used the same pairing strength for the other parameters as well, and the results are shown together with the available experimental data <cit.> and the available results of calculations recently done with a Skyrme parameter set with tensor interaction (SLy5) <cit.> for comparison.
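The quantity plotted in Fig. 1 is straightforward to evaluate from total binding energies; the sketch below shows the bookkeeping, with placeholder binding energies (in the paper these come from the RMF+BCS calculation or from the experimental mass data).

```python
# Bookkeeping for the quantity plotted in Fig. 1, evaluated from total
# binding energies BE(N, Z). The binding energies below are placeholders.

def s2n(be, N, Z):
    """Two-neutron separation energy S_2n(N, Z) = BE(N, Z) - BE(N-2, Z)."""
    return be[(N, Z)] - be[(N - 2, Z)]

def shell_gap(be, N, Z):
    """S_2n(N, Z) - S_2n(N+2, Z); a peak signals a shell closure at N."""
    return s2n(be, N, Z) - s2n(be, N + 2, Z)

# hypothetical binding energies [MeV] for Ca (Z = 20) isotopes around N = 32
be = {(30, 20): 427.5, (32, 20): 438.0, (34, 20): 445.0, (36, 20): 451.5}

print(f"shell gap at N = 32: {shell_gap(be, 32, 20):.2f} MeV")
```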
It is indeed gratifying to note that mass dependent pairing together with the NL3* parameter considerably improves the result for ^52Ca, as can be seen from Fig. 1. These results for the complete chain of Ca isotopes are labelled RMF* in the diagram and shown by magenta squares. Besides the better peak obtained in Fig. 1 for ^52Ca with RMF*(NL3*), one can also observe sudden peaks for the other known doubly magic nuclei, viz. ^40Ca and ^48Ca. In addition, from Fig. 1, no indication is found for a magic character of ^54Ca. Moving further, there is again a large peak for ^60Ca, much stronger than for ^52Ca and similar to the peaks obtained for ^40Ca and ^48Ca, indicating an even more strongly doubly magic nucleus ^60Ca. In line with the recent experiments on Ca isotopes <cit.>, the next doubly magic nucleus ^60Ca is predicted here for future experiments, which may be an interesting and remarkable finding owing to its double magicity and its position near the neutron drip-line of Ca. The above results for ^52,54,60Ca are further supported by the pairing energy contributions from protons and neutrons, the separation energies, and the wavefunction characteristics. From the above discussion, it is important to investigate the neutron shell closures N = 32, 34 and 40 in detail for the full isotonic chains up to their drip-lines. In addition, from isospin symmetry considerations <cit.>, it is also useful to study the proton shell closures Z = 32, 34 and 40 for the full isotopic chains up to their drip-lines. Therefore, in the following we examine these isotonic/isotopic chains in detail using the RMF approach with mass dependent pairing and the NL3* parameter, which has produced a better outcome than the constant pairing strength and the various other parameters, as shown above in the case of ^52Ca.

§.§ N = 32 and 34

Studying the N = 32 and N = 34 shell closures simultaneously is a very interesting situation for various theoretical and experimental investigations; therefore, in the following discussion we examine N = 32 and N = 34 within the framework of RMF theory using the nuclei ^52Ca and ^54Ca. To show the magicity of N = 32 or N = 34, we choose some spherical or nearly spherical nuclei in the isotopic chains of Si, Ca and Ni. It has been shown through various studies <cit.>, and we have also found using the deformed RMF approach <cit.>, that except for a few cases all the nuclei in the Ca and Ni isotopic chains are spherical or nearly spherical in nature. Conversely, many of the Si isotopes are found to be strongly deformed in various experimental and theoretical studies <cit.>. However, ^22,34,48Si are found to have spherical shapes, as has already been shown using the deformed RMF approach <cit.>. From the deformed RMF approach it is found that ^42Si exhibits shape coexistence with an oblate ground state, the deformation parameter being β_2 = -0.37 <cit.>. Moving towards the neutron rich side, for ^44,46,48Si the second minimum gradually becomes less pronounced, and for the drip-line isotope ^48Si only one minimum contributes, indicating a completely spherical ground state configuration. This allows us to choose ^46,48Si as nearly spherical/spherical nuclei for our study. In the following discussion we mainly select ^46Si, ^52Ca and ^60Ni to investigate the shell closure at N = 32, and ^48Si, ^54Ca and ^62Ni for the shell closure at N = 34, using the spherical RMF approach. With these remarks, in Fig. 2 we display our results for the neutron single particle energies for the isotonic chains of N = 28, 30, 32 and 34.
These results are obtained using the NL3* parameter <cit.> with the mass dependence of the pairing strength taken into account. We have also marked the experimental values of the single particle energies of the known doubly magic nuclei <cit.>. These values are shown with filled stars in the lower left panel, and it is gratifying to note that our calculated single particle energies using the NL3* parameter and mass dependent pairing are indeed very close to the experimental values, especially for ^48Ca, which gives us confidence in interpreting the shell closures on the basis of the single particle energies. The gap after the neutron 1f_7/2 state is well reflected in the figure for ^48Ca and ^56Ni, as shown in the lower left panel of Fig. 2. However, this shell gap is weakened in ^42Si, suggesting a disappearance of the N = 28 shell closure, similar to refs. <cit.>. This weakening can be attributed to the inclusion of deformation, which is a matter for separate discussion outside the main context of this paper. Now moving towards the more neutron rich case of N = 32 (upper right panel), the next 4 neutrons (28 + 4 = 32 neutrons) fill the 2p_3/2 and 2p_1/2 states located above the 1f_7/2 state. From the upper right panel in Fig. 2, it is evident that the 2p_3/2 and 2p_1/2 states are very close to each other in ^46Si and are occupied by these four neutrons simultaneously. However, moving towards higher Z (from Z = 14 to Z = 20), i.e. in ^52Ca, these two states 2p_3/2 and 2p_1/2 become separated, as shown in Fig. 2 for the N = 32 isotones. This separation results in a shell gap at N = 32 in ^52Ca, similar to recent results <cit.>. It is also observed from Fig. 2 that the state with higher angular momentum (neutron 1f_5/2) moves deeper with increasing proton number (Si to Ni). For the Si isotopes, the 1f_5/2 state is far separated from the closely spaced 2p_3/2 and 2p_1/2 states, whereas for the Ca isotopes, owing to the larger number of protons, the 1f_5/2 state is quite close to the 2p_1/2 state, resulting in a gap after the 2p_3/2 state which may even lead to an N = 32 shell closure. However, this shell gap is not as strong as that found for a conventional magic number, such as the gap after 1f_7/2 in ^48Ca shown in the lower left panel. Moving towards the more proton rich side in the Ni isotopes, the 1f_5/2 state moves even below the 2p_3/2 and 2p_1/2 states, and this filling of the 1f_5/2 state before the 2p_3/2 and 2p_1/2 states rules out the possibility of an N = 32 shell gap in ^60Ni or in more proton rich nuclei. From Fig. 2 it is also apparent that for the N = 34 isotones (lower right panel) at lower Z, the 1f_5/2 state lies significantly above the 2p_3/2 and 2p_1/2 states, as shown in the case of ^48Si, and gives rise to a new shell closure at N = 34 for lower Z nuclei like Si. It can also be anticipated from the large gap seen in Fig. 2 that the N = 34 shell closure is stronger than the N = 32 one. Consequently, ^48Si may be considered a stronger spherical doubly magic candidate for future experiments in the same mass region as ^52Ca. To check the parameter dependence of the above results on the N = 32 and N = 34 shell closures, it is important and interesting to repeat the calculations with different parameter sets. With this in view, in Table 1 we have tabulated results for a few more parameter sets for comparison. We have performed the RMF calculations with mass dependent pairing strength using the TMA <cit.>, NLSH <cit.>, NL3 <cit.>, PK1 <cit.> and NL3* <cit.> force parameters.
The energy gaps between the 2p_3/2 and 2p_1/2 states for N = 32, and between the 2p_1/2 and 1f_5/2 states for N = 34, are tabulated for the different parameter sets in Table 1. From Table 1 it is clear that for ^52Ca a similar shell gap between the 2p_3/2 and 2p_1/2 states is also found for the other parameter sets. Using mass dependent pairing with the TMA and NLSH parameters (see columns 3 and 4) this gap is rather small [1.48 MeV for TMA and 1.59 MeV for NLSH] compared with the other parameter sets (columns 5, 6, 7), but it is larger than the gap calculated using the constant pairing strength in Ref. <cit.>. In the calculations of Ref. <cit.>, this gap is found to be 1.37 MeV and 1.48 MeV with the TMA and NL-SH parameters, respectively. Therefore it may be concluded from this comparison that the mass dependent pairing considered here provides a better treatment than the constant pairing calculation of Ref. <cit.>, especially for the new shell closure in the Ca isotopes. In a similar manner, the gap between the 2p_1/2 and 1f_5/2 states is found to be significant for the other parameter sets as well, resulting in a stronger shell closure at N = 34 for ^48Si. It is important to note from Table 1 that the present results with the NL3* parameter are consistent with and supported by the other parameter sets. To get further support for the above results on the shell closure in ^52Ca, we display in Fig. 3 the occupancy (number of neutrons) of the neutron single particle states 1f_7/2, 2p_3/2, 2p_1/2 and 1f_5/2 (pf shell) for the nuclei ^46Si, ^52Ca and ^60Ni from the N = 32 isotonic chain, using the TMA, NL-SH, NL3, PK1 and NL3* parameters. The very first observation from Fig. 3 is that the results from all parameter sets are similar. It can also be observed that for all parameter sets the occupancy of the 1f_7/2 state is unchanged for all the considered nuclei, whereas the occupancies of the other neutron states 2p_3/2, 2p_1/2 and 1f_5/2 change significantly. From Fig. 3 it is evident that the occupancy of the 1f_5/2 state for ^46Si is zero, since 1f_5/2 lies around 3 MeV above the 2p_3/2 and 2p_1/2 states, as mentioned in Table 1. In ^46Si these two states 2p_3/2 and 2p_1/2 are occupied together, but moving towards ^52Ca the occupancy of the 2p_1/2 state decreases and becomes zero. This is essential for the development of the N = 32 shell closure, for which the 2p_3/2 state should also attain its maximum occupancy. Indeed, the occupancy of 2p_1/2 becomes exactly zero while that of 2p_3/2 increases up to its maximum, accommodating all four neutrons with 2p_1/2 and 1f_5/2 vacant. The gap between 2p_3/2 and 2p_1/2 therefore leads to the shell closure characteristic at N = 32. As stated above, all the parameter sets are favourable and clearly show the magicity at N = 32, as can be seen from Fig. 3. Moving further towards ^60Ni, the 1f_5/2 state goes much deeper, accommodates more particles than 2p_3/2, and hence destroys the shell closure that had developed for ^52Ca. The calculations with TMA, NL-SH, NL3, PK1 and NL3* all give the same kind of results and behaviour for these states. Next, in Fig. 4, we show the occupancy (number of neutrons) of the neutron single particle pf states for the nuclei ^48Si, ^54Ca and ^62Ni from the N = 34 isotonic chain. From Fig. 4 it can be observed that the 2p_3/2 and 2p_1/2 states are fully occupied for ^48Si while 1f_5/2 has zero occupancy. This result is the same for all parameter sets and hence supports the magic character of N = 34 in ^48Si. It is worth pointing out that, moving from ^48Si to ^54Ca, the occupancies of the 2p_3/2 and 2p_1/2 states decrease whereas that of the 1f_5/2 state increases up to ^62Ni.
Because the 1f_5/2 state fills together with the 2p_3/2 and 2p_1/2 states in ^54Ca, there is no possibility of an N = 34 shell closure in ^54Ca or in heavier nuclei. However, this kind of uniform filling of the 1f_5/2 state is not observed in ^52Ca, as illustrated above and in Fig. 3. Therefore, it can be concluded that ^52Ca and ^48Si possess magic character, while no sign of a shell closure at N = 34 is found in ^54Ca.

§.§ Z = 32 and 34

After the discussion of the neutron shell closures at N = 32 and 34, it is expected from isospin symmetry considerations <cit.> that on the proton side Z = 32 and Z = 34 should exhibit similar shell closures. With this in view, we have performed, in a similar manner, our deformed RMF+BCS calculations for the isotopic chains of Z = 32 and Z = 34. Most of the nuclei in these isotopic chains are found to be deformed, but several nuclei are actually spherical or nearly spherical. These are ^60,82,114Ge for Z = 32 and ^62,84,116Se for Z = 34, corresponding to N = 28, 50 and 82. It is important to note that these nuclei cover the complete isotopic chains from the proton rich side to the neutron rich side as one moves from N = 28 to N = 82. It is worth mentioning here that N = 82 is found to be the neutron drip-line for the Ge (Z = 32) and Se (Z = 34) isotopic chains. For these nuclei we have performed spherical RMF+BCS calculations using mass dependent pairing with the NL3* parameter, and it is gratifying to note that Z = 34 indeed shows a shell closure characteristic towards the neutron rich side. To elaborate on this, in Fig. 5 we display our results for the proton single particle energies of all the above mentioned nuclei. From Fig. 5 it is observed that, on the proton rich side at ^62Se, the proton 1f_5/2 state initially lies between the 2p_3/2 and 2p_1/2 states. Moving towards the neutron rich side, the 1f_5/2 state drops below these 2p_3/2 and 2p_1/2 states and consequently creates a significant gap (3.1 MeV for N = 50 and 4.1 MeV for N = 82), giving rise to a strong shell closure at Z = 34 for the nuclei with N = 50 and N = 82. Therefore, our calculations predict ^84,116Se as doubly magic nuclei, with a shell closure at Z = 34 analogous to that at N = 34. Moreover, the pairing energy contributions from protons and neutrons are also found to vanish, which supports the magic character of both nuclei ^84,116Se. By comparison, it can be concluded that on the neutron side (N = 32) the empty 1f_5/2 state lies above 2p_3/2 and 2p_1/2 in ^52Ca, giving rise to the N = 32 shell closure, whereas on the proton side, for the Z = 32 isotopes, the 1f_5/2 state always lies below the 2p_3/2 and 2p_1/2 states; the filling of protons into the 1f_5/2 state right after the 1f_7/2 state (28 + 6 = 34) thus leaves no possibility of a Z = 32 shell closure. It should also be mentioned that ^116Se and ^48Si are both drip-line nuclei, for Z = 34 and Z = 14 respectively, and that these new shell closures at Z = 34 and N = 34 are due to the reorganization of the single particle levels in the vicinity of the drip-lines. Again, these results have been tested for different box radii, and the above conclusions are found to be consistent.

§.§ N = 40 and Z = 40

The gap created between the pf shell and the 1g_9/2 state is responsible for the shell closure at N = 40 or Z = 40. It is therefore again interesting to investigate the behaviour of the pf shell, and consequently the respective positions of the pf shell single particle states and the 1g_9/2 state, for N(Z) = 40.
To visualize this shell closure we have performed calculations for all the isotones of N = 40 and the isotopes of Z = 40 with the deformed RMF+BCS approach, as done in the previous subsections, and found that N = 40 is spherical on the neutron rich side, ranging from Z = 16 to Z = 30, with zero quadrupole deformation parameter (β_2 = 0), whereas Z = 40 (Zr) is spherical only for N = 50 and N = 82. For these nuclei we have once again applied the spherical RMF+BCS approach using mass dependent pairing with the NL3* parameter, and some of the results are plotted in Fig. 6, where we show the neutron single particle energies of various states for the N = 40 isotones with Z = 16 - 32. From Fig. 6 it is evident that the N = 40 shell closure is due to the gap between the neutron pf shell and the neutron 1g_9/2 state. However, owing to the variation in the positions of the pf shell states, this gap changes as one moves from Z = 16 to Z = 32. For Z = 16 - 20 the neutron 1f_5/2 state lies above the 2p_3/2 and 2p_1/2 states, and the variation is such that the gap between the 1g_9/2 and 1f_5/2 states increases from Z = 16 to Z = 20. At Z = 20, for ^60Ca, the 1f_5/2 state produces the gap with 1g_9/2 at its maximum value of 5.4 MeV, as can be seen from Fig. 6. Moving towards higher Z (Z > 20), the 1f_5/2 state gets deeper because of its higher angular momentum compared with the 2p_3/2 and 2p_1/2 states. Beyond Z = 20, the gap responsible for N = 40 thus arises between 2p_1/2 and 1g_9/2, and for Z = 28 it is found to be 4.3 MeV, which makes ^68Ni a doubly magic candidate. This doubly magic candidate ^68Ni has been identified by Broda et al. <cit.> and recently by Bentley et al. <cit.>. We have also analyzed the behaviour of the pf shell in the Z = 40 (Zr) isotopes. In a similar manner, we found only two isotopes which are spherical, i.e. ^90Zr and ^122Zr, as mentioned above. For these isotopes it is seen that there is no reorganization of the proton pf shell (1f_7/2, 1f_5/2, 2p_3/2 and 2p_1/2 states) and the proton 1g_9/2 state: these states maintain the same positions in moving from ^90Zr to ^122Zr, as can be seen in the inset of Fig. 6. It is found that the energy gap between the pf shell and the 1g_9/2 state for ^90Zr and ^122Zr is around 1 MeV, which is not very pronounced. Therefore, for the Zr isotopes, Z = 40 is not found to be a proton shell closure, unlike the N = 40 neutron shell closure discussed above. An important outcome of the above study is the appearance of the N = 40 shell closure for Z = 16 to Z = 30. Among these nuclei, ^60Ca and ^68Ni are the most pronounced shell closures, as mentioned above (the gaps are 5.4 MeV and 4.3 MeV respectively, as also indicated in Fig. 6). As already stated, the magicity of the nucleus ^52Ca has recently been confirmed employing the ISOLTRAP mass spectrometer at CERN <cit.>. So far the mass of ^58Ca is known <cit.>, and one may therefore expect the determination of the mass of ^60Ca in the near future.

§.§ Ground state properties of predicted magic nuclei in pf shell

From the above detailed investigations, we predict a shell closure at ^52Ca and strong shell closures at ^60Ca, ^48Si and ^68Ni, along with ^84,116Se, in the pf shell. For all of these nuclei the calculations are carried out within the framework of the deformed RMF approach (axially deformed configuration), and these calculations have yielded valuable results for the ground state properties, such as binding energies, rms proton and neutron radii, two proton and two neutron separation energies, deformations, etc. It is interesting to get more insight into the shapes of these nuclei. In Fig.
7 we have plotted the binding energy curves as a function of the quadrupole deformation parameter, obtained from constrained RMF calculations including the deformation degree of freedom, for all the above nuclei. From Fig. 7 it is worth noting that all these nuclei show a spherical ground state configuration, which validates the above results interpreted using the spherical RMF approach. A sharp minimum is readily seen in ^60Ca, comparable to that in ^52Ca, suggesting an even stronger spherical character in ^60Ca. Similarly, very sharp minima are also observed for ^68Ni, ^84Se and ^116Se. In Table 2 we present some important ground state properties of these nuclei, calculated with the axially deformed RMF approach using the NL3* parameter. Moreover, guided by the predicted shell closures and for a systematic comparison with experimental data, we have also calculated the ground state properties of the chains of Si, Ca, Ni and Se isotopes using the axially deformed RMF approach with the NL3* parameter. In Fig. 8 we have plotted the two neutron separation energy S_2n for the chains of Si, Ca, Ni and Se isotopes. To demonstrate the validity of the RMF calculations, we have also compared our results for the two neutron separation energy with one of the popular nonrelativistic approaches, viz. the Skyrme-Hartree-Fock method with the HFB-17 functional given by Goriely et al. <cit.>. From Fig. 8 it is evident that the results of the RMF calculations are in fairly good agreement with the available experimental data and with the results of the Skyrme-Hartree-Fock method for these isotopic chains. For the Si isotopes, the two neutron separation energy becomes negative after N = 34 in the RMF calculations, from which we conclude that the two neutron drip line for Si isotopes is at N = 34. Interestingly, this Si nucleus with N = 34 (^48Si) is one of our predicted doubly magic nuclei, as mentioned earlier, and provides an example of a drip line doubly magic nucleus with the new neutron magic number N = 34. However, in the HFB calculations <cit.> this nucleus ^48Si is slightly unbound, by 0.46 MeV, placing the two neutron drip line for Si isotopes at N = 32. In a similar way, the drip lines for the Ca, Ni and Se isotopes in the RMF calculations are found at N = 46, N = 70 and N = 82, respectively. Therefore, our predicted doubly magic nucleus ^60Ca is close to the drip line of Ca, and once again another predicted doubly magic nucleus, ^116Se, represents an interesting example of a drip line doubly magic nucleus with the new proton magic number Z = 34. In the HFB calculations the drip lines are found at the slightly different values N = 48, N = 64 and N = 80 for the Ca, Ni and Se isotopes, respectively. A sharp decrease in the value of S_2n just after a magic number can be seen in Fig. 8. This sharp fall occurs after ^48Si, ^52Ca, ^60Ca, ^68Ni, ^84Se and ^116Se, affirming our conclusions. Other properties of these isotopic chains, and similar systematic calculations for the isotonic chains of N = 32, 34 and 40 (not shown here), are found to be in excellent agreement with the available experimental data <cit.> and with the HFB calculations <cit.>, and once again fortify our predictions.

§ SUMMARY

In the present investigation we have employed the relativistic mean-field plus BCS (RMF+BCS) approach <cit.> to study the shell closures N(Z) = 32, 34 and 40 in the neutron and proton pf shells extensively.
For our study, the RMF calculations for all the nuclei considered here have first been carried out assuming a deformed shape (deformed RMF); then, for the cases where the nuclei are found to exhibit very small or almost no deformation, the spherical RMF approach has been employed to advantage. For our calculations, mass dependent pairing is found to be more suitable and effective than constant pairing for covering a large mass region. The main body of the results of our calculations includes single particle spectra, two proton and two neutron separation energies, and other ground state properties for complete isotonic and isotopic chains. For the description we have mainly applied the NL3* <cit.> parameter, and we have used various other popular force parameters, viz. TMA <cit.>, NL-SH <cit.>, NL3 <cit.> and PK1 <cit.>, to test our results and outcomes. One of the prime reasons for this study is to investigate the neutron and proton single particle states of the pf shell and, consequently, the appearance of new shell closures. The main motivation behind this study is the recent experimental observations showing new magicity at N = 32 and 34 <cit.>. We have found, through single particle energies and separation energies, that N = 32 behaves like a shell closure in ^52Ca, which is in accord with the recent observations, whereas N = 34 does not show a shell closure in ^54Ca. However, N = 34 is found to possess a stronger shell closure in the proton deficient nucleus ^48Si, which is an example of a doubly magic drip-line nucleus and a potential candidate for future experiments in the same mass region as ^52Ca. On the proton side, Z = 32 is not found to exhibit magic character, but Z = 34 becomes closed, resulting in the doubly magic nuclei ^84,116Se, of which ^116Se is again an example of a doubly magic drip-line nucleus. In addition, we have also focused on the shell closure N(Z) = 40 and found that N = 40 shows a shell closure on the proton deficient side, resulting in double magicity in ^60Ca and ^68Ni. Of these nuclei, the gap between the neutron pf shell and the next neutron 1g_9/2 state, responsible for the N = 40 shell closure, is maximal for ^60Ca. Our further study of the two neutron shell gap for Ca isotopes also confirms that ^60Ca is possibly another important doubly magic nucleus near the drip-line, next to ^52Ca, for future experiments.

§ ACKNOWLEDGEMENTS

The authors would like to thank Prof. H. L. Yadav, Banaras Hindu University, Varanasi, INDIA for his kind guidance and continuous support. The authors are indebted to Prof. L. S. Geng, Beihang University, China for valuable correspondence. One of the authors (G. Saxena) gratefully acknowledges the support provided by Science and Engineering Research Board (DST), Govt. of India under the young scientist project YSS/2015/000952.

ozawaA. Ozawa, T. Kobayashi, T. Suzuki, K. Yoshida, and I. Tanihata, Phys. Rev. Lett. 84 (2000) 5493.kanungo R. Kanungo et al., Phys. Rev. Lett. 102 (2009) 152501.hoffman C. R. Hoffman et al., Phys. Lett. B 672 (2009) 17.tshoo K. Tshoo, et al., Phys. Rev. Lett. 109 (2012) 022501.iwasaki H. Iwasaki et al., Phys. Lett. B 481 (2000) 7.bastin B. Bastin et al., Phys. Rev. Lett. 99 (2007) 022503.watanabeS. Watanabe et al., Phys. Rev. C 89 (2014) 044610.jurado B. Jurado et al., Phys. Lett. B 649 (2007) 43.dimitar D. Tarpanov, H. Liang, N. Van Giai and C. Stoyanov, Phys. Rev. C 77 (2008) 054316.takeuchi S. Takeuchi et al., Phys. Rev. Lett. 109 (2012) 182501.otsuka T. Otsuka and A. Schwenk, Nucl. Phys. News 22(4) (2012) 12.gade A. Gade, et al., Phys. Rev.
C 74 (2006) 021302(R).wienholtz F. Wienholtz, et al., Nature 498 (2013) 346.stepp D. Steppenbeck, et al., Nature 502 (2013) 207.rosenbusch M. Rosenbusch et al., Phys. Rev. Lett. 114 (2015) 202501.gallant A. T. Gallant et al., Phys. Rev. Lett. 109 (2012) 032506.jia J. J. Li, J. Margueron, W. H. Long and N. Van Giai, Physics Letters B, 753 (2016) 97.grasso-ca M. Grasso, Phys. Rev. C 89 (2014) 034316.wang1 Z. H. Wang, J. Xiang, W. H. Long and Z. P. Li, Journal of Phys. G. 42 (2015) 045108.meng J. Meng et al., Phys. Rev. C 65 (2002) 041302(R).vautherin1 D. Vautherin and M. Veneroni,Phys. Lett. B 29 (1969) 203. vautherin D. Vautherin and D. M. Brink,Phys. Rev. C 5 (1972) 626.gognyJ. Dechargé and D. Gogny, Phys. Rev. C 21 (1980) 1568.walecka B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. 16 (1986) 1.boguta J. Boguta, A. R. Bodmer, Nucl. Phys. A 292 (1977) 413.terasakiJ. Terasaki, P. H. Heenen, H. Flocard and P. Bonche, Nucl. Phys. A 600 (1996) 371.dobac00 K. Bennaceur, J. Dobaczewski and M. Ploszajczak, Phys. Lett. B 496 (2000) 154.grassoM. Grasso, N. Sandulescu, Nguyen Van Giai and R. J. Liotta, Phys. Rev. C 64 (2001)064321.sand2N. Sandulescu, Nguyen Van Giai and R. J. Liotta, Phys. Rev. C 61 (2000) 061301(R).bouyA. Bouyssy, J.-F. Mathiot, Nguyen Van Giai, and S. Marcos, Phys. Rev. C 36 (1987) 380.pgr2P-G Reinhard, Rep. Prog. Phys. 52 (1989) 439. gambhirP. Ring, Y. K. Gambhir and G. A. Lalazissis, Comput. Phys. Commun. 105 (1997) 77 and reference therein.suga Y. Sugahara and H. Toki, Nucl. Phys. A 579 (1994) 557.ringP. Ring,Prog. Part. Nucl. Phys. 37 (1996) 193.sharmaM. M. Sharma, M. A. Nagarajan and P. Ring, Phys. Lett. B 312 (1993) 377.meng4J. Meng, Nucl. Phys. 635 (1998) 3.lalaG. A. Lalazissis, D. Vretenar and P. Ring, Phys. Rev. C 57 (1998) 2294.mizuS. Mizutori, J. Dobaczewski, G. A. Lalazissis, W. Nazarewicz and P. G. Reinhard, Phys. Rev. C 61 (2000) 044326.estalM. Del Estal, M. Contelles, X. Vinas and S. K. Patra, Phys. Rev. C 63 (2001) 044321.yadavH. L. Yadav, M. Kaushik and H. Toki, Int. Jour. Modern Physics E 13 (2004) 647.yadav1H. L. Yadav, S. Sugimoto and H. Toki, Mod. Phys. Lett. A 17 (2002) 2523.meng2 J. Meng, H. Toki, J.Y. Zeng, S. Q. Zhang and S. Q. Zhou, Phys. Rev. C 65 (2002) 041302(R).utsuno Y. Utsuno et al., Phys. Rev. C 86 (2012) 051301(R).grasso2 M. Grasso and M. Anguiano, Phys. Rev. C 92 (2015) 054316.sagawa H. Sagawa and G. Colo, Progress in Particle and Nuclear Physics 76 (2014) 1.saxena G. Saxena, D. Singh, M. Kaushik, and S. Somorendro Singh, Canadian Journal of Physics 92 (2014) 253.saxena1G. Saxena, D. Singh, M. Kaushik, H. L. Yadav and H. Toki, Int. Jour. Mod. Phys. E 22 (2013) 1350025.singh D. Singh, G. Saxena, M. Kaushik, H. L. Yadav and H. Toki, Int. Jour. Mod. Phys. E 21 (2012) 1250076.suga-tma L. S. Geng, H. Toki, A. Ozawa and J. Meng, Nucl. Phys. A 730 (2004) 80.ring3-nlshG. A. Lalazissis, D. Vretenar and P. Ring, Phys. Rev. C 63 (2001) 034305.lalanl3 G.A. Lalazisssis, Physical Review C 55 (1997) 540.pk1 W. Long, J. Meng, N. Van Giai and Shan-Gui Zhou, Physical Review C 69 (2004) 034319.nl3star G. A. Lalazissis, S. Karatzikos, R. Fossion, D. Pena Arteaga, A. V. Afanasjev, and P. Ring, Phys. Lett. B 671 (2009) 36.geng1L. S. Geng, H. Toki, S. Sugimoto and J. Meng, Prog. Theor.Phys. 110 (2003) 921.nik T. Niksic, D. Vretenar, P. Finelli, and P. Ring, Phys. Rev. C 66 (2002) 024306.lalaDD G. A. Lalazissis, T. Niksic, D. Vretenar, and P. Ring, Phys. Rev. C 71 (2005) 024312.roca X. Roca-Maza, X. Vinas, M. Centelles, P. Ring, and P. Schuck, Phys. Rev. 
C 84 (2011) 054309.ring-new P. Ring, Journal of Physics: Conference Series 49 (2006) 93.heyde K. Heyde and J. L. Wood, Rev. Mod. Phys. 83 (2011) 1467.otsuka-new T. Otsuka and Y. Tsunoda, J. Phys. G: Nucl. Part. Phys. 43 (2016) 024009.lidia L. S. Ferreira, E. Maglione and P. Ring, EPJ Web of Conf. 117 (2016) 06004.xing Xing Hu Li et al., Int. J. Mod. Phys. D 25 (2016) 1650002.mehta S. Mehta, H. Kaur, B. Kumar, and S. K. Patra, Phys. Rev. C 92 (2015) 054305.junpei J. Sugano, H. Kouno, M. Yahiro, Phys. Rev. D 94 (2016) 014024.nazar W. Nazarewicz, T.R. Werner, and J. Dobaczewski, Phys. Rev. C 50 (1994) 2860.dobaczewski J. Dobaczewski, W. Nazarewicz, T. R. Werner, J. F. Berger, C. R. Chinn, and J. Decharge, Phys. Rev. C 53 (1996) 2809.sandulescu N. Sandulescu, L.S. Geng, H. Toki and G. Hillhouse, Phys. Rev. C 68 (2003) 054323.laneLane A. M., "Nuclear Theory", Benjamin, (1964).ring2Ring P. and Schuck P., "The Nuclear many-body Problem", Springer (1980).dobac-deltaJ. Dobaczewski, H. Flocard and J. Treiner, Nucl. Phys. A 422 (1984) 103.bertsch91G. F. Bertsch and H. Esbensen, Ann. Phys. (N.Y.), 209 (1991) 327.audi G. Audi et al., Chinese Physics C 36 (2012) 1287. http://www.nndc.bnl.gov/pris J. I. Prisciandaro et al., Phys. Lett. B 510 (2001) 17.dinca D. C. Dinca et al., Phys. Rev. C 71 (2005) 041302.wang2 W. Ning and L. Min, Chinese Science Bulletin 60 (2015) 1145.grawe H. Grawe and M. Lewitowicz, Nucl. Phys. A 693 (2001) 116.lalazissis G.A. Lalazissis, D. Vretenar, P. Ring, M. Stoitsov, and L.M. Robledo, Phys. Rev. C 60 (1999) 014310.trache L. Trache et al., Phys. Rev. C 54 (1996) 2361.broda R. Broda et al., Phys. Rev. Lett. 74 (1995) 868.bentley I. Bentley, Y. Colón Rodríguez, S. Cunningham, and A. Aprahamian, Phys. Rev. C 93 (2016) 044337.wang M. Wang et al., Chin. Phys. C 36 (2012) 1603.goriely1 S. Goriely et al., Phys. Rev. Lett. 102 (2009) 152503, http://www-astro.ulb.ac.be/bruslib/.
http://arxiv.org/abs/1704.08421v1
{ "authors": [ "G. Saxena", "M. Kaushik" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170427033444", "title": "Behaviour of pf shell under RMF+BCS Description" }
http://arxiv.org/abs/1704.08674v2
{ "authors": [ "Spyridon Talaganis" ], "categories": [ "hep-th", "gr-qc" ], "primary_category": "hep-th", "published": "20170427173800", "title": "Towards UV Finiteness of Infinite Derivative Theories of Gravity and Field Theories" }
Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Avenida Blanco Encalada 2008, Santiago, Chile

A lattice model for active matter is studied numerically, showing that it displays wetting transitions between three distinctive phases when in contact with an impenetrable wall. The particles in the model move persistently, tumbling with a small rate α, and interact via exclusion volume only. When increasing the tumbling rate α, the system transits from total wetting to partial wetting and unwetting phases. In the first phase, a wetting film covers the wall, with increasing heights when α is reduced. The second phase is characterized by wetting droplets on the wall with a periodic spacing between them. Finally, the wall dries, with few particles in contact with it. These phases present nonequilibrium transitions. The first transition, from partial to total wetting, is continuous, and the fraction of dry sites vanishes continuously when decreasing the tumbling rate α. For the second transition, from partial wetting to dry, the mean droplet distance diverges logarithmically when approaching the critical tumbling rate, with saturation due to finite-size effects.

87.10.Mn,05.50.+q,87.17.Jj

Wetting Transitions Displayed by Persistent Active Particles
Rodrigo Soto
December 30, 2023

Introduction. Active matter, composed of elements able to transform stored energy in a reservoir into motion at the individual scale, has gained attention as a model for nonequilibrium systems where the driving is internal. It is a conceptual framework that aims to describe the collective and individual dynamics of living matter (microtubules, bacteria and other microorganisms, fish, birds, or even larger animals) as well as nonliving objects (active colloids, vibrated grains, etc.). One of the properties that characterize active matter is that particles present self-propulsion and persistence; this gives rise to several phenomena, such as phase separation, large diffusivities, giant density fluctuations, etc. (see Refs. <cit.> for recent reviews and references therein). Different models have been introduced to account for persistence, where active Brownian and run-and-tumble particles are the most used for systems lacking inertia, both in and off lattice <cit.>. Depending on the model, persistence is characterized by either the correlation length, the rotational diffusivity, or the tumbling rate. Self-propelled particles display important interactions with solid surfaces, which normally manifest in particle accumulation at the walls. At low Reynolds numbers, there is a hydrodynamic attraction for pusher swimmers, which leads to accumulation <cit.>. Also, the geometrical alignment with surfaces that results from collisions implies that particles remain in their vicinity for long times <cit.>. Finally, concave walls, by their shape, generate longer residence times near surfaces <cit.>, an effect that is also present for E. coli at convex walls provided that the curvature radius is large enough <cit.>. In all these cases, the accumulation at surfaces is a result of individual interactions. Collective effects have also been considered in Refs. <cit.>. In this Letter, we are interested in the collective interaction of self-propelled particles with solid surfaces.
In particular, we are interested in the development of wetting transitions. Molecular systems at equilibrium, when interacting with solid surfaces, can present several wetting states, depending on the temperature and surface tension <cit.>. In particular, a transition between phases of partial wetting, with finite contact angle, and total wetting, where the contact angle vanishes strictly, takes place close to the bulk critical point <cit.>. This equilibrium wetting transition presents universal critical properties <cit.>. To elucidate whether a collection of active particles develops wetting transitions similar to those at equilibrium, we study the simple persistent excluding particles (PEP) model <cit.>, where particles interact by excluded volume only. In the PEP model, particles move in a regular lattice at discrete time steps with directions that change randomly (tumble) at a small rate, and excluded volume is achieved by imposing a maximum occupation per site. Persistence is characterized by the tumbling rate α, where the correlation length scales as α^-1. Depending on the tumble rate and the maximum occupancy per site, particles can either aggregate in few big clusters after a coarsening process, develop many small clusters, or form a homogeneous gas, with nonequilibrium transitions between these phases <cit.>. Here, we show that in the presence of an impenetrable wall, three phases can develop. At low tumbling rates, the particles can present total wetting, where a thick film of particles forms. Increasing the tumbling rate, a transition to partial wetting takes place. Here, droplets arrange almost periodically on the wall, increasing their distance when increasing the tumbling rate. Finally, at higher tumbling rates, the wall dewets. These two transitions present critical behavior, and critical exponents for appropriate order parameters are obtained.

Model. We consider a two-dimensional realization of the PEP model. A total of N particles move on a regular lattice composed of L_x× L_y sites. Each particle has a state variable 𝐬 ={±𝐱̂, ±𝐲̂}, which indicates the direction to which it points. Time evolves at discrete steps. At each step, particles attempt to jump one site in the direction pointed by 𝐬. If the occupation of the destination site is smaller than the maximum occupancy per site n_max, the jump takes place; otherwise, the particle remains at the original position. n_max models particle overlaps: if n_max=1 the interaction between particles is steric, while larger values allow for increasing degrees of overlap. This overlap can account for the deformation of microorganisms or their organization in quasi-two-dimensional layers, which allows larger occupancies in the two-dimensional projected plane. The position update of the particles is asynchronous to avoid two particles attempting to jump to the same site simultaneously. That is, at each time step, the N particles are sorted randomly and, sequentially, each particle jumps to the site pointed by 𝐬 depending on its occupation. Finally, at the end of the update phase, tumbles are performed: for each particle, with probability α <cit.>, the director 𝐬 is redrawn at random from the four possibilities, independently of the original value (a minimal sketch of one such time step is given below). This model (model 1) is the two-dimensional version of the one presented in Refs.
<cit.>. In analogy to hard-core systems, particles only interact via excluded volume effects and there are no energetic penalizations, no explicit partial motility reduction as in Motility-Induced Phase Separation <cit.>, and no explicit alignment as in Ref. <cit.>. To study the wetting phenomenon, the system is bounded by walls in the x direction, while being periodic in y. Particles pointing to a wall cannot jump to it and remain immobile until tumble events take place. For low tumbling rates, there is a high probability that new particles arrive, blocking the first ones. The wall therefore acts as a nucleation site for clusters and is naturally wetted by particles. The wetting film grows when decreasing α, as has been observed under dilute conditions <cit.>. Here, we investigate whether this tendency is smooth or, rather, whether transitions take place.

The control parameters are the particle density ϕ=N/(L_x L_y), the tumble rate α, and the maximum occupation number n_max. Time is measured in time steps and lengths in lattice sizes; therefore particles have velocities 0, ±𝐱̂, ±𝐲̂. The model has an intrinsic time scale corresponding to the time it takes a particle to cross one site, t_cross=1, and also the mean flight time, which depends on density. Tumbling introduces a new time scale t_tumble=1/α, which must be compared with the previous two, rendering the phase diagram highly nontrivial.

Wetting transitions. Simulations for models 1, 2, and 3 are performed in the range of parameters n_max=1,2,3,4 and 0.025×10^-2≤α≤ 3.02×10^-2, while the number of sites and the global density have been fixed to L_x=6000, L_y=1000, and ϕ=0.01. For model 4, 0.025×10^-2≤α≤ 6.23×10^-2, L_x=3000, L_y=500, and ϕ=0.04. Models 2, 3, and 4 are defined below. We have verified by simulations that, in the absence of walls, the system remains in the gas phase for these parameters. This choice for the system size has the following rationale. First, L_y must be large enough to detect any spatial structuration of the wetting film that forms on the walls. Second, if L_x is small, particles that evaporate after a tumble from a wall could arrive at the other wall ballistically, creating artificial correlations between the two walls. To ensure that evaporated particles reach the diffusive regime, one must take L_x≫α^-1. Initially, particles are placed randomly, with random orientations. Finally, we consider total simulation times equal to T=2× 10^6 time steps, which are long enough to reach a steady state and to achieve good statistical sampling. The wetting films are characterized as follows. For each vertical position i, the film column is defined as the contiguous nonempty sites beside the wall (see Fig. <ref>). From this, we measure the instantaneous thickness profile Δ(i) and the total number of particles M(i) in the column. By observing the instantaneous profiles, three distinct phases are recognized [as a reference, Fig. <ref> displays the instantaneous mass profiles M(i) for n_max=3 and various values of α]. For low tumbling rates, the thickness is uniformly greater than 1, corresponding to a phase of total wetting. Increasing α, the average values of M and Δ decrease. For intermediate α, droplets appear (roughly arranged periodically), where regions of vanishing and finite values of M alternate, corresponding to partial wetting. Note that, contrary to equilibrium wetting transitions, the droplets are 1 or 2 sites thick and no visible spatial structuration is observed in Δ.
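The following minimal Python sketch shows one time step of model 1 as described above: asynchronous moves limited by the maximum occupancy n_max, followed by tumbles with probability α. It is an assumed reimplementation written for clarity, not the code used to produce the reported results.

```python
import numpy as np

rng = np.random.default_rng()
DIRS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # +x, -x, +y, -y

def step(pos, s, occ, Lx, Ly, n_max, alpha):
    """pos: (N,2) site coordinates; s: (N,) direction index; occ: (Lx,Ly)."""
    for i in rng.permutation(len(pos)):              # asynchronous update
        x, y = pos[i] + DIRS[s[i]]
        y %= Ly                                      # periodic in y
        if 0 <= x < Lx and occ[x, y] < n_max:        # walls block motion in x
            occ[tuple(pos[i])] -= 1
            pos[i] = (x, y)
            occ[x, y] += 1
    tumble = rng.random(len(pos)) < alpha            # tumble with probability alpha
    s[tumble] = rng.integers(0, 4, tumble.sum())     # redraw the director
    return pos, s, occ
```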
Given such thin droplets, it is not possible to define and compute a contact angle, nor is it possible to describe them using classical growth models such as the Kardar, Parisi, and Zhang equation <cit.>. Finally, for larger values of α, the thickness is small throughout the wall, which is interpreted as a dewetted phase. The emergence of droplets is a consequence of the particles' motility and persistence. When a particle hits the wall, it remains there until a tumble takes place. If, after a tumble, it moves parallel to the wall, it will block when it hits another particle, acting then as a nucleation site for droplets. The balance between evaporation and the incoming flux of particles moving along the wall stabilizes the size of droplets. Though the bulk density is small, the density in contact with the wall is large, reaching the maximum value n_max. Varying ϕ changes the transition tumbling rates, but the phenomenology and the morphology of the wetting film and the droplets remain, as is shown for example in model 4, where ϕ=0.04. To analyze the first transition, from total to partial wetting, we compute the instantaneous fraction of sites with vanishing thickness, Z=|{i : Δ_i=0}|/L_y. Its temporal average ⟨ Z⟩ is an order parameter that, for n_max≥ 2, vanishes when total wetting is achieved and presents a continuous transition [see Fig. <ref>(left)]. No transition takes place for n_max=1. Close to the critical point it vanishes as ⟨ Z⟩ = Z_0 (α-α_A)^1.0± 0.1, α ≥α_A, where the critical values are indicated in Table <ref>, while Z_0 are nonuniversal prefactors. The exponent 1.0 is a best fit to power laws, using the protocol described in Ref. <cit.>, which allows for the extraction of the critical tumbling rates. A detailed analysis of the instantaneous values of ⟨Δ⟩ and ⟨ M⟩ shows that the second transition, from partial wetting to dewetting, presents bistability between these two phases, which alternate in the course of time. Therefore, for each value of n_max and α, we measure the order parameters every 1000 time steps and build the histograms of the different values they can take. As M and Δ take integer values, no binning is necessary. These probability distributions are shown in Fig. <ref>. For n_max=3,4, the distributions of M show that the system presents multistability, with higher probabilities for an integer number of saturated layers (M being a multiple of n_max). An analysis of the probability distributions shows that, indeed, there are different transitions, occurring when the probabilities of having a certain number of layers vanish (see Fig. <ref>, inset). For n_max=1, no transition takes place, but there is instead a continuous decrease of the different probabilities when increasing α; the case n_max=2 is less clear. This behavior is similar to the clustering transitions in one dimension, where clear transitions are also observed for n_max≥ 3; for n_max=1 no transition takes place, and n_max=2 corresponds to a crossover <cit.>. We analyze in detail the case when the last layer (M=n_max) disappears, as it corresponds to the transition to dewetting; the extraction of Δ(i), M(i), and Z from a simulation snapshot is illustrated in the sketch below.
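A minimal sketch of this extraction, assumed for illustration rather than the original analysis code, is:

```python
import numpy as np

def wall_columns(occ):
    """Thickness Delta(i) and mass M(i) of the film at the x = 0 wall.

    For each row i, the film column is the contiguous run of nonempty
    sites starting at the wall."""
    Lx, Ly = occ.shape
    Delta = np.zeros(Ly, dtype=int)
    M = np.zeros(Ly, dtype=int)
    for i in range(Ly):
        x = 0
        while x < Lx and occ[x, i] > 0:
            M[i] += occ[x, i]
            x += 1
        Delta[i] = x
    return Delta, M

def dry_fraction(Delta):
    """Order parameter Z = |{i : Delta_i = 0}| / L_y."""
    return np.mean(Delta == 0)
```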
Figure <ref>(middle) shows that P≡ P(M=n_max) vanishes at a critical tumbling rate α_B, following a power law P= p_0+p_1(α_B-α)^2.0±0.2, α <α_B, where the exponent 2 is a best fit, p_0 is a small offset probably due to finite-size effects, p_1 is a nonuniversal value, and the critical values are given in Table <ref>. To analyze the spatial structuration in droplets at the partial wetting to dewetting transition, we consider the mass spatial correlations C(i) ≡⟨ M(i+j) M(j)⟩ -⟨ M⟩^2. The average is done in the steady state over j and time. Figure <ref> displays the correlation function together with instantaneous values of the mass profile M(i), for n_max=3 and various values of α. It is evident that a periodic structure is present for low values of α. These correspond to wetting droplets condensed on the walls. A similar behavior is observed for n_max=4 (not shown). When increasing α, the associated wavelength increases until it saturates at the system size L_y. The position l of the first maximum of the correlation functions, which measures the typical distance between droplets, is analyzed as a function of α for fixed n_max, showing the existence of a critical point α_C. For α<α_C, the critical law l= l_0- l_1 ln(α_C-α) is obtained, as shown in Fig. <ref>(right). Here l_0 and l_1 are nonuniversal prefactors. The critical values in Table <ref> are compatible with the corresponding values of α_B, extracted from P, which characterize the same transition. The difference between the two critical tumbling rates is due to finite-size effects, a usual effect that appears when analyzing critical exponents.

Universality. To study the universality of the previous results, we consider three variations of model 1. The first variation (model 2) consists of jumps taking place probabilistically as in Ref. <cit.>, where the jump probability depends on the destination site occupation and n_max. In the second variation (model 3), when a tumble takes place, the new director 𝐬 is sorted at random between the two directions perpendicular to the pretumble director, introducing a short time memory in the dynamics. Finally, the third variation (model 4) consists of using a triangular lattice, with a major axis parallel to the wall. In these three cases, we perform the same analysis on the order parameters; we find different values for the nonuniversal coefficients and critical tumble rates, but the exponents are equal to those of model 1 (see Fig. <ref> and Table <ref>).

Conclusions and discussion. We have numerically studied the steady states of four models for active matter in the presence of an impenetrable wall. Wetting phases are found, analogous to those present in equilibrium molecular fluids. Namely, the system can present total wetting, partial wetting, and dewetting phases. The control parameter is the tumbling rate, which plays a role analogous to temperature, allowing particles to evaporate from the wall. Between these phases, nonequilibrium phase transitions with power-law scaling for the order parameters are obtained. We verified that the critical exponents are universal within the range of lattice models. A comparison with other models for active matter, for example off-lattice models, will indicate whether the obtained critical laws and the structure of the phase diagram are universal. The possible relation between the clustering transitions in a lower-dimensional space (the wall) reported in Ref. <cit.> and the wetting transitions deserves further study.
However, there is a substantial difference, in that the latter presents two consecutive transitions absent in the case of clustering. Finally, different wetting states can have important effects on the early stages of biofilm formation or other collective aggregation at surfaces, as these states differ appreciably in the particle disposition at the surface, creating periodic droplets or a uniform film.

Acknowledgments. This research was supported by Fondecyt Grants No. 1151029 (N.S.) and No. 1140778 (R.S.).

Vicsek2012 T. Vicsek and A. Zafeiris,Collective motion,Phys. Rep. 517, 71 (2012).Marchetti2013 M.C. Marchetti, J.F. Joanny, S. Ramaswamy, T.B. Liverpool, J. Prost, M. Rao, and R. Aditi Simha,Hydrodynamics of soft active matter,Rev. Mod. Phys. 85, 1143 (2013).bialke2015 J. Bialké, T. Speck, and H. Löwen, Active colloidal suspensions: Clustering and phase behavior,J. Non-Cryst. Solids 407, 367 (2015).Berg H.C. Berg, E. coli in Motion (Springer, New York, 2004).Romanczuk2012 P. Romanczuk, M. Bär, W. Ebeling, B. Lindner, and L. Schimansky-Geier, Active Brownian particles: From individual to collective stochastic dynamics,Eur. Phys. J. Special Topics 202, 1 (2012). solon2015 A.P. Solon, M.E. Cates, and J. Tailleur,Active Brownian particles and run-and-tumble particles: A comparative study,Eur. Phys. J. Special Topics 224, 1231 (2015). Berke2008 A. P. Berke, L. Turner, H. C. Berg, and E. Lauga,Hydrodynamic Attraction of Swimming Microorganisms by Surfaces,Phys. Rev. Lett. 101, 038102 (2008).Elgeti2009 J. Elgeti and G. Gompper, Self-propelled rods near surfaces, Europhys. Lett. 85, 38002 (2009).Dunstan2012J. Dunstan, G. Miño, E. Clement, and R. Soto,A two-sphere model for bacteria swimming near solid surfaces,Phys. Fluids 24, 011901 (2012).Elgeti2013 J. Elgeti and G. Gompper, Wall accumulation of self-propelled spheres, Europhys. Lett. 101, 48003 (2013).Molaei2014 M. Molaei, M. Barry, R. Stocker, and J. Sheng, Failed Escape: Solid Surfaces Prevent Tumbling of Escherichia coli, Phys. Rev. Lett. 113, 068103 (2014).Mathijssen2016 A.J.T.M. Mathijssen, A. Doostmohammadi, J.M. Yeomans, and T.N. Shendruk, Hotspots of boundary accumulation: Dynamics and statistics of micro-swimmers in flowing films, J. R. Soc. Interface 13, 20150936 (2016). Li2009G. Li and J. X. Tang,Accumulation of Microswimmers Near a Surface Mediated by Collision and Rotational Brownian Motion,Phys. Rev. Lett. 103, 078101 (2009).Elgeti2015 J. Elgeti and G. Gompper,Run-and-tumble dynamics of self-propelled particles in confinement, Europhys. Lett. 109, 58003 (2015).Ezhilan B. Ezhilan, R. Alonso-Matilla, and D. Saintillan,On the distribution and swim pressure of run-and-tumble particles in confinement,J. Fluid Mech. 781, R4 (2015). Vladescu2014 I. D. Vladescu, E. J. Marsden, J. Schwarz-Linek, V. A. Martinez, J. Arlt, A. N. Morozov, D. Marenduzzo, M. E. Cates, and W. C. K. Poon,Filling an Emulsion Drop with Motile Bacteria,Phys. Rev. Lett. 113, 268101 (2014).Cheng2015 A. Cheng Hou Tsang and E. Kanso,Circularly confined microswimmers exhibit multiple global patterns,Phys. Rev. E 91, 043008 (2015).Wysocki2015 A. Wysocki, J. Elgeti, and G. Gompper, Giant adsorption of microswimmers: Duality of shape asymmetry and wall curvature, Phys. Rev. E 91, 050302 (2015).Sipos2015 O. Sipos, K. Nagy, R. Di Leonardo, and P. Galajda,Hydrodynamic Trapping of Swimming Bacteria by Convex Walls,Phys. Rev. Lett. 114, 258104 (2015). Wensink2008H.H. Wensink and H. Löwen, Aggregation of self-propelled colloidal rods near confining walls, Phys. Rev.
E 78, 031409 (2008).Costanzo2012 A. Costanzo, R. Di Leonardo, G. Ruocco, and L Angelani, Transport of self-propelling bacteria in micro-channel flow,J. Phys. Condens. Matter 24, 065101 (2012).Figeroa2015 N. Figueroa-Morales, G.L Miño, A. Rivera, R. Caballero, E. Clément, E. Altshuler, and A. Lindner, Living on the edge: Transfer and traffic of E. coli in a confined flow, Soft Matter 11, 6284 (2015). deGennesP.G. de Gennes,Wetting: Statics and dynamics,Rev. Mod. Phys. 57, 827 (1985).CahnJ.W. Cahn,Critical point wetting,J. Chem. Phys. 66, 3667 (1977).MoldoverM.R. Moldover and J.W. Cahn,An interface phase transition: Complete to partial wetting,Science 207, 1073 (1980).NakanishiH. Nakanishi and M.E. Fisher,Multicriticality of Wetting, Prewetting, and Surface Transitions,Phys. Rev.Lett. 49, 1565 (1982). RG2014 R. Soto and R. Golestanian,Run-and-tumble in a crowded environment: Persistent exclusion process for swimmers,Phys. Rev. E 89, 012706 (2014).SS2016 N. Sepúlveda and R. Soto,Coarsening and clustering in run-and-tumble dynamics with short-range exclusion,Phys. Rev. E 94, 022603 (2016).commentalpha Strictly speaking, for a tumbling rate α,the tumbling probability at each time step is (1-e^-α). For small values, as we consider in this Letter, it is well approximated by α. cates2015 M.E. Cates and J. Tailleur,Motility-induced phase separation,Annu. Rev. Condens. Matter Phys. 6, 219 (2015).peruani2011 F. Peruani, T. Klauss, A. Deutsch, and A. Voss-Boehme, Traffic Jams, Gliders, and Bands in the Quest for Collective Motion of Self-Propelled Particles, Phys. Rev. Lett. 106, 128101 (2011).KPZ M. Kardar, G. Parisi, and Y.-C. Zhang,Dynamic Scaling of Growing Interfaces,Phys. Rev. Lett. 56, 889 (1986). Castillo G. Castillo, N. Mujica, and R. Soto,Fluctuations and Criticality of a Granular Solid-Liquid-Like Phase Transition,Phys. Rev. Lett. 109, 095701 (2012).
http://arxiv.org/abs/1704.08941v2
{ "authors": [ "Néstor Sepúlveda", "Rodrigo Soto" ], "categories": [ "cond-mat.soft", "cond-mat.stat-mech" ], "primary_category": "cond-mat.soft", "published": "20170426150155", "title": "Wetting Transitions Displayed by Persistent Active Particles" }
^1CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China ^2School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China

The early reionization (ERE) is supposed to be a physical process which happens after recombination, but before the instantaneous reionization caused by the first generation of stars. We investigate the effect of the ERE on the temperature and polarization power spectra of the cosmic microwave background (CMB), and adopt principal component analysis (PCA) to reconstruct, in a model-independent way, the ionization history during the ERE. In addition, we also discuss how the ERE affects the cosmological parameter estimates, and find that the ERE does not impose any significant influence on the tensor-to-scalar ratio r or the neutrino mass at the sensitivities of current experiments. Better CMB polarization data can be used to give a tighter constraint on the ERE and might be important for constraining cosmological parameters more precisely in the future.

Effect of the Early Reionization on the Cosmic Microwave Background and Cosmological Parameter Estimates
Qing-Guo Huang[[email protected]] and Ke Wang [[email protected]]
December 30, 2023

§ INTRODUCTION

According to the standard model of cosmology, the universe was almost full of neutral hydrogen and helium after the epoch of recombination. However, many direct measurements of the ionization state of the universe, including the Gunn-Peterson effect in QSOs <cit.> and Lyman alpha emission in galaxies <cit.>, indicate that the intergalactic gas has been almost fully reionized by z∼6 and that the universe was no longer neutral at z<10. But we don't know when this transition, the so-called cosmic reionization, took place. In the literature there are many different candidates for the sources of cosmic reionization, with different onsets of the reionization process. Although star-forming galaxies at 6≲ z<10 are taken as the main agents of reionization <cit.>, the Thomson optical depth to electron scattering τ_re derived from the star-forming rate ρ_SFR is smaller than that constrained by Planck <cit.>. On the other hand, much higher redshift ionization sources are still allowed. See, for example, <cit.>. There are several possible processes that might have modified the ionization state of the universe at high redshifts: the decay or annihilation of dark matter (DM) converts DM mass into energy, a fraction of which would be deposited into the intergalactic medium (IGM) and then heat, ionize or excite the neutral atoms <cit.>; primordial black holes (PBHs) immersed in a baryon gas will accrete gas and DM, producing radiation that heats, ionizes or excites the IGM <cit.>. In fact, a well-understood ionization history of the universe can not only help us, in turn, to probe the microscopic properties of DM and the abundance of massive PBHs, but is also important for determining the cosmological neutrino mass <cit.>, detecting CMB B-modes from inflationary gravitational waves <cit.>, exploring the large scale anomalies in the CMB <cit.> and testing the single-field slow-roll consistency relation <cit.>. In this paper we investigate the early reionization (ERE) epoch between the standard instantaneous reionization caused by the first generation of stars and recombination.
Here we don't specify what physical mechanism causes it, but turn to a model-independent method, called principal component analysis (PCA), to reconstruct the ionization history during the ERE. Although PCA has been applied to reionization in many works, for instance <cit.>, these only explored the low-redshift region (z<30). Here we focus on the ERE, which may happen in the region 20≲ z<910, and see how the ERE affects the cosmological parameter estimates. This paper is organized as follows. In Sec. <ref>, we sketch out the effects of the ERE on the CMB temperature and polarization power spectra. In Sec. <ref>, we utilize PCA to model-independently reconstruct the ionization history during the ERE from Planck 2015 data. In Sec. <ref>, we investigate how the ERE affects the estimates of the tensor-to-scalar ratio and the neutrino mass. Summary and discussion are given in Sec. <ref>.

§ EFFECTS OF THE EARLY REIONIZATION ON THE CMB POWER SPECTRA

The recombinations of Helium III to Helium II, of Helium II to Helium I, and of Hydrogen last from z=10^4 to late times, and the ionization fraction, defined by x_e(z) ≡ n_e/n_H, decreases from 1.16 to around 10^-4, where n_e is the number density of free electrons and n_H is the total number density of Hydrogen nuclei. When the first generation of early star-forming galaxies were formed, the reionization of neutral Hydrogen and the first reionization of neutral Helium occurred in the intergalactic medium, and the ionization fraction is usually assumed to follow a tanh-like increase, namely the instantaneous reionization,

x_e(z) = x_e,rec(z),   for z ≥ z_beg ;
x_e(z) = [(f - x_e(z_beg))/2] {1 + tanh[(y(z_re) - y(z))/Δ_y]} + x_e(z_beg),   for z < z_beg ,

where f=1+n_He/n_H=1.08 denotes the ionization fraction of a fully ionized universe, z_re is the redshift at which the universe is half reionized, z_beg=z_re+8×Δ_z, Δ_y=1.5√(1+z_re) Δ_z with Δ_z=0.5, y(z)=(1+z)^3/2 as used by <cit.>, and x_e,rec(z) is the ionization fraction of the recombination history given by <cit.>. The ERE is supposed to happen before the instantaneous reionization, and then the ionization fraction for z ≥ z_beg should be modified to x_e,rec(z)+Δ x_e(z), where Δ x_e(z) encodes the information about the ERE. Therefore, after taking the ERE into account, the ionization fraction takes the form

x_e(z) = x_e,rec(z)+Δ x_e(z),   for z ≥ z_beg ;
x_e(z) = [(f - x_e(z_beg))/2] {1 + tanh[(y(z_re) - y(z))/Δ_y]} + x_e(z_beg),   for z < z_beg .

The optical depth for Thomson scattering due to the late-time instantaneous reionization is defined by τ_re ≡ ∫_0^z_beg [x_e(z)-x_e,rec(z)] n_H(z) σ_T dz/[H(1+z)], and the optical depth contributed by the ERE is Δτ ≡ ∫_z_beg^z_* Δ x_e(z) n_H(z) σ_T dz/[H(1+z)], where σ_T is the Thomson cross section and z_* is the redshift of recombination (a minimal numerical sketch of these optical depth integrals is given below).
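The integrals above are straightforward to evaluate numerically. The sketch below is an assumed illustration, not the pipeline used in this work: the cosmological numbers (H_0, Ω_m, Ω_r, n_H today) and the choice z_beg = 12 are placeholder assumptions, and the amplitude A of the tanh-shaped Δ x_e is simply rescaled to reach a target Δτ, since Δτ is linear in A.

```python
import numpy as np
from scipy.integrate import quad

c, sigma_T = 2.998e8, 6.652e-29            # speed of light [m/s], Thomson cross section [m^2]
H0 = 2.20e-18                              # ~68 km/s/Mpc in [1/s] (assumed)
Om, Or = 0.31, 9.0e-5                      # matter and radiation fractions (assumed)
nH0 = 0.19                                 # hydrogen number density today [m^-3] (assumed)

def H(z):
    return H0 * np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + (1 - Om - Or))

def delta_tau(A, z_ere, z_beg=12.0, z_star=1089.0, dz=10.0):
    """Delta tau = int A/2 [tanh((z_ere-z)/dz)+1] n_H sigma_T c dz / [H(1+z)]."""
    dxe = lambda z: 0.5 * A * (np.tanh((z_ere - z) / dz) + 1.0)
    f = lambda z: dxe(z) * nH0*(1+z)**3 * sigma_T * c / (H(z) * (1+z))
    return quad(f, z_beg, z_star, limit=200)[0]

# rescale the amplitude so that Delta tau hits a chosen target value
A = 0.05 / delta_tau(1.0, z_ere=200.0)
print(A, delta_tau(A, 200.0))              # amplitude giving Delta tau = 0.05
```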
The effects of the ERE and reionization on the CMB angular power spectra are encoded in the photon transfer functions Δ_Tl^(S)(k) and Δ_El^(S)(k), which are obtained by integrating their corresponding source functions S_T,E^(S)(k,η) and the spherical Bessel function j_l[k(η_0-η)] along the line of sight <cit.>:

Δ_Tl^(S)(k) = ∫_0^η_0 dη S_T^(S)(k,η) j_l[k(η_0-η)],
S_T^(S)(k,η) = g (Δ_T0^(S) + 2α̇ + v̇_b/k + Π/4 + 3Π̈/(4k^2)) + e^-τ (κ̇ + α̈) + ġ (α + v_b/k + 3Π̇/(4k^2)) + 3g̈Π/(4k^2),
Δ_El^(S)(k) = √((l+2)!/(l-2)!) ∫_0^η_0 dη S_E^(S)(k,η) j_l[k(η_0-η)],
S_E^(S)(k,η) = 3gΠ/[4(η_0-η)^2 k^2],
Π = Δ_T2^(S) + Δ_P2^(S) + Δ_P0^(S),
g = -τ̇ e^-τ,
τ(η) = ∫_η^η_0 dη' n_e σ_T a,
α = (ḣ + 6κ̇)/(2k^2),

where h and κ are the scalar perturbations of the metric in the synchronous gauge, v_b is the baryon velocity, η_0 is the conformal time at present, and the dots denote derivatives with respect to the conformal time η. Since the integral ∫_0^η_0 dη g(η)=1, the visibility function g can be taken as the probability density that a photon last scattered at η. For the temperature power spectrum, the source function S_T^(S)(k,η) consists of three parts: the anisotropy terms with a factor of g, the integrated Sachs-Wolfe (ISW) term e^-τ(κ̇+α̈), and the anisotropy terms with derivatives of g. Since x_e evolves smoothly, the anisotropy terms with derivatives of g should not be dominant. The contribution of the ISW effect to the temperature power spectrum is not affected by the modification of the ionization history, because the early ISW effect takes place close to the epoch of recombination and the late ISW effect occurs at low redshifts where our universe has been fully reionized. Therefore, the effect of the reionization, including the late instantaneous reionization and the ERE, on the temperature power spectrum mainly comes from the anisotropy terms with a factor of g. At recombination, only the monopole contributes to the temperature power spectrum on large scales, but the monopole, dipole and quadrupole all contribute to the temperature power spectrum on small scales. After recombination, even though higher multipoles gradually enter the horizon on the intermediate and large scales, their contributions are negligibly small. However, a photon might have been scattered by free electrons due to the reionization. Thus, the temperature power spectrum observed today is suppressed by ∫_0^η_* dη g(η) ∼ e^-(τ_re+Δτ). Since the polarization power spectrum on small scales was also formed at recombination, it is suppressed by ∼ e^-(τ_re+Δτ)/(η_0-η_*)^2. However, the quadrupoles scattered by free electrons through Thomson scattering can induce polarization on the intermediate and large scales as they gradually enter the horizon after recombination. If the ERE occurs at around η_ere, the polarization power spectrum on the intermediate scales is enhanced by ∼ (e^-τ_re - e^-(τ_re+Δτ))/(η_0-η_ere)^2 ∼ Δτ/(η_0-η_ere)^2. Similarly, on the largest scales (ℓ≲ 10), the late instantaneous reionization enhances the polarization power spectrum by ∼ (1-e^-τ_re)/(η_0-η_re)^2 ∼ τ_re/(η_0-η_re)^2. In order to explicitly illustrate the effect of the ERE on the CMB power spectra, we keep τ_re+Δτ=0.089 fixed, and consider three different ionization histories: Δ x_e(z)=0 and z_re=11; z_re=8 and z_ere=200; z_re=8 and z_ere=500, where Δ x_e(z) ∼ (1/2){tanh[(z_ere-z)/10]+1}, which are shown in Fig. <ref>. The CMB power spectra without tensor perturbations for these three ionization histories are shown in the left panel of Fig. <ref>.
Similar to the scalar perturbations, we can also illustrate the contributions from the tensor perturbations to the CMB power spectra in the right panel of Fig. <ref>. First of all, the temperature power spectra and the polarization power spectra at high ℓ are almost identical for the different ionization histories if τ_re+Δτ is kept fixed. This is just what we expect. If the ERE happens, the E-mode polarization power spectra on intermediate scales are enhanced by ∼Δτ/(η_0-η_ere)^2. Since τ_re+Δτ is kept fixed, the enhancements of the polarization power spectra on large scales (ℓ≲ 10) become smaller compared to the case without the ERE. Finally, we need to mention that the ERE does not significantly enhance the CMB B-mode power spectrum on the intermediate scales if Δτ is not too large.

§ A MODEL-INDEPENDENT ANALYSIS OF THE EARLY REIONIZATION In this section we introduce principal component analysis (PCA) and use this model-independent method to reconstruct the early reionization history. We suppose that the ionization fraction due to the ERE takes the form Δ x_e(z) = ∑_{i=1}^N α_i (1/2){tanh[(z_i-z)/Δ z]+1}, where Δ z is the spacing between the {z_i}, and the {α_i} are the coefficients. Here we assume that the ERE may happen in the range of 10<z<910, and take N=89 and Δ z=10. This implies that there are ninety redshift bins covering this redshift range, with z_1=20 and z_89=900. Adopting Eq. (<ref>), we can compute the effect of nonzero {α_i} on the anisotropy spectrum through ∂ln C_ℓ/∂α_i under a fiducial model whose cosmological parameters are listed in Tab. <ref>. According to the discussion in the former section, we notice that the E-mode polarization power spectrum is sensitive to the ERE, and therefore we focus on the matrix of ∂ln C_ℓ^EE/∂α_i, which is shown in Fig. <ref>. In turn, we should in principle use the E-mode polarization power spectrum to estimate the {α_i}. However, in practice, the large number of free parameters, including the usual CMB parameters and the {α_i}, makes parameter estimation impossible in a likelihood analysis. Fortunately, we can turn to the Fisher matrix of an all-sky polarization experiment for the ERE, F_ij = ∑_ℓ (ℓ+1/2) (∂ln C_ℓ^EE/∂α_i)(∂ln C_ℓ^EE/∂α_j). Diagonalizing the matrix F by an orthogonal matrix S, Λ = S^T F S, the old basis {(1/2)[tanh((z_i-z)/Δ z)+1]} and the new basis {b_μ(z)} can be related through S by b_μ(z) = ∑_i^N (1/2){tanh[(z_i-z)/Δ z]+1} S_iμ, and Δ x_e(z) can be represented in the new basis {b_μ} as Δ x_e(z) = ∑_μ^N_β β_μ b_μ(z). Since the i-th column of S is the eigenvector of F corresponding to the eigenvalue λ_i, we can fix S by ordering the {λ_i} such that λ_1>λ_2>...>λ_N_β. The largest eigenvalues contain the most information about the ERE. The first five and last five new basis functions are illustrated in Fig. <ref>. Now we are ready to reconstruct the ionization history during the ERE. Utilizing the new parameterization of Δ x_e(z) in Eq. (<ref>), we can use the Planck TT,TE,EE+lowP data released in 2015 <cit.> to constrain the coefficients {β_μ} of the basis {b_μ}. Here we consider three cases: N_β=1, denoted the _β_1ΛCDM model; N_β=3, denoted the _β_3ΛCDM model; and N_β=5, denoted the _β_5ΛCDM model. Note that the values of {β_μ} in every model must satisfy the condition 0 ≤ x_e(z) ≲ 1.16. Our results are given in Tab. <ref> and Figs. <ref>, <ref> and <ref>. From these results, there is no evidence for the ERE, and the instantaneous reionization is quite consistent with the data.
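Before quoting the numerical limits, the basis construction described above can be illustrated with a short toy sketch (ours, not the authors' code). The response matrix ∂ln C_ℓ^EE/∂α_i, which in practice comes from a Boltzmann code, is replaced here by random numbers simply to show the linear algebra.

```python
import numpy as np

N, lmax, dz = 89, 2000, 10.0
z = np.linspace(10.0, 910.0, 1801)
z_i = 20.0 + dz * np.arange(N)            # step positions z_1 = 20 ... z_89 = 900
dlnCl = np.random.randn(N, lmax)          # placeholder for dlnC_l^EE / dalpha_i

# Fisher matrix of an all-sky polarization experiment, F_ij = sum_l (l + 1/2) ...
ell = np.arange(1, lmax + 1)
F = np.einsum('l,il,jl->ij', ell + 0.5, dlnCl, dlnCl)

# Diagonalize F (eigh gives F = S Lambda S^T); sort eigenvalues decreasing
lam, S = np.linalg.eigh(F)
idx = np.argsort(lam)[::-1]
lam, S = lam[idx], S[:, idx]

# New basis b_mu(z) = sum_i (1/2)[tanh((z_i - z)/dz) + 1] S_{i mu}
steps = 0.5 * (np.tanh((z_i[:, None] - z[None, :]) / dz) + 1.0)  # shape (N, Nz)
b = steps.T @ S                           # column mu holds b_mu(z) on the z grid
```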
At the 95% confidence level, the limits on Δτ are Δτ<0.007 for the _β_1ΛCDM model, Δτ<0.022 for the _β_3ΛCDM model, and Δτ<0.031 for the _β_5ΛCDM model, respectively. Finally, as an example, the ionization history for the _β_3ΛCDM model is illustrated in Fig. <ref>. From Fig. <ref>, there is still room for the ERE in the range of 10<z≲ 500.

§ EFFECT OF THE EARLY REIONIZATION ON THE COSMOLOGICAL PARAMETER ESTIMATES In this section we investigate how the ERE affects the cosmological parameter estimates. Because the ERE mainly disturbs the CMB polarization power spectra on the intermediate and large scales, we take the ERE into account and explore how it modifies the constraints on the tensor-to-scalar ratio r and the neutrino mass, respectively. Our results are summarized in Tab. <ref>. Primordial gravitational waves can be generated during inflation in the very early universe, and the amplitude of the gravitational-wave power spectrum is parametrized by the so-called tensor-to-scalar ratio r. The primordial gravitational waves contribute to the CMB B-modes mainly on the intermediate and large scales. In the ΛCDM+r model with instantaneous reionization at low redshift and the pivot scale k_p=0.01 Mpc^-1, the constraint on r is r_0.01<0.071 at the 95% confidence level (CL) by combining Planck TT,TE,EE+lowP+lensing, BICEP2 & Keck Array <cit.> and BAO data including 6dFGS <cit.>, MGS <cit.>, and LOWZ and CMASS of BOSS DR12 <cit.>. Furthermore, we consider an extended cosmological model, namely the _β_3ΛCDM+r model, and see how the ERE affects the constraint on the tensor-to-scalar ratio r. The result is r_0.01<0.074 at 95% CL in the _β_3ΛCDM+r model. The contour plots are shown in Fig. <ref>. We find that the constraints on r and n_s in the model with the ERE do not significantly change compared to those in the model with instantaneous reionization. The main signature of massive neutrinos in the CMB comes about via the early ISW effect, and it is worth considering how the ERE affects the constraint on the neutrino mass. Again we constrain the neutrino mass in the instantaneous reionization model and find ∑ m_ν<0.140 eV at 95% CL by combining the Planck TT,TE,EE+lowP and BAO datasets. The constraint becomes ∑ m_ν<0.139 eV at 95% CL in the _β_3ΛCDM+∑ m_ν model. See the contour plots in Fig. <ref> and the constraints on the free parameters in Tab. <ref>. We see that the constraint on the neutrino mass becomes just slightly tighter in the _β_3ΛCDM+∑ m_ν model than in the instantaneous reionization model.

§ SUMMARY AND DISCUSSION Even though instantaneous reionization at redshifts less than ten is consistent with the data, the possibility of a reionization which occurs at higher redshifts but after recombination, due for example to the accretion of gas onto primordial black holes and/or the annihilation of dark matter, is still allowed. In this paper we find that the ERE mainly disturbs the CMB polarization power spectra on the intermediate and large scales if the total optical depth is kept fixed. Adopting the Planck polarization data, we model-independently reconstruct the ionization history during the ERE, and find that an optical depth of order 10^-2 due to the ERE is still allowed. In addition, we also explore how the ERE affects the cosmological parameter estimates. Because both the tensor perturbations and the neutrino mass disturb the CMB power spectra on the intermediate and large scales, we take the ERE into account and constrain the tensor-to-scalar ratio and the neutrino mass by adopting the currently available cosmological data.
We find that the ERE does not significantly change the constraints on cosmological parameters at the sensitivities of current experiments. However, a tighter constraint on the ERE might be important if we want to more precisely constrain the cosmological parameters in the future.

Acknowledgments We acknowledge the use of the HPC Cluster of SKLTP/ITP-CAS. This work is supported by grants from NSFC (grant NO. 11335012, 11575271, 11690021), the Top-Notch Young Talents Program of China, and partly supported by the Key Research Program of Frontier Sciences, CAS.

Fan:2005es X. H. Fan et al., Astron. J. 132, 117 (2006) doi:10.1086/504836 [astro-ph/0512082]. McGreer:2014qwa I. McGreer, A. Mesinger and V. D'Odorico, Mon. Not. Roy. Astron. Soc. 447, no. 1, 499 (2015) doi:10.1093/mnras/stu2449 [arXiv:1411.5375 [astro-ph.CO]]. Schroeder:2012uy J. Schroeder, A. Mesinger and Z. Haiman, Mon. Not. Roy. Astron. Soc. 428, 3058 (2013) doi:10.1093/mnras/sts253 [arXiv:1204.2838 [astro-ph.CO]]. Pentericci:2014nia L. Pentericci et al., Astrophys. J. 793, no. 2, 113 (2014) doi:10.1088/0004-637X/793/2/113 [arXiv:1403.5466 [astro-ph.CO]]. Schenker:2014tda M. A. Schenker, R. S. Ellis, N. P. Konidaris and D. P. Stark, Astrophys. J. 795, no. 1, 20 (2014) doi:10.1088/0004-637X/795/1/20 [arXiv:1404.4632 [astro-ph.CO]]. Tilvi:2014oia V. Tilvi et al., Astrophys. J. 794, no. 1, 5 (2014) doi:10.1088/0004-637X/794/1/5 [arXiv:1405.4869 [astro-ph.CO]]. Robertson:2013bq B. E. Robertson et al., Astrophys. J. 768, 71 (2013) doi:10.1088/0004-637X/768/1/71 [arXiv:1301.1228 [astro-ph.CO]]. Robertson:2015uda B. E. Robertson, R. S. Ellis, S. R. Furlanetto and J. S. Dunlop, Astrophys. J. 802, no. 2, L19 (2015) doi:10.1088/2041-8205/802/2/L19 [arXiv:1502.02024 [astro-ph.CO]]. Ade:2015xua P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A13 (2016) doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]]. Heinrich:2016ojb C. H. Heinrich, V. Miranda and W. Hu, arXiv:1609.04788 [astro-ph.CO]. Miranda:2016trf V. Miranda, A. Lidz, C. H. Heinrich and W. Hu, arXiv:1610.00691 [astro-ph.CO]. Slatyer:2009yq T. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner, Phys. Rev. D 80, 043526 (2009) doi:10.1103/PhysRevD.80.043526 [arXiv:0906.1197 [astro-ph.CO]]. Chluba:2009uv J. Chluba, Mon. Not. Roy. Astron. Soc. 402, 1195 (2010) doi:10.1111/j.1365-2966.2009.15957.x [arXiv:0910.3663 [astro-ph.CO]]. Finkbeiner:2011dx D. P. Finkbeiner, S. Galli, T. Lin and T. R. Slatyer, Phys. Rev. D 85, 043522 (2012) doi:10.1103/PhysRevD.85.043522 [arXiv:1109.6322 [astro-ph.CO]]. Liu:2016cnk H. Liu, T. R. Slatyer and J. Zavala, Phys. Rev. D 94, no. 6, 063507 (2016) doi:10.1103/PhysRevD.94.063507 [arXiv:1604.02457 [astro-ph.CO]]. Oldengott:2016yjc I. M. Oldengott, D. Boriero and D. J. Schwarz, JCAP 1608, no. 08, 054 (2016) doi:10.1088/1475-7516/2016/08/054 [arXiv:1605.03928 [astro-ph.CO]]. Ricotti:2007au M. Ricotti, J. P. Ostriker and K. J. Mack, Astrophys. J. 680, 829 (2008) doi:10.1086/587831 [arXiv:0709.0524 [astro-ph]]. Chen:2016pud L. Chen, Q. G. Huang and K. Wang, JCAP 1612, no. 12, 044 (2016) doi:10.1088/1475-7516/2016/12/044 [arXiv:1608.02174 [astro-ph.CO]]. Ali-Haimoud:2016mbv Y. Ali-Haimoud and M. Kamionkowski, Phys. Rev. D 95, no. 4, 043534 (2017) doi:10.1103/PhysRevD.95.043534 [arXiv:1612.05644 [astro-ph.CO]]. Smith:2006nk K. M. Smith, W. Hu and M. Kaplinghat, Phys. Rev. D 74, 123002 (2006) doi:10.1103/PhysRevD.74.123002 [astro-ph/0607315]. Allison:2015qca R. Allison, P. Caucal, E. Calabrese, J. Dunkley and T. Louis, Phys. Rev. D 92, no. 12, 123535 (2015) doi:10.1103/PhysRevD.92.123535 [arXiv:1509.07471 [astro-ph.CO]]. Kamionkowski:2015yta M. Kamionkowski and E. D.
Kovetz, Ann. Rev. Astron. Astrophys. 54, 227 (2016) doi:10.1146/annurev-astro-081915-023433 [arXiv:1510.06042 [astro-ph.CO]]. Mortonson:2009xk M. J. Mortonson and W. Hu, Phys. Rev. D 80, 027301 (2009) doi:10.1103/PhysRevD.80.027301 [arXiv:0906.3016 [astro-ph.CO]]. Mortonson:2009qv M. J. Mortonson, C. Dvorkin, H. V. Peiris and W. Hu, Phys. Rev. D 79, 103519 (2009) doi:10.1103/PhysRevD.79.103519 [arXiv:0903.4920 [astro-ph.CO]]. Mortonson:2007tb M. J. Mortonson and W. Hu, Phys. Rev. D 77, 043506 (2008) doi:10.1103/PhysRevD.77.043506 [arXiv:0710.4162 [astro-ph]]. Hu:2003gh W. Hu and G. P. Holder, Phys. Rev. D 68, 023001 (2003) doi:10.1103/PhysRevD.68.023001 [astro-ph/0303400]. Dai:2015dwa W. M. Dai, Z. K. Guo and R. G. Cai, Phys. Rev. D 92, no. 12, 123521 (2015) doi:10.1103/PhysRevD.92.123521 [arXiv:1509.01501 [astro-ph.CO]]. Liu:2015gho Y. Liu, H. Li, S. Y. Li, Y. P. Li and X. Zhang, JCAP 1602, no. 02, 046 (2016) doi:10.1088/1475-7516/2016/02/046 [arXiv:1512.07394 [astro-ph.CO]]. Lewis:2008wr A. Lewis, Phys. Rev. D 78, 023002 (2008) doi:10.1103/PhysRevD.78.023002 [arXiv:0804.3865 [astro-ph]]. Seager:1999bc S. Seager, D. D. Sasselov and D. Scott, Astrophys. J. 523, L1 (1999) doi:10.1086/312250 [astro-ph/9909275]. Seljak:1996is U. Seljak and M. Zaldarriaga, Astrophys. J. 469, 437 (1996) doi:10.1086/177793 [astro-ph/9603033]. Zaldarriaga:1996xe M. Zaldarriaga and U. Seljak, Phys. Rev. D 55, 1830 (1997) doi:10.1103/PhysRevD.55.1830 [astro-ph/9609170]. Array:2015xqh P. A. R. Ade et al. [BICEP2 and Keck Array Collaborations], Phys. Rev. Lett. 116, 031302 (2016) doi:10.1103/PhysRevLett.116.031302 [arXiv:1510.09217 [astro-ph.CO]]. Beutler:2011hx F. Beutler et al., Mon. Not. Roy. Astron. Soc. 416, 3017 (2011) doi:10.1111/j.1365-2966.2011.19250.x [arXiv:1106.3366 [astro-ph.CO]]. Ross:2014qpa A. J. Ross, L. Samushia, C. Howlett, W. J. Percival, A. Burden and M. Manera, Mon. Not. Roy. Astron. Soc. 449, no. 1, 835 (2015) doi:10.1093/mnras/stv154 [arXiv:1409.3242 [astro-ph.CO]]. Cuesta:2015mqa A. J. Cuesta et al., Mon. Not. Roy. Astron. Soc. 457, no. 2, 1770 (2016) doi:10.1093/mnras/stw066 [arXiv:1509.06371 [astro-ph.CO]]. Gil-Marin:2015nqa H. Gil-Marín et al., Mon. Not. Roy. Astron. Soc. 460, no. 4, 4210 (2016) doi:10.1093/mnras/stw1264 [arXiv:1509.06373 [astro-ph.CO]].
http://arxiv.org/abs/1704.08495v2
{ "authors": [ "Qing-Guo Huang", "Ke Wang" ], "categories": [ "astro-ph.CO", "gr-qc", "hep-ph", "hep-th" ], "primary_category": "astro-ph.CO", "published": "20170427101636", "title": "Effect of the Early Reionization on the Cosmic Microwave Background and Cosmological Parameter Estimates" }
http://arxiv.org/abs/1704.08697v1
{ "authors": [ "Anthony L. Piro", "Bruno Giacomazzo", "Rosalba Perna" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170427180005", "title": "The Fate of Neutron Star Binary Mergers" }
[email protected] Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Helmholtz-Zentrum Berlin (HZB), Albert-Einstein-Str. 15, 12489 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Physikalisch-Technische Bundesanstalt (PTB),Abbestr. 2-12, 10587 Berlin, Germany Laterally periodic nanostructures were investigated with grazing incidence small angle X-ray scattering (GISAXS) by using the diffraction patterns to reconstruct the surface shape. To model visible light scattering, rigorous calculations of the near and far field by numerically solving Maxwell's equations with a finite-element method are well established. The application of this technique to X-rays is still challenging, due to the discrepancy between incident wavelength and finite-element size. This drawback vanishes for GISAXS due to the small angles of incidence, the conical scattering geometry and the periodicity of the surface structures, which allows a rigorous computation of the diffraction efficiencies with sufficient numerical precision.To develop dimensional metrology tools based on GISAXS, lamellar gratings with line widths down to 55 nm were produced by state-of-the-art e-beam lithography and then etched into silicon. The high surface sensitivity of GISAXS in conjunction with a Maxwell solver allows a detailed reconstruction of the grating line shape also for thick, non-homogeneous substrates.The reconstructed geometrical line shape models are statistically validated by applying a Markov chain Monte Carlo (MCMC) sampling technique which reveals that GISAXS is able to reconstruct critical parameters like the widths of the lines with sub-nm uncertainty. Reconstructing Detailed Line Profiles of Lamellar Gratings from GISAXS Patterns with a Maxwell Solver F. Scholze December 30, 2023 =====================================================================================================§ INTRODUCTIONMeasurements on the length scale of several nm are challenged by the atomic granularity of matter and by structures which cannot easily be described with simple models. The continuously shrinking patterns in the semiconductor industry are at the forefront of technological development regarding the requirements for size reproducibility and regularity. Scanning probe microscopy techniques (e.g. atomic force microscopy (AFM), scanning tunnelling microscopy (STM), scanning near-field optical microscopy (SNOM), scanning electron microscopy (SEM)) are powerful tools for the investigation of nanostructured surfaces and occupy the key positions for metrology tools in industry. However, in particular X-ray scattering is also an established technique in nanoscience. Grazing incidence small angle X-ray scattering (GISAXS) <cit.> also offers comparably fast measurements, in addition to being destruction-free. With incidence angles close to the critical angle α_c of total external reflection, GISAXS is a technique with high surface sensitivity and is perfectly suited for in-situ applications. 
Due to the large photon beam footprint as compared to the nm pattern size, it directly yields statistical information on fluctuations such as structure roughness for a large structured area. However, the long beam footprint prevents probing specific small sample volumes in the illuminated area. A challenge for GISAXS is the characterization of structured surfaces with complex periodic and non-periodic sample layouts (e.g. photomasks). Changes in the design of the sample layout <cit.> allow this problem to be overcome, but this also results in an extreme loss of scattering intensity, which eliminates one of the major advantages of GISAXS for in-situ applications. In contrast to GISAXS, transmission SAXS <cit.> (CDSAXS) is able to probe much smaller sample volumes, limited only by the overall beam dimension. The drawback of the transmission geometry results from the fact that a significant portion of the incident beam may be absorbed in the substrate. As the incidence angle moves away from normal incidence, the beam path length increases, and for typical silicon wafers X-ray photon energies above 13 keV are needed for sufficient transmission <cit.>. A combination of grazing incidence reflection and transmission measurements <cit.> (GTSAXS) at the sample edges is often not practical because typical samples are not structured to the very edge. Therefore, GISAXS is ideally suited for the characterization of nanostructured surfaces on non-homogeneous substrates. Several groups have already performed GISAXS measurements on gratings for co-planar <cit.> and conical scattering geometry <cit.>. The diffraction patterns are well understood. The intensity modulation of the scattered orders is related to the line shape (form factor) of the grating. To reconstruct the line shape parameters, including the line width, the sidewall angle or the line height, the so-called inverse problem has to be solved. The corresponding calculations for GISAXS were mostly done using the distorted wave Born approximation (DWBA) including an analytic expression of the form factor <cit.>. Arbitrarily shaped structures must be discretized and require numerical solvers <cit.>. In the optical domain of the electromagnetic spectrum, modelling light scattering by numerically solving the time-harmonic Maxwell's equations with a higher-order finite-element method <cit.> is well established. If the periodic structures are invariant in one dimension (i.e. along the grating lines), the computational domain can be reduced to two dimensions, which significantly decreases the computational effort and also allows the calculation of rather large domains as compared to the incident wavelength. This enables, together with the shallow incident angle in GISAXS, the application of this approach to short wavelengths as in X-ray scattering. Solving an inverse problem means minimizing the difference between the measured scatter intensities and the simulated intensities by adapting the geometrical parameters. A statistical validation of the optimized models is possible with the Markov chain Monte Carlo (MCMC) approach <cit.>, which gives the opportunity to obtain parameter sensitivities and delivers confidence intervals. Besides the regular diffraction pattern from a line grating, the diffuse scattering background in the GISAXS pattern reveals additional information about the surface. The appearance of long-range ordered superstructures in the scattering pattern yields information about the e-beam fabrication process.
Roughness and imperfections of the surface structures produce a complex diffuse scattering background, which can be described by kinematic and dynamic scattering effects. These effects are correlated with the grating line shape and also allow the determination of geometrical parameters <cit.>.

§ EXPERIMENTAL DETAILS The basic geometry of GISAXS is shown in Fig. <ref>. A monochromatic X-ray beam, idealized as a plane wave with the wave vector k⃗_i, impinges on the sample surface at a grazing incidence angle α_i. The elastically scattered wave with the wave vector k⃗_f propagates along the exit angle α_f and the azimuthal angle θ_f. Here we use the common notation for the scattering vector, q⃗ = k⃗_f - k⃗_i, whose components in angular coordinates (see Fig. <ref>) are (q_x; q_y; q_z) = (k_0 [cos(θ_f)cos(α_f) - cos(α_i)]; k_0 sin(θ_f)cos(α_f); k_0 [sin(α_f) + sin(α_i)]), with k_0 = 2π/λ and λ the wavelength. Scattering from a periodically structured surface, e.g. a grating, leads to a characteristic diffraction pattern <cit.>. The GISAXS pattern from a line grating observed with a 2D detector can be easily understood by a reciprocal space construction <cit.>. The reciprocal space of a line grating, with pitch p, consists of truncation rods which are perpendicular to the surface and aligned with a spacing Δ q_y in the direction perpendicular to the scattering plane. The scattering pattern on the detector arises as a semicircle of discrete diffraction orders with Δ q_y = 2π/p, which follows from the intersection of the Ewald sphere, with a radius of k_0, and the reciprocal space representation of the lattice at q_x=0 (see Fig. <ref>). The GISAXS experiments were conducted at the four crystal monochromator beamline, operated by the Physikalisch-Technische Bundesanstalt (PTB), at the electron storage ring BESSY II in Berlin <cit.>. The beamline offers a photon energy range from E = 1.75 keV to E = 10 keV. By using a beam-defining 0.52 mm pinhole about 150 cm before the sample position and a scatter guard 1 mm pinhole about 10 cm before the sample, we reach a beam spot size of about 0.5 × 0.5 mm at the sample with minimal parasitic scattering. A 6-axis goniometer installed in a UHV chamber allows the alignment of the incidence angle α_i with an accuracy of ± 0.001^∘. The grating lines were aligned in parallel orientation (φ = 0^∘) with respect to the incident beam with an accuracy of ± 0.002^∘. An in-vacuum PILATUS 1M hybrid pixel detector is installed on a movable sledge which allows the sample-detector distance to be varied from 1.7 m up to 4.5 m <cit.>. The detector features 20 bit counters for every pixel and consists of 10 separate modules. To extend the accessible photon energy range to the soft X-ray region (EUV), small angle X-ray scattering was also performed at the soft X-ray beamline <cit.>. An Andor CCD camera was mounted on a UHV chamber with a fixed sample-detector distance of 0.75 m, which allows the performance of extreme ultraviolet small angle scattering (EUV-SAS) at steeper incidence angles α_i of 7^∘. Two lamellar gratings were manufactured by electron beam lithography: grating G1 with a nominal pitch of 100 nm and a line width of 55 nm, and grating G2 with a nominal pitch of 150 nm and a line width of 65 nm (see Table <ref> and Fig. <ref>). The grating areas measure 1 mm by 15 mm (grating G1) and 0.51 mm by 4 mm (grating G2), with the lines oriented parallel to the long edge. To manufacture the gratings, a silicon substrate was spin coated with the positive resist ZEP520A (organic polymer).
Pattern generation was done using a Vistec EBPG5000+ e-beam writer, operated with an electron acceleration voltage of 100 kV <cit.>. After resist development, the grating was etched into the silicon substrate via reactive ion etching, using the etching gases SF_6 and C_4F_8. Finally, the remaining resist was removed with an oxygen plasma treatment. Grating G1 was fabricated in 2013 and grating G2 in 2016, using a slightly different etching process. The resulting line shape is very sensitive to the conditions in the reactive ion etching chamber, which changed slightly due to other processes running in the same chamber between the manufacturing of the two samples, leading to the differences in geometry seen in Table <ref> and Fig. <ref>.

§ SUPERSTRUCTURES Besides the intense diffraction spots from the grating structure (see Fig. <ref>), the GISAXS pattern also shows a dominant superstructure in the (q_y,q_z) pattern as additional diffraction semicircles around the main grating diffraction semicircle at q_x = 0 (see Fig. <ref> a)). The non-periodic shifting of the superstructures in the q_z direction is clear evidence that the periodic modulation must be lateral along q_x. This behaviour is also illustrated in Fig. <ref>. A periodic modulation with the pitch P of the grating in the lateral x direction leads to additional grating truncation planes with Δ q_x = 2π/P. This periodicity appears in the detector plane as additional GISAXS diffraction semicircles with the same Δ q_y periodicity for the diffraction orders but including the Δ q_x spacing of the lateral modulation. For the evaluation of the lateral periodicity, the corresponding (q_y,q_x) projection must be calculated. With grazing incidence angles around 1^∘ and a photon energy of 6.5 keV, the observable scattering vector component q_x is on the order of μm^-1, corresponding to μm-sized structures on the sample surface (cf. Fig. <ref> b)). This allows the extraction of the lateral pitch size (along the grating lines) as Δ x = 4.5 μm for the superstructure. This superstructure is directly related to the stitching field size of the e-beam writer, (4.53 × 4.53) μm^2. The lateral pitch size of the superstructure perpendicular to the scattering plane is not visible in the GISAXS scattering pattern, due to the different range for the q_y scattering vector, but can be obtained with EUV-SAS at α_i = 7^∘ and photon energies around E = 80 eV (see Fig. <ref> c)). Thus, the measurement demonstrates that small angle scattering techniques are also able to extract long-range ordering of nm structured surfaces due to the elongated beam footprint. The finite-element approach in the X-ray spectral range, however, is limited to a two-dimensional domain assuming a perfect infinite grating along the scattering direction. Such long-range perturbations can therefore only be described in the model as a form of roughness.

§ COMPUTATIONAL DIFFRACTION INTENSITIES The reconstruction of the geometrical layout of nm sized structures by measuring the intensity distribution of the scattered photons means solving the inverse problem. In an iterative process, a theoretical model is used to compute the scattered intensity from a guess about the structure, and the measured intensities are compared with the simulation. In the last decade, CDSAXS was developed as a powerful tool for the characterization of periodic nanostructured surfaces.
One major advantage of all transmission SAXS experiments lies in the fact that theoretical modelling with the first Born approximation (BA) is sufficient for a surface shape reconstruction, as demonstrated by several groups <cit.>. The incoming wave scatters only once at the target before forming the scattered wave. With an analytic description of the line shape (form factor), this modelling approach is relatively fast. For thick and non-homogeneous substrates, the GISAXS measurement method is an interesting option for surface characterization. With decreasing incidence angle α_i, the scattered amplitude increases while the transmission decreases. Incidence angles close to the critical angle of total external reflection allow targets to be probed with a high surface sensitivity due to the comparably small penetration depths. This requires an extension of the BA, because the BA neglects any multiple scattering effects. For GISAXS, the distorted wave Born approximation is well developed in several software packages (IsGisaxs <cit.>, BornAgain <cit.>, HipGISAXS <cit.>, FitGisaxs <cit.>) which are able to deal with second-order multiple scattering effects. The DWBA is able to explain many GISAXS measurements, but higher-order scattering effects (e.g. higher-order Yoneda scattering <cit.>) are not covered in the typical implementation of the DWBA. The biggest advantages of both perturbation theories originate from the simplicity of the analytic form factor description and the resulting computational speed. This advantage diminishes for arbitrary form factors, which must be discretized numerically <cit.>. Besides those perturbation theories, Maxwell's equations can also be solved with a finite-element approach. Arbitrarily shaped objects can be implemented easily as vector functions and parametrized. In contrast to the approximation methods, a Maxwell solver allows the computation of the true far-field scattering intensities, including the solution of the near field in the computational domain. Higher-order scattering effects can be easily studied with the computed electromagnetic field distributions inside the scattering objects. Additionally, the computation of local field distributions enables the simulation of depth-dependent absorption measurements, e.g. grazing incidence X-ray fluorescence (GIXRF). Coupled with the strong increase in the available computational power in the last decade, solving Maxwell's equations with finite elements is an interesting option in the X-ray spectral range. It is still limited to specific experimental settings (small angles, infinite gratings), i.e. an extension to CDSAXS is not feasible at the moment, but possible future applications are only limited by the computational power. In the following we give only a rather compact introduction to the topic of finite-element discretization. A detailed summary can be found elsewhere <cit.>. X-rays are treated as electromagnetic plane waves of wavelength λ = h c_0 / E (with the Planck constant h, speed of light c_0 and photon energy E) which scatter on nanostructures. In this case, the set of Maxwell's equations can be rewritten as a single, second-order curl-curl equation for the electric field <cit.>. The general idea of the finite-element discretization is that the computational domain is subdivided into small patches (e.g. triangles).
On these patches, vectorial ansatz functions are defined, usually polynomials of a fixed order p. The approximate electric field solution is the superposition of these local ansatz functions. The numerical accuracy of the approximate electric field is a function of the size of the finite elements and the degree p of the polynomials. To compute diffraction efficiencies, we assume periodic structures which are invariant in one dimension (along the grating lines). Well-converged solutions are obtained with the package JCMsuite <cit.> using a higher-order finite-element method. We focus on a reconstruction of the form factor (the line shape) and not the structure factor (the grating pitch), which is well known from lithography <cit.>. Therefore, we fixed the pitch of the structures to the nominal values in the reconstruction process. In Fig. <ref>, a computational domain and the corresponding near-field calculation are shown for a typical GISAXS measurement geometry (α_i = 0.86^∘, E = 5.5 keV). The sketched line profile of the grating emphasizes the important structural parameters which were optimized in the reconstruction: the line width, the line height, the sidewall angle, the top corner rounding and the groove rounding. The line width of the structure is defined at half-height. The top corner rounding parameter describes a circle with the respective radius. The bottom groove rounding is constructed with an elliptical shape; the major radius depends on the line width and the sidewall angle, while the minor radius is parameterized for the reconstruction. Due to the reactive ion etching process, one could expect a strong groove rounding and only a minor rounding of the top corners, because the top surface is still covered by the resist during etching (also visible in the SEM cross-section images, see Fig. <ref>). This model allows an adequate description of the expected line shape with a low number of parameters. Roughness or imperfections of the lamellar gratings cannot be modelled directly with the finite-element approach due to the small discretization length required in the X-ray region; the computational domain size would exceed any available computer memory. To account for the damping effect of line edge or line width roughness on the diffraction intensities I of higher diffraction orders, an analytic correction must be applied. The line edge and line width roughness (LER/LWR) of the gratings were taken into account in the optimization process with the well-known analytic approach <cit.> based on Debye-Waller damping, I = I^model · exp(-σ_r^2 q_y^2). This allows the correction of the undisturbed computational diffraction intensities I^model from the Maxwell solver in a post-processing step. The damping factor σ_r of the rms line roughness was also included in the optimization process.

§ UNCERTAINTY EVALUATION The grazing incidence conical diffraction and the invariance of the grating in the direction of the scattering plane result in a standing wave field with a much larger period than the wavelength (see Fig. <ref>). This allows a significant increase in the size of the required discretization length d. It breaks the conventional rule of half the incident wavelength for the discretization to ensure numerical accuracy.
This allows the use of a Maxwell solver based on the finite-element method to efficiently treat GISAXS applications. To ensure numerical accuracy at the finite angle of incidence, the relative numerical error of the simulated diffraction intensities (far field) was calculated for different numerical precision settings. The two main numerical degrees of freedom are the spatial discretization length d and the polynomial degree p, which defines the ansatz functions used to approximate the fields. The numerical errors are defined as the difference between the actual diffraction intensities I^model and the quasi-exact results I^quasi. The quasi-exact calculation is defined as the converged computation with the highest achievable numerical precision settings, typically limited by the amount of available computational memory. The numerical errors for typical GISAXS settings are shown in Figs. <ref> a) and b) for a photon energy of E=8 keV and varying grazing incidence angles α_i. Similar results can be obtained for the azimuthal angle φ. This figure reveals the coupling of d and p to the incidence angle α_i and allows the trade-off between the numerical precision and the computational effort to be estimated. For example, a decrease in the incident photon energy would shift the numerical precision in the figure towards larger incidence angles. In this rough estimation, the size of the scattering objects represents a natural limit for the discretization length d, but modern implementations of the finite-element method allow the use of adaptive meshing algorithms <cit.>, which are able to tune the local discretization length, e.g. in critical regions like the vacuum-silicon interface. However, the obvious gap between the incident wavelength (λ ≪ 1 nm) and the sufficient discretization length d gives the opportunity for fast (below 1 s) and accurate simulations with a Maxwell solver based on the finite-element method. It should be noted that in Figs. <ref> a) and b), only the mean absolute numerical error (MANE) of all diffraction orders is visualized, MANE = (1/M) ∑_N |I^quasi_N - I^model_N| / I^quasi_N, with M being the number of diffraction orders and N the specific diffraction order. This hides the fact that the numerical error for the different diffraction orders is coupled with the geometrical layout and the actual experimental settings. Diffraction orders with an exit angle α_f close to the critical angle of the substrate or the grating effective layer (Yoneda band) are most affected. The number of usable diffraction orders is critical for the reconstruction process, as it directly limits the possible complexity of the reconstructed model. To achieve an upper estimate of the numerical errors, we simulated 1000 randomly distributed geometrical grating layouts within the parameter boundaries for the extraction of the quasi-exact diffraction intensities. The identical grating layouts were simulated with the numerical discretization which is later used in the specific grating reconstruction process (see Table <ref>). We then compared every diffraction intensity with the corresponding quasi-exact computation. The results of this numerical error estimation for the reconstruction of grating G2 are exemplified visually in Figs. <ref> c) and d) for discretization lengths of 5 nm and 6 nm. The dashed red lines mark the σ interval. These σ values, in the range of several percent, reveal that numerical precision has a significant impact on the uncertainty evaluation.
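As an aside, the MANE defined above is simple to evaluate once the quasi-exact and test intensities are available; a minimal sketch (ours, with arbitrary array inputs):

```python
import numpy as np

def mane(I_quasi, I_model):
    """Mean absolute numerical error over the M diffraction orders,
    MANE = (1/M) * sum_N |I_N^quasi - I_N^model| / I_N^quasi."""
    I_quasi = np.asarray(I_quasi, dtype=float)
    I_model = np.asarray(I_model, dtype=float)
    return np.mean(np.abs(I_quasi - I_model) / I_quasi)
```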
To achieve a better reconstruction result, the highest diffraction orders should be neglected in the fitting process due to the numerical instabilities at the critical angle (cf. Fig. <ref> a), Yoneda band). We also excluded the diffraction orders which are not used in the subsequent reconstruction process (e.g. the zero order) of gratings G1 and G2. Besides these numerical errors, small angle X-ray scattering is challenging for every experimental setup, especially if the accuracy of the alignment angles α_i and φ is important. Under grazing incidence, small variations in the angles of incidence can change the measured diffraction efficiencies rapidly. This angular sensitivity is demonstrated in Fig. <ref> e). The red dots are the simulated diffraction intensities (quasi-exact) of a typical lamellar silicon grating (grating G1) at α_i=0.7^∘ and E=8 keV. The changes in the diffraction intensities for angular variations on the order of 10^-3 degrees are visualized with dashed lines. These angular uncertainties are close to the achievable motor step resolution in our experimental chamber. This is in principle not a big issue for the reconstruction process, because a subsequent calibration of the angles is still possible. But this calibration is also limited by the uncertainty from the estimation of the pixel size of the PILATUS detector <cit.> and leads to a very similar uncertainty of 0.002^∘. To ensure an accurate reconstruction of the line shapes, both incidence angles must be included in the optimization process as independent parameters. This angular sensitivity in grazing incidence also influences the computational effort which is needed for an accurate simulation of measured diffraction intensities, due to the divergence of the incident beam. The horizontal divergence of 0.01^∘ leads to an elongation of the diffraction peaks along the q_z axis. The impact is directly visible in Fig. <ref> a) for diffraction orders close to the horizon. To account for this angular distribution in the theoretical evaluation, we calculated 5 azimuthal incidence angles φ for every simulation. The extracted diffraction intensities were weighted with a Gaussian distribution and the standard deviation of the incident beam divergence. Convergence studies with increasing numbers of incidence angles φ (up to 50) reveal that the relative uncertainty σ_div of the sampling with only 5 angles is in the region of 8%. A further reduction of this uncertainty is possible but would result in a significantly increased computational effort. By using a photon counting detector, which follows Poisson statistics, for the measurement of diffraction intensities, the statistical measurement uncertainty σ_N is typically an order of magnitude below the numerical uncertainty of the Maxwell solver. However, the inhomogeneity of the PILATUS detector response at incident photon energies around 5 keV <cit.> is not negligible, which leads to an additional uncertainty σ_hom of 2.5% in the measured diffraction intensities. The distribution of the incident photon energies was taken into account by implementing a Gaussian prior with a bandwidth of 10^-4 for each photon energy. For hard X-rays, the influence of uncertainties from the optical constants has almost vanished, especially for silicon far away (several keV) from any absorption edge.
In the soft X-ray or EUV region, the situation could change dramatically and must be evaluated separately.

§ RECONSTRUCTION OF THE LINE SHAPE The general problem in a reconstruction process is the question of how much information is necessary to obtain an unambiguous solution. The constraints on the number of diffraction orders for a specific incidence angle and photon energy are dictated by the grating pitch. This strictly limits the accessible information and can only be compensated for by mapping different regions of reciprocal space (e.g. by tuning the incidence angles or the photon energy). To avoid the experimental uncertainties of angular scans (α_i, φ), we utilize the high stability of the four crystal monochromator and tune only the incident photon energy. The intensity of the specular reflection is often disturbed by the mismatch between the beam spot size and the grating target size. Parts of the beam are reflected from the surrounding substrate and not from the structured surface. This distorts the zero-order diffraction intensity and prevents the extraction of absolute intensities. Fitting the relative diffraction intensities of the non-zero orders is thus more accurate for sample sizes below the size of the elongated beam footprint. To compare the measured diffraction intensities with theoretical values, the Maxwell solver computes the near-field solution for a specific parameter set depending on the model and extracts the theoretical photon flux for every diffraction order by post-processing using a Fourier transformation. The minimization functional of the optimization problem, χ^2, is defined as the total sum of the least-squares functionals for every photon energy E, χ^2 = ∑_E χ̃^2(E), where each of the functionals is defined as χ̃^2(E) = ∑_m [I_m^model(E) - I_m^meas(E)]^2 / σ^2(E). For the standard deviation σ(E), the individual experimental and numerical uncertainties are propagated according to the Gaussian error propagation law, σ^2(E) = σ_num^2(E) + σ_N^2 + σ_hom^2 + σ_div^2. The numerical uncertainties σ_num(E) were precalculated for the different photon energies which were used in the measurements and for both gratings, as described in the previous section. The discretization length d and the polynomial degree p were chosen such that the numerical uncertainties were σ_num(E) ≈ 2.6% (grating G2) and σ_num(E) ≈ 3.6% (grating G1). A heuristic optimization method is ideally suited to explore the large parameter space of the inverse problem, but requires massive parallelization to reduce the computational effort to a reasonable time. With a well-optimized Maxwell solver and a commercially available workstation, we were able to solve 10^6 structures in less than one day. We used the particle swarm optimization <cit.> method, which ideally delivers the global minimum of the total χ^2 functional with an acceptable computational effort. However, no information about parameter sensitivity or the quantification of confidence intervals is delivered by a particle swarm optimization. To solve this issue, we applied an affine invariant Markov chain Monte Carlo sampling technique <cit.>. The likelihood is given by P(x⃗) ∝ exp(-χ^2 / 2), where x⃗ is the set of parameters of the model. As a starting point, a random set of parameters was generated within predefined boundaries around the global minimum of the particle swarm optimization. The confidence intervals and mean values were calculated by evaluating the probability distribution for each parameter after the MCMC procedure converged.
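To make this optimization chain concrete, the following is a schematic sketch (ours, not the authors' code) of the likelihood evaluation and the affine-invariant sampling. A toy two-parameter forward model stands in for the Maxwell solver, and we assume the emcee package, which implements the cited affine-invariant ensemble sampler; in a real reconstruction, σ would carry the full error budget of the propagation law above.

```python
import numpy as np
import emcee

rng = np.random.default_rng(1)

# --- toy stand-in for the Maxwell solver: intensities of orders m = 1..8 ---
orders = np.arange(1, 9)
def forward_model(params):
    width, height = params                   # two stand-in geometry parameters
    return np.abs(np.sinc(orders * width) * np.sinc(orders * height)) + 1e-6

# synthetic "measurement" with a 5 % total relative uncertainty
truth = np.array([0.31, 0.22])
sigma = 0.05 * forward_model(truth)
data = forward_model(truth) + rng.normal(0.0, sigma)

bounds = np.array([[0.0, 1.0], [0.0, 1.0]])

def log_prob(p):
    if np.any(p < bounds[:, 0]) or np.any(p > bounds[:, 1]):
        return -np.inf                       # flat prior inside the boundaries
    chi2 = np.sum((forward_model(p) - data)**2 / sigma**2)
    return -0.5 * chi2                       # likelihood P ~ exp(-chi^2 / 2)

ndim, nwalkers = 2, 16
p0 = truth + 1e-3 * rng.normal(size=(nwalkers, ndim))  # start near the PSO optimum
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)
print(np.percentile(chain, [16, 50, 84], axis=0))      # ~ +/- 1 sigma intervals
```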
The confidence intervals (± 1 σ) given here represent percentiles of the number of samples found in the interval defined by the upper and lower bounds. The results of the MCMC optimization are summarized in Fig. <ref> and Table <ref>. In Fig. <ref>, the extracted diffraction intensities for both gratings are shown as a function of the scattering vector component q_y and photon energy, and compared to the simulated diffraction intensities of the MCMC optimization. Diffraction orders which were influenced by the detector gaps (dark regions in the GISAXS images, see Fig. <ref>) were manually excluded from the reconstruction (missing points in Fig. <ref>). The good agreement between the measurement and simulation is evidence that the chosen model is able to describe the diffraction patterns from the grating structures. Also, the reconstructed geometry agrees well with the expected nominal values. This is illustrated in Fig. <ref> by a comparison of the reconstructed line shape (cf. red line) with cross-section SEM images which we obtained from witness samples. A direct comparison depends on the homogeneity of the grating sample, because the SEM images show only a rather small part of the grating, in contrast to the GISAXS measurements, which capture the whole structured surface. However, the reconstructed grooves show a significant rounding, which is expected from the masked ion beam etching process. This leads to well-defined corners on the top and more rounded structures inside the grooves. This behaviour corresponds well with the reconstructed line shapes of both gratings. A comparison of the reconstructed parameter values of both gratings and the estimated confidence intervals can be found in Table <ref>. The confidence intervals of the height are rather small, at ± 120 pm, as compared to the expected process variations of the etching. One could argue that the estimated confidence intervals correspond to the well-defined mean value of the grating line height over the large illuminated beam footprint area. The measured diffraction intensities can be understood as a superposition of the diffraction patterns from several different gratings with slightly different parameters, due to stochastic roughness or other systematic deviations in the etching process. This effect was accounted for by the inclusion of the Debye-Waller damping in our model. It should, however, be mentioned that this Debye-Waller damping approach may not hold for every kind of process variation, as it was derived for line edge roughness in a binary grating model. In particular, line height roughness does not disturb the two-dimensional periodicity of the surface, as line edge roughness does, and might have a different impact. The estimated confidence intervals are therefore only valid within the framework of the model presented here. For a complete uncertainty evaluation in GISAXS, the impact of these modelling assumptions must be further investigated.

§ SUMMARY We investigated lamellar gratings etched in silicon with pitch sizes of 100 nm and 150 nm using GISAXS in conical orientation. The very distinct diffraction patterns from the rather similar grating line shapes highlight the high sensitivity of GISAXS to structure details. A dominant superstructure in the diffraction patterns is correlated to artefacts of the e-beam writing process and may in future be exploited to further tune the lithography process.
The conical diffraction geometry combined with grazing incidence angles allows the use of the finite-element approach with a Maxwell solver in the X-ray spectral range. The Maxwell solver in conjunction with the finite-element method is a versatile tool to solve the inverse problem for highly periodic structures, because it allows arbitrary geometric models of the line shape to be parameterized. A detailed reconstruction of the line shape is possible due to a significant reduction of the computational effort by adapting the discretization length to the grazing angle of incidence. It reveals geometric details, such as strong rounding inside the grooves, which are often not accessible with other direct non-destructive measurement methods like AFM. A first parameter sensitivity evaluation based on the Markov chain Monte Carlo method is presented.

§ ACKNOWLEDGMENT We thank the European Metrology Research Programme (EMRP) for financial support within the Joint Research Project IND17 "Scatterometry". The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
http://arxiv.org/abs/1704.08032v4
{ "authors": [ "Victor Soltwisch", "Analia Fernandez Herrero", "Mika Pflüger", "Anton Haase", "Jürgen Probst", "Christian Laubis", "Michael Krumrey", "Frank Scholze" ], "categories": [ "physics.comp-ph", "cond-mat.mes-hall" ], "primary_category": "physics.comp-ph", "published": "20170426093112", "title": "Reconstructing Detailed Line Profiles of Lamellar Gratings from GISAXS Patterns with a Maxwell Solver" }
http://arxiv.org/abs/1705.00001v1
{ "authors": [ "Don Bunk", "Jay Hubisz", "Bithika Jain" ], "categories": [ "hep-ph", "astro-ph.CO", "hep-th" ], "primary_category": "hep-ph", "published": "20170427180013", "title": "A Perturbative RS I Cosmological Phase Transition" }
[email protected] Innovative Scientific Solutions, Inc., Dayton, OH, 45459, USA Innovative Scientific Solutions, Inc., Dayton, OH, 45459, USADepartment of Astronomy and Astrophysics, University of Chicago, Chicago, IL, 60637, USA Innovative Scientific Solutions, Inc., Dayton, OH, 45459, USA Intense Energy Solutions, LLC., Plain City, OH, 43064, USA Air Force Research Laboratory, Dayton, OH, 45433, USA Advances in ultra-intense laser technology are enabling, for the first time, relativistic intensities at mid-infrared (mid-IR) wavelengths. Anticipating further experimental research in this domain, we present high-resolution two dimensional Particle-in-Cell (PIC) simulation results using the Large-Scale Plasma (LSP) code that explore intense mid-IR laser interactions with dense targets. We present the results of thirty PIC simulations over a wide range of intensities (0.03 < a_0 < 39) and wavelengths (λ =780 nm, 3 /, and 10 /). Earlier studies, limited to λ =780 nm and a_0 ∼ 1 <cit.>, identified super-ponderomotive electron acceleration in the laser specular direction for normal-incidence laser interactions with dense targets. We extend this research to mid-IR wavelengths and find a more general result that normal-incidence super-ponderomotive electron acceleration occurs provided that the laser intensity is not highly relativistic (a_0 ≲ 1) and that the pre-plasma scale length is similar to or longer than the laser wavelength. Under these conditions, ejected electron angular and energy distributions are similar to expectations from an analytic model used in <cit.>. We also find that, for a_0 ∼ 1, the mid-IR simulations exhibit a classic ponderomotive steepening pattern with multiple peaks in the ion and electron density distribution. Experimental validation of this basic laser-plasma interaction process will be possible in the near future using mid-IR laser technology and interferometry.Particle-in-Cell Simulations of Electron Beam Production from Infrared Ultra-intense Laser Interactions W. M. Roquemore December 30, 2023 =======================================================================================================§ INTRODUCTIONWhile advances in laser technology have allowed ultra-intense laser interactions at near-IR wavelengths to be thoroughly explored, and it is only more recently that ultra-intense laser interactions at mid-IR wavelengths have become experimentally possible <cit.>. A variety of groups are beginning to examine what may be learned from experiments at these wavelengths and how phenomena observed in the near-IR may scale to longer wavelengths <cit.>. Some of this interest stems from the existence of atmospheric “windows" in the mid-IR <cit.>, while other groups consider how the longer length scale of mid-IR interactions allows subtle phenomena to be more easily probed <cit.>. Another interesting value of intense mid-IR interactions is in examining the physics of laser damage <cit.>.To the best of our knowledge, despite recent interest in mid-IR ultra-intense laser interactions, the literature has not focused much attention on intense mid-IR laser interactions with dense (i.e. solid or liquid density) targets. These interactions are interesting for a variety of reasons, among them the favorable scaling of the ponderomotive electron energy with laser wavelength (a_0 ∼√(I λ^2)∼λ). 
However, given the complexity of ultra-intense interactions with dense targets, these scaling arguments can only offer an order-of-magnitude expectation for the results of detailed simulations and experiments in this regime. With experimental capabilities still maturing in the mid-IR, the present work offers a simulation survey of energetic electron ejection from mid-IR laser-irradiated dense targets. The work presented here is motivated in part by earlier investigations of normal-incidence ultra-intense laser interactions with liquid targets at the Air Force Research Lab, which found much larger than expected conversion efficiencies from laser energy to ejected electron energy <cit.>. These experimental results prompted simulation work by <cit.>. <cit.> presented 2D Particle-in-Cell (PIC) simulations showing significant electron ejection at superponderomotive energies and emphasized that ultra-intense laser interactions at the ∼ 10^18 W cm^-2 (a_0 ∼ 1 for λ ≈ 800 nm) intensities present in the experiment should create strong standing-wave fields near the target. <cit.> performed 3D PIC simulations of these targets and provided an analytic model to explain both the energies and the angular distribution of ejected electrons. More recently, <cit.> have reported direct experimental measurements of the ejected electron energies, confirming the existence of multi-MeV electrons in the experiment and thereby reinforcing the conclusion that the conversion efficiency in the experiment is large compared to other ultra-intense laser experiments. An interesting question, then, is whether superponderomotive electron ejection occurs even with intense mid-IR laser interactions. The other motivator for this project is the plan to purchase and upgrade an intense 3 μm wavelength laser system at the Air Force Research Laboratory. The upgraded laser system will be able to produce ∼1 mJ scale laser pulses and a peak intensity near 10^17 W/cm^2. We explore a much wider range of laser energies and intensities in an effort to examine the physics of mid-IR laser interactions with dense targets. Sec. <ref> describes our simulation setup. Sec. <ref> describes our results. Finally, Sec. <ref> provides a summary and conclusions.

§ PARTICLE-IN-CELL SIMULATIONS We performed 30 different high-resolution 2D PIC simulations with the LSP code <cit.>. For all simulations the initial conditions included a liquid-density water slab target with an assumed pre-plasma scale length similar to earlier studies <cit.>. In all simulations, a laser is normally incident onto the water slab. We use the following Cartesian coordinate system for these simulations: the positive x-axis is the direction of the laser, the y-axis is the polarization direction, and the z-axis is the axis of the water column, which is assumed to be the axis of symmetry in the 2D PIC simulations. The simulations involved a normally incident, spatially Gaussian, sine-squared envelope pulse with 780 nm, 3 μm, and 10 μm wavelengths (denoted λ). These simulations extend the results of earlier investigations with 780 nm laser pulses <cit.> by examining the same phenomena with long infrared (IR) wavelengths. For convenience we will often refer to the set of all simulations performed with a particular laser wavelength by saying, for example, "the 3 μm simulations", and likewise for the other wavelengths.
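For orientation, the peak normalized vector potential can be estimated directly from the intensity and wavelength; a small sketch (ours) using the standard linear-polarization relation a_0 ≈ 0.85 √(I λ^2 / (10^18 W cm^-2 μm^2)), a textbook formula rather than one quoted from this paper:

```python
import math

def a0(intensity_W_cm2, wavelength_um):
    """Peak normalized vector potential for linear polarization."""
    return 0.85 * math.sqrt(intensity_W_cm2 / 1.0e18) * wavelength_um

# e.g. the planned ~1e17 W/cm^2, 3 um system mentioned above:
print(a0(1.0e17, 3.0))   # ~0.8, i.e. approaching relativistic intensities
```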
All the 780 nm simulations had a laser pulse with a 2.15 μm Gaussian radius and a 40 fs temporal full-width-half-maximum (FWHM) pulse duration (similar to the laser system described in <cit.>). The 3 μm simulations had laser pulses with an 8.25 μm Gaussian radius and a 158 fs FWHM pulse duration, and the 10 μm simulations had laser pulses with a 27.5 μm Gaussian radius and a 510 fs FWHM pulse duration. These Gaussian radii and pulse durations were chosen so that the ratio of the wavelength to the Gaussian radius and the number of optical periods in a pulse were fixed across all simulations regardless of laser wavelength. For each wavelength we simulated a range of pulse energies spanning 10^-4 J, 1 mJ, 10 mJ, 1 J, and 10 J. Since the Gaussian radius and pulse duration were fixed for each wavelength, this was done by changing the intensity. Simulation parameters for each wavelength are summarized in Table <ref>.

The target in all simulations consisted of free electrons, protons, and O^+ ions, with number densities set in relative proportion to make the target match water's chemical composition and to ensure charge neutrality (O^+ to p^+ to e^- ratio of 1:2:3). In all simulations, the plasma density only varied along the x direction. In the “target” region the density is constant, and in the “pre-plasma” region the density profile decreases exponentially in x away from the target region. For every intensity and wavelength considered we perform a simulation with a 1.5 μm scale length, for direct comparison with previous studies <cit.>, which employed such a pre-plasma scale length. For 3 μm and 10 μm wavelengths, we also perform simulations where the pre-plasma scale length is a constant multiple of the wavelength (L = 1.92 λ), so that we performed 3 μm simulations with a 5.77 μm scale length and 10 μm simulations with a 19.2 μm scale length. We refer to these as the “scaled” scale lengths. The 780 nm and 3 μm simulations had a target region electron density of 10^23 cm^-3, which, with the O^+ to p^+ to e^- ratio mentioned earlier, corresponds to a mass density of 1 g/cm^3, as one would expect for liquid water. The 10 μm simulations with a 1.5 μm scale length also had a target region with this same 10^23 cm^-3 electron density. However, the 10 μm simulations with a 19.2 μm scale length had a target region electron density of 10^21 cm^-3 in order to reduce the size of the target and thus the computational requirements. The 780 nm simulations used a target that was 20 μm wide in the y direction, the 3 μm simulations used a 100 μm wide target, and the 10 μm simulations used a 220 μm wide target. All targets had an initial temperature of 1 eV. All simulations had absorbing boundaries with 10 μm between the initial target and the simulation boundaries.

The classical formula for the critical electron density is n_c = m_e ω^2/4π e^2 = π m_e c^2/e^2λ^2. For the 780 nm, 3 μm, and 10 μm wavelengths, this corresponds to n_c=1.74×10^21 cm^-3, 1.24×10^20 cm^-3, and 1.11×10^19 cm^-3 respectively. In all simulations, the laser focus was chosen to coincide with the critical density in the pre-plasma layer. For the 780 nm simulations, a grid spacing of 33 nm (roughly 23 cells per wavelength) and a timestep of 0.1 fs were used. For the 3 μm simulations a spacing of 100 nm and a temporal resolution of 0.15 fs were used. Finally, the 10 μm simulations utilized a spatial resolution of 250 nm and a temporal resolution of 0.5 fs.
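The critical densities quoted above are easy to reproduce; the short sketch below evaluates the SI restatement of the same formula, n_c = ε_0 m_e ω^2/e^2 (our own restatement, equivalent to the Gaussian-unit expression in the text).

```python
import math

# SI restatement of the critical density, n_c = eps0 * m_e * omega^2 / e^2.
eps0, m_e, e, c = 8.8541878128e-12, 9.1093837015e-31, 1.602176634e-19, 2.99792458e8

def n_crit_cm3(wavelength_m):
    omega = 2.0 * math.pi * c / wavelength_m    # laser angular frequency
    return eps0 * m_e * omega**2 / e**2 * 1e-6  # convert m^-3 -> cm^-3

for lam in (780e-9, 3e-6, 10e-6):
    print(f"lambda = {lam*1e6:5.2f} um : n_c = {n_crit_cm3(lam):.2e} cm^-3")
```

This reproduces the 3 μm and 10 μm values above; for 780 nm it gives ≈1.8×10^21 cm^-3, slightly above the quoted 1.74×10^21 cm^-3, with the small difference presumably coming from the constants or reference wavelength (800 nm vs. 780 nm) used.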
Although these simulations do not resolve the Debye length in every cell (since there are cells with near-solid densities with sub-nanometer Debye lengths), the phenomenon of interest is electron acceleration in the underdense pre-plasma extending from the target, where the Debye length is much larger and more easily resolved. The implicit algorithm in LSP avoids the grid-heating issues associated with under-resolving the Debye length, so that the behavior of near-solid density regions in the simulation does not ruin the overall energy conservation of the simulation.

All simulations had 27 macro-particles per cell per species (free electrons, protons, and O^+ ions). As in earlier work <cit.>, the O^+ ions in this simulation can be further ionized by strong electric fields according to the Ammosov-Delone-Krainov rate <cit.>. In the simulation, electron macroparticles scatter by a Monte-Carlo algorithm as in <cit.>, with a scattering rate determined by a Spitzer model <cit.>, except at low temperatures where the scattering rate is limited by the timestep. The 1.5 μm scale-length simulations were run for three times the duration of the simulated laser pulse (i.e. three times the full-width, full-maximum duration of the pulse). The scaled simulations were run for 3.5 times the duration of the simulated laser pulse because these were larger targets with a more extended pre-plasma.

The number of pulse energies investigated and the various scale lengths assumed with the three laser wavelengths add up to a total of 30 2D(3v) simulations. The parameters of all simulations are listed in the appendix, and parameters common across a given wavelength's simulations are summarized in Table <ref>.

§ RESULTS

§.§ Ejected Electron Energies

Fig. <ref> shows the energy spectra of back-accelerated electrons for all the simulations performed. The left panel of Fig. <ref> shows the results of the simulations with a 1.5 μm scale-length pre-plasma, while the right panel plots the results of the scaled scale-length simulations, where the scale length is proportional to the wavelength (L=1.92λ). These figures plot the mean and maximum energy of escaping electrons on the y-axis as a function of the normalized vector potential a_0 of the incident laser. Here, a_0 = e E_0 / m_e ω c, where E_0 denotes the peak electric field of the incident pulse, c is the speed of light, m_e is the mass of an electron, and ω = 2 π c /λ is the angular frequency of the laser beam. The mean electron energy is determined by taking the average energy of electrons that reach the boundary of the simulation. Since we are concerned here with back-directed electrons, only those electrons with a momentum angle within ±75^∘ of the incident laser are counted.

These results are found to scale with the Wilks scaling estimate from <cit.>,

E_ wilks = [ √(1+1/2a_0^2)-1] m_e c^2.

While there are a number of other formulae that describe the typical energy of electrons interacting with an intense laser field, we choose to compare with Wilks scaling because it is an analytically motivated formula that is reasonably representative of the various scaling models in this regime <cit.>. The Wilks model also reduces to the classical ponderomotive energy of an electron in a monochromatic plane wave in the low a_0 limit. A binomial approximation yields

E_ wilks≈1/4 m_ec^2 a_0^2

for small values of a_0. This is why, on Fig. <ref>, one sees an a_0^2 dependence for Eq. <ref> at low a_0 that transitions to linear dependence (i.e. ∼ a_0) for a_0 ≳ 1.
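This quadratic-to-linear transition is simple to tabulate; the short sketch below compares Eq. <ref> with its small-a_0 binomial approximation over the a_0 range covered by our simulations (the function name is ours, purely illustrative).

```python
import numpy as np

M_E_C2_MEV = 0.511  # electron rest energy in MeV

def e_wilks_mev(a0):
    """Wilks scaling estimate, E = [sqrt(1 + a0^2/2) - 1] m_e c^2."""
    return (np.sqrt(1.0 + 0.5 * np.asarray(a0)**2) - 1.0) * M_E_C2_MEV

a0 = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 39.0])
approx = 0.25 * M_E_C2_MEV * a0**2   # binomial (classical ponderomotive) limit
for a, e_exact, e_approx in zip(a0, e_wilks_mev(a0), approx):
    print(f"a0 = {a:5.2f} : E_wilks = {e_exact:9.5f} MeV,"
          f" (1/4) m_e c^2 a0^2 = {e_approx:9.5f} MeV")
```

For a_0 ≲ 0.3 the two columns agree to a few percent, while for a_0 ≫ 1 the exact expression grows only ∼ a_0, reproducing the bend seen in Fig. <ref>.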
Comparing Eq. <ref> to the simulation data in the left panel of Fig. <ref> yields an interesting result: the 10 μm wavelength simulations fall closest to the Wilks scaling model prediction. The 3 μm and 780 nm wavelength simulations lie significantly above the prediction, especially for low a_0 values. The 780 nm wavelength simulations have the most energetic electrons, exceeding Eq. <ref> by 1-2 orders of magnitude. Thus we say that the ejected electrons in the 780 nm simulations are highly "superponderomotive".

As identified earlier, the right panel of Fig. <ref> shows the results of the "scaled" simulations. The 780 nm simulations with a 1.5 μm scale length appear in both panels of Fig. <ref>, but the longer wavelength simulations shown in the right panel of Fig. <ref> all assume a longer scale length than in the left panel. Remarkably, in the right panel the results from all three wavelengths seem to follow roughly the same trend and exceed the Wilks prediction by 1-2 orders of magnitude.

Fig. <ref> presents detailed information on the energies and ejection angles of electrons that leave the simulation volume. Fig. <ref> shows results from the three different wavelengths, highlighting intensities with a_0 ∼ 1 and the mid-IR simulations with L = 1.92 λ. We also overplot with solid lines the results from an analytic model described in <cit.>. This model considers that the back-directed electrons are ejected at high speed into a pulsed plane wave that approximates the reflected laser pulse. Because of the similarity to earlier work in <cit.>, it is unsurprising that the model compares favorably to the 780 nm results shown in Fig. <ref>. What is more remarkable is that the model predictions compare similarly well to the mid-IR simulations.

§.§ Electron density profiles

Fig. <ref> provides snapshots of the electron density in the simulation after the reflection of the laser (not more than two pulse-lengths in time later) but while electrons are still moving away from the target. The figure demonstrates that the onset of plasma wave phenomena depends on the a_0 value of the incident laser pulse, and not only on the laser wavelength or laser intensity independently. For a_0 < 0.2, at all wavelengths, the escaping electrons are not spatially bunched, and the pre-plasma layer is perturbed less relative to the more intense cases. With a_0 values between 0.2 and 0.4, the backwards accelerated electrons break apart the pre-plasma layer as they escape. At a_0 values near ∼0.5 and above, ejected electrons exhibit bunching, a pattern that becomes more pronounced as a_0 becomes larger. As discussed in <cit.>, this arises because electrons are only deflected away from the target during two specific moments during the laser cycle. In these simulations, it has been observed that the onset of these bunches is preceded, during the reflection process, by a significant electron density hole created by the ponderomotive force, which manifests only as a_0 nears and exceeds unity. As a_0 exceeds 2.0, as shown in the top two plots, the laser begins to penetrate beyond the (non-relativistic) critical density surface due to relativistic transparency. Hole-boring <cit.> does not occur here because the pulses are ultrashort in all cases, and the laser begins to penetrate only when the last few cycles of the main pulse are present on the non-relativistic critical density surface.
To comment on another aspect of the plots in Fig. <ref>: in essentially all of the plots shown (0.06 < a_0 < 4) the laser ionizes the target, moving the critical density (white contour) towards the incoming laser, especially along the laser axis. Because the laser intensity decreases away from the laser axis according to a Gaussian spatial profile, this causes the critical density to assume a curved shape, as seen in the figure.

§.§ Ponderomotive Steepening

While Fig. <ref> demonstrates that the onset of plasma wave phenomena is determined by the a_0 value of the incident laser, it also demonstrates that, at a given a_0, the observed plasma features scale in physical size with the incident laser wavelength. This finding can be used to scale the physical size of the laser-plasma interaction to facilitate experimental observation of phenomena which would not be observable with short wavelength pulses. One such phenomenon is ponderomotive steepening, a well-known laser-plasma interaction process where the radiation pressure from the laser modifies the electron density profile, which, over time, will substantially modify the ion density profile <cit.> on the scale of half the wavelength. A related process is "hole boring", which has been studied theoretically and experimentally <cit.>. Unlike hole boring, ponderomotive steepening, as originally described by <cit.>, involves a series of peaks in the ion density profile. Ponderomotive steepening has been observed in a number of PIC simulations in the literature <cit.>, but to the best of our knowledge it has never been experimentally observed, given that the size of such features would be on the sub-micron scale. To be able to see this phenomenon, one needs an intense laser and high spatial resolution interferometry of the target that can resolve half-wavelength-scale features. Mid-IR lasers therefore significantly relax the spatial resolution requirements of such an experiment compared to lasers in the near-IR. With this in mind we looked for ponderomotive steepening in our simulations and found that both the 3 μm wavelength simulation with a_0 = 0.6 and L = 5.77 μm and the 10 μm wavelength simulation with a_0 = 1.09 and L = 19.2 μm do exhibit a classic ponderomotive steepening pattern with multiple peaks in both the ion and electron density, as shown in Fig. <ref>. Ponderomotive steepening was noticed earlier in 800 nm wavelength simulations presented in <cit.>. Ponderomotive steepening was also present in simulations that we published in <cit.>, although for brevity we did not highlight this result. Fig. <ref> provides essentially the first compelling evidence that this effect should persist in mid-IR experiments of this kind.

§ DISCUSSION

We present 30 high-resolution PIC simulations, which comprise a parameter scan over laser wavelengths, intensities, and target scale lengths, designed to explore the physics of intense, normally-incident near-IR (780 nm wavelength) and mid-IR (3 μm and 10 μm wavelength) laser-plasma interactions in the creation of superponderomotive electrons. The simulations support three major findings. The first major conclusion is that the backwards acceleration of electrons in the sub-relativistic regime is more efficient when the scale length is longer than the wavelength, expressed by the condition that the ratio of the scale length to the wavelength satisfies L/λ≳ 1. Second, the onset of plasma phenomena scales with the a_0 value of the incident pulse, which of course takes contributions from both the intensity of the incident light and its wavelength.
Finally, the physical scale of plasma phenomena scales with the wavelength, facilitating the experimental observation of features such as ponderomotive steepening.

The importance of the condition L/λ≳ 1 is found by comparing the energies of near-IR and mid-IR simulations with similar a_0 values and scale lengths determined by L = 1.92 λ, shown in the right panel of Fig. <ref>, to simulations with similar a_0 values in the left panel with a fixed scale length L=1.5 μm. In the sub-relativistic regime (a_0 < 1), we find that the “scaled” scale length simulations with L = 1.92 λ produce ejected electrons that lie well above expectations from <cit.>, and that they exceed that estimate by a greater amount than the corresponding fixed L=1.5 μm simulations (with similar a_0 value lasers) do. Moreover, from examining just the simulations with a fixed L=1.5 μm, one finds that longer wavelength simulations produce less energetic electrons than shorter wavelength simulations. The longer wavelength cases represent values of L/λ = 0.5, 0.15 for λ = 3 μm, 10 μm, respectively. This further reinforces the idea that the L/λ ratio needs to be unity or greater for back-directed electron acceleration to be effective in this a_0 regime.

The Wilks scaling estimate is linear in a_0 in the regime where a_0 > 1 and quadratic in a_0 in the non-relativistic case (a_0 < 1), as discussed earlier. The scaling of energies for backwards-accelerated electrons in the simulations with an extended pre-plasma (L/λ = 1.92) appears to follow a linear power law that extends below the relativistic threshold at a_0 ∼ 1, and thus exceeds the classical ponderomotive scaling in the sub-relativistic regime, as demonstrated by the bifurcation of the Wilks scaling and the trend for backwards-accelerated electrons at a_0 ∼ 1 in the right panel of Fig. <ref>. As mentioned in the caption of that figure, the lowest a_0 simulation did not have accelerated electrons that reached the edge of the simulation space by the end of the simulation, so it is doubtful that this power law extends to a_0 values well below 10^-1, but this lower energy regime will be the focus of future work.

Regarding the ejected electron energy and angle spectra shown in Fig. <ref>, the analytic model actually corresponds more closely to the 1.5 μm scale length results for the mid-IR simulations (bottom panel) than to the L = 1.92 λ results (upper panel), which are quite a bit more energetic than expected from the model. A careful look at the upper and lower panels of Fig. <ref> shows that there is also a larger total number of ejected electrons for the L = 1.92 λ simulations. As explained in <cit.>, this model is purely electromagnetic, without considering plasma effects, which become more important as the ratio L/λ increases.

The importance of the ratio L/λ could be due to a number of factors. It is well known that intense laser interactions are highly sensitive to the assumed scale length; from a physics perspective <cit.> one expects that if L/λ≪ 1, then the laser will reflect off a sharp interface (much like a mirror) and only accelerate electrons that reside on the surface of the target. If instead L / λ≳ 1, then the laser will interact with a more extended region of near-critical plasma that provides a more suitable environment for accelerating large numbers of electrons in the back direction, as we consider here.
It should also be emphasized that the “scaled” scale length targets are significantly larger than the L = 1.5 μm targets, and this extended pre-plasma can decrease the electrostatic potential at the edge of the target where the electrons are ejected. A large pre-plasma layer can also provide a larger return current for escaping electrons. These are possible explanations for both the increased energy in the simulations where L=1.92λ and the larger number of ejected electrons, as discussed earlier and shown in Fig. <ref>. Finally, the ratio L/λ is relevant especially in the sub-relativistic regime (a_0 < 1), and, as discussed in Section <ref> and Fig. <ref>, different plasma phenomena are observed across a_0 regimes. These effects, as well as the confluence of the a_0 value with the ratio L/λ, will be investigated in future work.

As discussed in the previous section and shown in Fig. <ref>, we anticipated the scaling of plasma features with the incident laser wavelength, and observed ponderomotive steepening in the longer wavelength IR simulations with a_0 ∼ 1, with multiple peaks in the density distribution. While it is out of the scope of this paper to design in detail a mid-IR experiment that would create these conditions and convincingly detect these density modulations using interferometry, the result is very encouraging from an experimental perspective. We comment here to say (1) that high-resolution interferometric systems have been demonstrated with soft X-ray wavelengths and used productively for research <cit.>, and (2) that high repetition rate laser experiments with rapidly recovering, highly-reproducible liquid targets can potentially be used with a high acquisition rate interferometric system <cit.> to study how these features develop over time. To be clear, this is not a comment on a particular laser system, but rather an invitation for experimental groups to consider the problem. It remains beyond the scope of this paper to determine how capable the high repetition rate 3 μm wavelength ultra-intense laser system recently purchased by AFRL would be for detecting these density modulations when used with the existing 100 Hz acquisition rate interferometric system <cit.> and coupled to the existing liquid target setup there.

§ CONCLUSIONS

In anticipation of future experiments utilizing ultra-intense, mid-infrared laser pulses and their interaction with dense targets, we used LSP 2D(3v) simulations to explore these interactions over a range of intensities and wavelengths. Similar to earlier investigations with near-IR light <cit.>, we find that intense longer IR wavelength interactions also produce highly superponderomotive electrons. Moreover, the acceleration is much more effective when the pre-plasma scale length is similar in scale to the laser wavelength, or longer. In some cases the typical ejected electron energies exceed ponderomotive expectations by orders of magnitude.

The longer IR simulations also indicate that ponderomotive steepening should occur in experiments of this kind when a_0 ∼ 1 and the pre-plasma scale length is again similar in scale to the laser wavelength. This likewise extends earlier results in the near-IR <cit.> where this phenomenon was noticed in simulations. Importantly, ponderomotive steepening can create multiple peaks and valleys in the ion and electron density profile in the pre-plasma that are well known to be spaced by λ/2 peak-to-peak. To our knowledge these density modulations have never been observed experimentally.
Intense longer IR laser systems coupled with high-resolution interferometry techniques should provide a promising venue for demonstrating this basic laser-plasma interaction process.

This research was sponsored by the Air Force Office of Scientific Research (AFOSR) through program managers Dr. Enrique Parra and Dr. Jean-Luc Cambier. The authors acknowledge significant support from the Department of Defense High Performance Computing Modernization Program (DOD HPCMP) Internship Program and the AFOSR summer faculty program. Supercomputer time was used on the DOD HPC Armstrong and Garnet supercomputers. The authors would also like to thank The Ohio State Department of Physics Information Technology support, specifically, Keith A. Stewart.

§ EXHAUSTIVE LIST OF SIMULATIONS

The following is an exhaustive list of the 2D(3v) PIC simulations presented in this paper.
[email protected]@pku.edu.cn HEDPS, CAPT & LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, P.R. China

Huazhong Tang (corresponding author; Tel: +86-10-62757018; Fax: +86-10-62751801; [email protected]) HEDPS, CAPT & LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, P.R. China; School of Mathematics and Computational Science, Xiangtan University, Hunan Province, Xiangtan 411105, P.R. China

Second-order accurate genuine BGK schemes for the ultra-relativistic flow simulations

This paper presents second-order accurate genuine BGK (Bhatnagar-Gross-Krook) schemes in the framework of the finite volume method for the ultra-relativistic flows. Different from the existing kinetic flux-vector splitting (KFVS) or BGK-type schemes for the ultra-relativistic Euler equations, the present genuine BGK schemes are derived from the analytical solution of the Anderson-Witting model, which is given for the first time and includes the “genuine” particle collisions in the gas transport process. The BGK schemes for the ultra-relativistic viscous flows are also developed, and two examples of ultra-relativistic viscous flow are designed. Several 1D and 2D numerical experiments are conducted to demonstrate that the proposed BGK schemes not only are accurate and stable in simulating ultra-relativistic inviscid and viscous flows, but also have higher resolution at the contact discontinuity than the KFVS or BGK-type schemes.

BGK scheme, Anderson-Witting model, ultra-relativistic Euler equations, ultra-relativistic Navier-Stokes equations

§ INTRODUCTION

Relativistic hydrodynamics (RHD) arises in astrophysics, nuclear physics, plasma physics and other fields. In many radiation hydrodynamics problems of astrophysical interest, the fluid moves at extremely high velocities near the speed of light, and relativistic effects become important. Examples of such flows are supernova explosions, the cosmic expansion, and solar flares. The relativistic hydrodynamical equations are highly nonlinear, making the analytic treatment of practical problems extremely difficult. Numerical simulation is the primary and powerful way to study and understand relativistic hydrodynamics. This work will mainly focus on numerical methods for the special RHDs, where there is no strong gravitational field involved.

The pioneering numerical work may date back to the finite difference code via artificial viscosity for the spherically symmetric general RHD equations in Lagrangian coordinates <cit.> and the finite difference method with the artificial viscosity technique for the multi-dimensional RHD equations in Eulerian coordinates <cit.>. Since the 1990s, the numerical study of the RHDs began to attract considerable attention, and various modern shock-capturing methods with an exact or approximate Riemann solver have been developed for the RHD equations.
Some examples are the local characteristic approach <cit.>, the two-shock approximation solvers <cit.>, the Roe solver <cit.>, the flux corrected transport method <cit.>, the flux-splitting method based on the spectral decomposition <cit.>, the piecewise parabolic method <cit.>, the HLL (Harten-Lax-van Leer) method <cit.>, the HLLC (Harten-Lax-van Leer-Contact) method <cit.> and the Steger-Warming flux vector splitting method <cit.>. The analytical solution of the Riemann problem in relativistic hydrodynamics was studied in <cit.>. Some other higher-order accurate methods have also been well studied in the literature, e.g. the ENO (essentially non-oscillatory) and weighted ENO methods <cit.>, the discontinuous Galerkin (DG) method <cit.>, the adaptive moving mesh methods <cit.>, the Runge-Kutta DG methods with WENO limiter <cit.>, the direct Eulerian GRP schemes <cit.>, and the local evolution Galerkin method <cit.>. Recently some physical-constraints-preserving (PCP) schemes were developed for the special RHD equations. They are the high-order accurate PCP finite difference weighted essentially non-oscillatory (WENO) schemes and discontinuous Galerkin (DG) methods proposed in <cit.>. The readers are also referred to the early review articles <cit.> as well as references therein.

The gas-kinetic schemes present a gas evolution process from a kinetic scale to a hydrodynamic scale, where both inviscid and viscous fluxes are recovered from moments of a single time-dependent gas distribution function <cit.>. The development of gas-kinetic schemes, such as the kinetic flux vector splitting (KFVS) and Bhatnagar-Gross-Krook (BGK) schemes, has attracted much attention, and significant progress has been made in non-relativistic hydrodynamics. They utilize the well-known connection that the macroscopic governing equations are the moments of the Boltzmann equation whenever the distribution function is at equilibrium. The KFVS schemes are constructed by applying the upwind technique directly to the collisionless Boltzmann equation, see e.g. <cit.>. Due to the lack of collisions in the numerical flux calculations, the KFVS schemes smear the solutions, especially the contact discontinuity. To overcome this problem, the BGK schemes are constructed by taking into account the particle collisions in the whole gas evolution process within a time step, see e.g. <cit.>. Moreover, due to their specific derivation, they are also able to present the accurate Navier-Stokes solution in the smooth flow regime and have favorable shock capturing capability in the shock region.

The kinetic beam scheme was first proposed for relativistic gas dynamics in <cit.>. After that, the kinetic schemes for the ultra-relativistic Euler equations were developed in <cit.>. The BGK-type schemes <cit.> were extended to the ultra-relativistic Euler equations in <cit.> in order to reduce the numerical dissipation. Those kinetic schemes resulted directly from the moments of the relativistic Jüttner equilibrium distribution without including the “genuine" particle collisions in the gas transport process.

This paper will develop second-order genuine BGK schemes for the ultra-relativistic inviscid and viscous flow simulations. It is organized as follows. Section <ref> introduces the special relativistic Boltzmann equation and discusses how to recover some macroscopic quantities from the kinetic theory. Section <ref> presents the ultra-relativistic hydrodynamical equations through the Chapman-Enskog expansion.
Section <ref> develops second-order accurate genuine BGK schemes for the 1D and 2D ultra-relativistic Euler equations and the 2D ultra-relativistic Navier-Stokes equations. Section <ref> gives several numerical experiments to demonstrate the accuracy, robustness and effectiveness of the proposed schemes in simulating inviscid and viscous ultra-relativistic fluid flows. Section <ref> concludes the paper.

§ PRELIMINARIES AND NOTATIONS

In the special relativistic kinetic theory of gases <cit.>, a microscopic gas particle is characterized by the four-dimensional space-time coordinates (x^α) = (x^0,x⃗) and four-momentum vectors (p^α) = (p^0,p⃗), where x^0 = ct, c denotes the speed of light in vacuum, t and x⃗ are the time and 3D spatial coordinates, respectively, and the Greek index α runs from 0, 1, 2, 3. Besides the contravariant notation (e.g. p^α), the covariant notation such as p_α will also be used in the following, while both notations p^α and p_α are related by

p_α = g_αβp^β, p^α = g^αβp_β,

where the Einstein summation convention over repeated indices has been used, (g^αβ) is the Minkowski space-time metric tensor, chosen as (g^αβ) = diag{1, -1, -1, -1}, while (g_αβ) denotes the inverse of (g^αβ). For a free relativistic particle, the relativistic energy-momentum relation (aka “on-shell” or “mass-shell” condition) E^2-|p⃗|^2 c^2=m^2 c^4 holds, where m denotes the mass of each structure-less particle, which is assumed to be the same for all particles. The “mass-shell” condition can be rewritten as p^αp_α=m^2c^2 if putting p^0= c^-1E=√(|p⃗|^2+m^2c^2), which becomes p^0=|p⃗| in the ultra-relativistic limit, i.e. m→0.

Similar to the non-relativistic case, the relativistic Boltzmann equation describes the evolution of the one-particle distribution function f(x⃗,t,p⃗) in the phase space spanned by the space-time coordinates x^α and momentum p^α of particles. It reads

p^α∂ f/∂ x^α = Q(f,f),

where Q(f,f) denotes the collision term and depends on the product of distribution functions of two particles at collision. In the literature, there exist several simple collision models. The Anderson-Witting model <cit.>

p^α∂ f/∂ x^α = -U_α p^α/τ c^2(f-g),

is similar to the BGK model in the non-relativistic kinetic theory and will be considered in this paper, where τ is the relaxation time. In the Landau-Lifshitz frame, the hydrodynamic four-velocities U_α are defined by U_βT^αβ = ε g^αβU_β, which implies that (ε, U_α) is a generalized characteristic pair of (T^αβ,g^αβ), where ε and T^αβ are the energy density and energy-momentum tensor, respectively, and g=g(x⃗,t,p⃗) denotes the distribution function at the local thermodynamic equilibrium, the so-called Jüttner equilibrium (or relativistic Maxwellian) distribution. In the ultra-relativistic case, it becomes <cit.>

g= n c^3/8π k^3T^3exp(-U_αp^α/kT)= n c^3/8π k^3T^3exp(-|p⃗|/kT(U_0-∑_i=1^3U_ip^i/|p⃗|)),

where n and T denote the number density and thermodynamic temperature, respectively, and k is the Boltzmann constant. The Anderson-Witting model (<ref>) tends to the BGK model in the non-relativistic limit, and the collision term -U_α p^α/τ c^2(f-g) satisfies the following identities

∫_ℝ^3U_α p^α/τ c^2(f-g)Ψ⃗d^3p⃗/p^0 = 0,Ψ⃗ = (1, p^i, p^0)^T,

which imply the conservation of particle number, momentum and energy

∂_αN^α = 0,∂_βT^αβ = 0,

where the particle four-flow N^α and the energy-momentum tensor T^αβ are related to the distribution f by

N^α = c∫_ℝ^3p^αfd^3p⃗/p^0, T^αβ=c∫_ℝ^3p^αp^βfd^3p⃗/p^0.
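As a minimal numerical sanity check of these moment definitions, the sketch below evaluates N^0 and T^00 for an ultra-relativistic Jüttner distribution at rest (U^α = (c,0,0,0)), in units where c = k = 1; the expected values N^0 = n and T^00 = ε = 3nT anticipate the formulas of the next section, and the function name is our own.

```python
import numpy as np

def rest_frame_moments(n, T, pmax_over_T=40.0, npts=4000):
    """N^0 = ∫ f d^3p and T^00 = ∫ p^0 f d^3p for the ultra-relativistic
    Jüttner distribution g = n/(8 pi T^3) exp(-|p|/T) at rest (c = k = 1)."""
    p = np.linspace(1e-8, pmax_over_T * T, npts)     # grid in |p| = p^0
    g = n / (8.0 * np.pi * T**3) * np.exp(-p / T)    # at rest, U_a p^a = |p|
    shell = 4.0 * np.pi * p**2                       # spherical shell measure
    N0 = np.trapz(g * shell, p)                      # c ∫ p^0 f d^3p / p^0
    T00 = np.trapz(p * g * shell, p)                 # c ∫ p^0 p^0 f d^3p / p^0
    return N0, T00

N0, T00 = rest_frame_moments(n=1.0, T=0.5)
print(N0, T00)   # ≈ 1.0 and ≈ 1.5 = 3 n T, i.e. N^0 = n and T^00 = ε
```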
In the Landau-Lifshitz decomposition, both N^α and T^αβ are rewritten as follows

N^α = n U^α + n^α,T^αβ = c^-2ε U^αU^β - Δ^αβ(p+ϖ) + π^αβ,

where Δ^αβ is defined by Δ^αβ = g^αβ - 1/c^2U^αU^β, satisfying Δ^αβ U_β = 0, and the number density n, particle-diffusion current n^α, energy density ε, and shear-stress tensor π^αβ can be calculated by

n= 1/c^2 U_αN^α = 1/c∫_ℝ^3Efd^3p⃗/p^0,n^α = Δ^α_βN^β = c∫_ℝ^3p^<α>fd^3p⃗/p^0,ε = 1/c^2U_αU_βT^αβ = 1/c∫_ℝ^3E^2fd^3p⃗/p^0,π^αβ = Δ^αβ_μνT^μν = c∫_ℝ^3p^<αβ>fd^3p⃗/p^0,

and the sum of the thermodynamic pressure p and the bulk viscous pressure ϖ is

p +ϖ= -1/3Δ_αβT^αβ = 1/3c∫_ℝ^3(E^2-m^2c^4)fd^3p⃗/p^0.

Here E=U_αp^α, p^<α>=Δ^α_γp^γ, p^<αβ>=Δ^αβ_γδp^γp^δ, and

Δ^αβ_μν=1/2(Δ^α_μΔ^β_ν + Δ^β_μΔ^α_ν-2/3Δ_μνΔ^αβ).

The quantities n^α, ϖ, and π^αβ become zero at the local thermodynamic equilibrium f=g.

The following gives a general recovery procedure for the admissible primitive variables n, u⃗, and T from a nonnegative distribution f(x⃗, t, p⃗), where u⃗ is the macroscopic velocity in the (x^i) space. Such a recovery procedure will be useful in our BGK scheme.

For any nonnegative distribution f(x⃗,t,p⃗) which is not identically zero, the number density n, velocity u⃗ and temperature T can be uniquely obtained as follows:

* T^αβ is positive definite and (T^αβ,g^αβ) has only one positive generalized eigenvalue, i.e. the energy density ε, and U_α is the corresponding generalized eigenvector satisfying U_0=√(U^2_1+U^2_2+U^2_3+c^2). Thus, the macroscopic velocity u⃗ can be calculated by u⃗=-c(U^-1_0U_1,U^-1_0U_2,U^-1_0U_3)^T, satisfying |u⃗|<c and

(U_α) = (γ c,-γu⃗), (U^α) = (γ c,γu⃗),

where γ=(1-c^-2|u⃗|^2)^-1/2 denotes the Lorentz factor.

* The number density n is calculated by n= c^-2 U_αN^α>0.

* The temperature T solves the nonlinear algebraic equation ε =n m c^2(G(ζ)-ζ^-1), where ζ =m c^2/kT, G(ζ)=K_3(ζ)/K_2(ζ), and K_ν(ζ) is the modified Bessel function of the second kind, defined by

K_ν(ζ):=∫_0^∞cosh(νϑ)exp(-ζcoshϑ)dϑ, ν≥ 0.

In the ultra-relativistic case, K_2(ζ) and K_3(ζ) reduce to 2/ζ^2 and 8/ζ^3, respectively, so that one has G(ζ)=4/ζ, and then ε = 3k nT.

* Since the nonnegative distribution f(x⃗,t,p⃗) is not identically zero, using the relation (<ref>) gives

X⃗^TT^αβX⃗ = cX⃗^T∫_ℝ^3p^αp^βfd^3p⃗/p^0X⃗=c∫_ℝ^3x_αp^αp^βx_βfd^3p⃗/p^0= c∫_ℝ^3(x_αp^α)^2fd^3p⃗/p^0>0,

for any nonzero vector X⃗=(x_0,x_1,x_2,x_3)^T∈ℝ^4. Thus, the matrix T^αβ is positive definite. Thanks to g^αβ = diag{1,-1,-1,-1} and (<ref>), the matrix-pair (T^αβ,g^αβ) has a unique positive generalized eigenvalue ε, satisfying

0<U_αT^αβU_β = ε U_αg^αβU_β,

which implies U^2_0>U^2_1+U^2_2+U^2_3. Thus, one can obtain U_0=√(U^2_1+U^2_2+U^2_3+c^2) via multiplying (U_α) by the scaling constant c(U^2_0-U^2_1-U^2_2-U^2_3)^-1/2. As a result, the macroscopic velocity u⃗ can be calculated by u⃗=-c(U^-1_0U_1,U^-1_0U_2,U^-1_0U_3)^T, satisfying |u⃗| = cU^-1_0√(U^2_1+U^2_2+U^2_3)<c.

* For U_i∈ℝ and p^i∈ℝ, i=1,2,3, using the Cauchy-Schwarz inequality gives

U⃗·p⃗ ⩽|U⃗||p⃗| < √(U^2_1+U^2_2+U^2_3+c^2)· p^0 =U_0p^0,

which implies E=U_αp^α>0. Thus one has

n= 1/c^2 U_αN^α = 1/c∫_ℝ^3Efd^3p⃗/p^0>0.

* It is obvious that the positive temperature T can be obtained from (<ref>).

§ ULTRA-RELATIVISTIC HYDRODYNAMIC EQUATIONS

This section gives the ultra-relativistic hydrodynamic equations, which can be derived from the Anderson-Witting model by using the Chapman-Enskog expansion.
For the sake of convenience, units in which the speed of light and the Boltzmann constant are equal to one will be used here and hereafter.

§.§ Euler equations

In the ultra-relativistic limit, the macroscopic variables n, ε, p are related to g by

n= ∫_ℝ^3Egd^3p⃗/|p⃗|,ε = ∫_ℝ^3E^2gd^3p⃗/|p⃗|=3 nT,p=1/3∫_ℝ^3E^2gd^3p⃗/|p⃗|= 1/3ε.

Taking the zeroth-order Chapman-Enskog expansion f=g and using the conclusion in Remark <ref>, the ultra-relativistic Euler equations are derived as follows

∂W⃗/∂ t + ∑^3_k=1∂F⃗^k(W⃗)/∂ x^k = 0,

where

W⃗ = (N^0, T^0i, T^00)^T= (nU^0,nh U^0U^i,nh U^0U^0 - p )^T,

and

F⃗^k(W⃗) = ( N^k,T^ki,T^k0)^T= (nU^k,nh U^kU^i + pδ^ik,nh U^kU^0)^T.

Here i=1,2,3 and h=4T denotes the specific enthalpy. For the given conservative vector W⃗, one can get the primitive variables n, U^k and p by <cit.>

p= 1/3(-T^00+√(4(T^00)^2-3∑^3_i=1(T^0i)^2)),U^i= T^0i/√(4p(p+T^00)),n = N^0/√(1+∑_i=1^3(U^i)^2),i=1,2,3.

§.§ Navier-Stokes equations

Taking the first-order Chapman-Enskog expansion

f=g(1-τ/U_αp^αφ),

with

φ = -p_αp_β/T∇^<αU^β> + p_α/T^2(U_βp^β-h)(∇^αT-T/ nh∇^αp),

where ∇^α = Δ^αβ∂_β and ∇^<αU^β> = Δ^αβ_γδ∇^γU^δ, then (<ref>), (<ref>) and (<ref>) give

n^α = -λ/h(∇^αT-T/ nh∇^αp),π^αβ = 2μ∇^<αU^β>,ϖ = 0,

where λ=4/3Tpτ and μ=4/5pτ. Based on those, the ultra-relativistic Navier-Stokes equations (see <cit.>) can be obtained as follows

∂W⃗/∂ t + ∑^3_k=1∂F⃗^k(W⃗)/∂ x^k = 0,

where

W⃗ = [N^0; T^0i; T^00 ]= [ nU^0 - λ/h(∇^0 T-T/ nh∇^0p);nh U^0U^i + 2μ∇^<0U^i>;nh U^0U^0 - p + 2μ∇^<0U^0> ],

and

F⃗^k(W⃗) = [N^k; T^ki; T^k0 ]= [nU^k - λ/h(∇^k T-T/ nh∇^kp); nh U^kU^i + pδ^ik + 2μ∇^<kU^i>; nh U^kU^0 + 2μ∇^<kU^0> ].

It shows that one cannot recover the values of the primitive variables n, u⃗ and T only from the given conservative vector W⃗. In practice, the values of n, u⃗ and T have to be recovered from the given W⃗ and F⃗^k(W⃗), or N^α and T^αβ, by using Theorem <ref>.

§ NUMERICAL SCHEMES

This section develops second-order accurate genuine BGK schemes for the 1D and 2D ultra-relativistic Euler and Navier-Stokes equations. The BGK schemes are derived from the analytical solution of the Anderson-Witting model (<ref>), which is given for the first time and includes the “genuine” particle collisions in the gas transport process.

§.§ 1D Euler equations

Consider the 1D ultra-relativistic Euler equations with u⃗=(u,0,0)^T as

∂W⃗/∂ t + ∂F⃗(W⃗)/∂ x = 0,

where

W⃗ = (nU^0,nh U^0U^1,nh U^0U^0 - p )^T,F⃗(W⃗) = (nU^1,nh U^1U^1 + p,nh U^0U^1)^T.

It is strictly hyperbolic because there are three real and distinct eigenvalues of the Jacobian matrix A(W⃗)=∂F⃗/∂W⃗ <cit.>

λ_1=u(1-c_s^2)-c_s(1-u^2)/1-u^2c_s^2,λ_2=u,λ_3=u(1-c_s^2)+c_s(1-u^2)/1-u^2c_s^2,

where c_s=1/√(3) is the speed of sound. Divide the spatial domain into a uniform mesh with the step size Δ x and the jth cell I_j = (x_j-1/2, x_j+1/2), where x_j+1/2 = 1/2(x_j+x_j+1) and x_j = jΔ x, j∈ℤ. The time interval [0,T] is also divided into a (non-uniform) mesh {t_n+1=t_n+Δ t_n, t_0=0, n⩾0}, where the step size Δ t_n is determined by Δ t_n = CΔ x/max_j ϱ̅_j, the constant C denotes the CFL number, and ϱ̅_j denotes a suitable approximation of the spectral radius of A(W⃗) within the cell I_j. For the given approximate cell-average values {W⃗̅⃗^n_j}, i.e.

W⃗̅⃗^n_j ≈1/Δ x∫_I_jW⃗(x,t_n)dx,

reconstruct a piecewise linear function as follows

W⃗_h(x,t_n)=∑W⃗^n_j(x)χ_j(x), W⃗^n_j(x):=W⃗̅⃗^n_j + W⃗^n,x_j(x-x_j),

where W⃗^n,x_j is the approximate slope in the cell I_j obtained by using some slope limiter and χ_j(x) denotes the characteristic function of I_j.
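The closed-form recovery (<ref>) above is straightforward to implement; the sketch below is a minimal version (function name ours), together with a round-trip test that builds W⃗ from chosen primitives and recovers them.

```python
import numpy as np

def primitives_from_conserved(N0, T0i, T00):
    """Recover (n, U^i, p, T) from (N^0, T^0i, T^00) for the ultra-relativistic
    Euler equations via the closed-form relations above (h = 4T, p = nT)."""
    T0i = np.asarray(T0i, dtype=float)
    p = (-T00 + np.sqrt(4.0 * T00**2 - 3.0 * (T0i @ T0i))) / 3.0
    U = T0i / np.sqrt(4.0 * p * (p + T00))     # U^i = gamma u^i
    n = N0 / np.sqrt(1.0 + U @ U)
    return n, U, p, p / n                      # T = p / n

# Round-trip check with n = 1, T = 0.5, u = (0.3, 0, 0):
n, T = 1.0, 0.5
u = np.array([0.3, 0.0, 0.0])
gam = 1.0 / np.sqrt(1.0 - u @ u)
U0, Ui = gam, gam * u
W = (n * U0, 4.0 * n * T * U0 * Ui, 4.0 * n * T * U0 * U0 - n * T)
print(primitives_from_conserved(*W))  # -> n = 1, U ≈ (0.3145, 0, 0), p = 0.5
```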
In the 1D case, the Anderson-Witting model (<ref>) reduces to

p^0∂ f/∂ t + p^1∂ f/∂ x = U_αp^α/τ(g-f),

whose analytical solution is given by

f(x,t,p⃗) =∫_0^tg(x',t',p⃗)exp(-∫_t'^tU_α(x”,t”)p^α/p^0τdt”) U_α(x',t')p^α/p^0τdt'+exp(-∫_0^tU_α(x',t')p^α/τ p^0dt')f_0(x-v_1t,p⃗),

where v_1=p^1/p^0 is the velocity of the particle in the x direction, x'=x-v_1(t-t') and x”=x-v_1(t-t”) are the particle trajectories, and f_0 is the initial particle velocity distribution function, i.e. f(x,0,p⃗)=f_0(x,p⃗). Taking the moments of (<ref>) and integrating them over the space-time cell I_j×[t_n,t_n+1) yield

∫_t_n^t_n+1∫_I_j∫_ℝ^3Ψ⃗(p^0∂ f/∂ t + p^1∂ f/∂ x -U_αp^α/τ(g-f))dΞ dxdt=0,

where dΞ = d^3p⃗/|p⃗|. Using the conservation constraints (<ref>) gives

∫_I_j ∫_ℝ^3Ψ⃗ p^0f(x,t_n+1,p⃗)dΞ dx =∫_I_j∫_ℝ^3Ψ⃗ p^0f(x,t_n,p⃗)dΞ dx - ∫_t_n^t_n+1∫_ℝ^3Ψ⃗ p^1(f(x_j+1/2,t,p⃗) - f(x_j-1/2,t,p⃗)) dΞ dt,

which is the starting point of our 1D second-order accurate BGK scheme. Replacing the distribution f(x_j±1/2,t,p⃗) in (<ref>) with an approximate distribution f̂(x_j±1/2,t,p⃗), one gets the following finite volume scheme

W⃗̅⃗^n+1_j = W⃗̅⃗^n_j - Δ t_n/Δ x(F̂⃗̂^n_j+1/2 - F̂⃗̂^n_j-1/2),

where the numerical flux F̂⃗̂^n_j+1/2 is given by

F̂⃗̂^n_j+1/2=1/Δ t_n∫_t_n^t_n+1∫_ℝ^3Ψ⃗ p^1f̂(x_j+1/2,t,p⃗)dΞ dt,

with

f̂(x_j+1/2,t,p⃗) =∫_t_n^tg_h(x',t',p⃗)exp(-∫_t'^tU_α(x”,t”)p^α/p^0τdt”) U_α(x',t')p^α/p^0τdt'+exp(-∫_t_n^tU_α(x',t')p^α/p^0τdt')f_h,0(x_j+1/2-v_1(t-t_n),p⃗),

where v_1=p^1/p^0, x'=x_j+1/2-v_1(t-t'), x”=x_j+1/2-v_1(t-t”), f_h,0(x_j+1/2-v_1(t-t_n),p⃗)≈ f_0(x_j+1/2-v_1(t-t_n),p⃗) and g_h(x',t',p⃗)≈ g(x',t',p⃗). It is worth noting that it is very expensive to evaluate U_α(x”,t”) and U_α(x',t') on the right hand side of (<ref>). In practice, U_α(x”,t”) and U_α(x',t') in the first term may be approximated as U_α,j+1/2^n, while U_α(x',t') in the second term may be simplified as U_α,j+1/2,L^n or U_α,j+1/2,R^n depending on the sign of v_1, as will be given in Section <ref>. The remaining tasks are to derive the approximate initial velocity distribution function f_h,0(x_j+1/2-v_1(t-t_n),p⃗) and the equilibrium velocity distribution function g_h(x',t',p⃗).

§.§.§ Equilibrium distribution g_0 at the point (x_j+1/2,t_n)

At the cell interface x=x_j+1/2, (<ref>) gives the following left and right limiting values

W⃗^n_j+1/2,L := W⃗_h(x_j+1/2-0,t_n)=W⃗^n_j(x_j+1/2), W⃗^n_j+1/2,R := W⃗_h(x_j+1/2+0,t_n)=W⃗^n_j+1(x_j+1/2), W⃗^n,x_j+1/2,L := dW⃗_h/dx(x_j+1/2-0,t_n)=dW⃗^n_j/dx(x_j+1/2), W⃗^n,x_j+1/2,R := dW⃗_h/dx(x_j+1/2+0,t_n)=dW⃗^n_j+1/dx(x_j+1/2).

Using (<ref>), W⃗^n_j+1/2,L and W⃗^n_j+1/2,R gives the Jüttner distributions at the left and right of the cell interface x=x_j+1/2 as follows

g_L = n _j+1/2,L/8π T_j+1/2,L^3e^-U_α,j+1/2,Lp^α/T_j+1/2,L, g_R = n _j+1/2,R/8π T_j+1/2,R^3e^-U_α,j+1/2,Rp^α/T_j+1/2,R,

and the particle four-flow N^α and the energy-momentum tensor T^αβ at the point (x_j+1/2,t_n)

(N^0,T^01,T^00)_j+1/2^n,T :=∫_ℝ^3∩p^1>0Ψ⃗p^0g_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^0g_RdΞ, (N^1,T^11,T^01)_j+1/2^n,T :=∫_ℝ^3∩p^1>0Ψ⃗p^1g_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^1g_RdΞ.

Using those and Theorem <ref> calculates the macroscopic quantities n ^n_j+1/2, T^n_j+1/2, and U_α,j+1/2^n, and then gives the Jüttner distribution function at the point (x_j+1/2,t_n) as follows

g_0 = n ^n_j+1/2/8π (T^n_j+1/2)^3exp(-U_α,j+1/2^np^α/T^n_j+1/2),

which will be used to derive the equilibrium velocity distribution g_h(x,t,p⃗), see Section <ref>.
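Before filling in the flux integrand, it may help to see the finite volume update (<ref>) as code; the sketch below is purely schematic, with interface_flux standing in as a placeholder for the BGK flux F̂⃗̂ assembled from f̂ (both names are ours).

```python
import numpy as np

def fv_step(Wbar, dt, dx, interface_flux):
    """One step of Wbar_j^{n+1} = Wbar_j^n - dt/dx (Fhat_{j+1/2} - Fhat_{j-1/2}).
    Wbar has shape (ncell, 3); interface_flux(Wbar) must return the fluxes at
    the ncell-1 interior interfaces, shape (ncell-1, 3)."""
    Fhat = interface_flux(Wbar)
    Wnew = Wbar.copy()
    Wnew[1:-1] -= dt / dx * (Fhat[1:] - Fhat[:-1])  # boundary cells set by BCs
    return Wnew
```

The only non-standard ingredient of the genuine BGK scheme is that the interface flux is the time integral (<ref>) of the moments of f̂, rather than the flux of a Riemann solution; the update itself is the usual conservative one.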
§.§.§ Initial distribution function f_h,0(x,p⃗)

Assuming that f(x,t,p⃗) and g(x,t,p⃗) are sufficiently smooth and borrowing the idea of the Chapman-Enskog expansion, f(x,t,p⃗) is supposed to be expanded as follows

f(x,t,p⃗)=g-τ/U_α p^α(p^0 g_t+p^1g_x)+O(τ^2)=:g(1-τ/U_α p^α(p^0 A+p^1 a))+O(τ^2),

with

A=A_1+A_2p^1+A_3p^0,a=a_1+a_2p^1+a_3p^0.

The conservation constraints (<ref>) give the constraints on A and a

∫_ℝ^3Ψ⃗(p^0A+p^1a)gdΞ=∫_ℝ^3Ψ⃗(p^0g_t+p^1g_x)dΞ =1/τ∫_ℝ^3Ψ⃗U_αp^α(g-f)dΞ=0.

Setting t=t_n and using (<ref>) and the Taylor series expansion of f(x,t,p⃗) with respect to x from both sides of the cell interface x=x_j+1/2 give the following approximate initial non-equilibrium distribution function

f_h,0(x,t_n,p⃗):={ g_L(1-τ/U_α,Lp^α(p^0A_L+p^1a_L)+a_Lx̃), x̃<0,g_R(1-τ/U_α,Rp^α(p^0A_R+p^1a_R)+a_Rx̃), x̃>0, .

where x̃=x-x_j+1/2, g_L and g_R are given in (<ref>), and (a_L,A_L) and (a_R,A_R) are considered as the left and right limits of (a,A) at the cell interface x=x_j+1/2, respectively. The slopes a_L and a_R come from the spatial derivative of the Jüttner distribution and have unique correspondences with the slopes of the conservative variables W⃗ by

<a_ωp^0>=W⃗_j+1/2,ω^n,x, where <a_ω>:=∫_ℝ^3a_ωg_ωΨ⃗dΞ,ω=L,R.

Those correspondences form the linear system for the unknown a⃗_ω:=(a_ω,1, a_ω,2, a_ω,3)^T

M_0^ωa⃗_ω=W⃗_j+1/2,ω^n,x,

where the coefficient matrix M_0^ω is given by

M_0^ω=∫_ℝ^3p^0g_ωΨ⃗Ψ⃗^TdΞ,ω=L, R.

Using the conservation constraints (<ref>) and a_ω gives the linear system for A_ω as follows

<a_ωp^1+A_ωp^0>=0,

which can be cast into the following form

M_0^ωA⃗_ω =-M_1^ωa⃗_ω,

with

M_1^ω=∫_ℝ^3g_ωp^1Ψ⃗Ψ⃗^TdΞ,ω=L,R.

The rest is to calculate all elements of M_0 and M_1, whose superscript L or R has been omitted for the sake of convenience. In the ultra-relativistic limit, those can be obtained exactly. Because p^0=|p⃗|, the triple integrals in M_0 and M_1 can be simplified by using the polar coordinate transformation

p^1 = |p⃗|ξ,p^2=|p⃗|√(1-ξ^2)sinφ, p^3=|p⃗|√(1-ξ^2)cosφ,ξ∈[-1,1], φ∈[-π,π],

which implies dΞ=|p⃗|d|p⃗|dξ dφ. In fact, the above transformation converts the triple integrals in the matrices M_0 and M_1 into a single integral with respect to |p⃗| and a double integral with respect to ξ and φ. On the other hand, in the 1D case, the integrands do not depend on the variable φ, so the double integral further reduces to a single integral with respect to ξ, which can be exactly calculated. Those lead to

M_0 =∫_ℝ^3p^0gΨ⃗Ψ⃗^TdΞ =[ ∫^1_-1Φ(x,ξ)dξ ∫^1_-1ξΨ(x,ξ)dξ ∫^1_-1Ψ(x,ξ)dξ; ∫^1_-1ξΨ(x,ξ)dξ ∫^1_-1ξ^2Υ(x,ξ)dξ ∫^1_-1ξΥ(x,ξ)dξ; ∫^1_-1Ψ(x,ξ)dξ ∫^1_-1ξΥ(x,ξ)dξ ∫^1_-1Υ(x,ξ)dξ ]=[ nU^0 4 nT U^1U^0 nT(4U^1U^1+3); 4 nT U^1U^0 4 nT^2 U^0(6U^1U^1+1) 4 nT^2 U^1(6U^1U^1+5); nT(4U^1U^1+3) 4 nT^2 U^1(6U^1U^1+5) 12 nT^2 U^0(2U^1U^1 + 1) ],

and

M_1 =∫_ℝ^3p^1gΨ⃗Ψ⃗^TdΞ =[ ∫^1_-1ξΦ(x,ξ)dξ ∫^1_-1ξ^2Ψ(x,ξ)dξ ∫^1_-1ξΨ(x,ξ)dξ; ∫^1_-1ξ^2Ψ(x,ξ)dξ ∫^1_-1ξ^3Υ(x,ξ)dξ ∫^1_-1ξ^2Υ(x,ξ)dξ; ∫^1_-1ξΨ(x,ξ)dξ ∫^1_-1ξ^2Υ(x,ξ)dξ ∫^1_-1ξΥ(x,ξ)dξ ]=[ nU^1 nT (4U^1U^1+1) 4 nT U^1U^0; nT (4U^1U^1+1) 12 nT^2 U^1(2U^1U^1+1) 4 nT^2 U^0(6U^1U^1+1); 4 nT U^1U^0 4 nT^2 U^0(6U^1U^1+1) 4 nT^2 U^1(6U^1U^1 + 5) ],

where

Φ(x,ξ)=1/2 n (x)/(U^0(x)-ξ U^1(x))^3, Ψ(x,ξ)=3/2( nT)(x)/(U^0(x)-ξ U^1(x))^4, Υ(x,ξ)= 6( nT^2)(x)/(U^0(x)-ξ U^1(x))^5.

§.§.§ Equilibrium velocity distribution g_h(x,t,p⃗)

Using W⃗_0:=W⃗_j+1/2^n derived in Section <ref> and the approximate cell average values W⃗̅⃗_j+1 and W⃗̅⃗_j, reconstruct a cell-vertex based linear polynomial around the cell interface x=x_j+1/2 as follows

W⃗_0(x)=W⃗_0+W⃗_0^x(x-x_j+1/2),

where W⃗_0^x=1/Δ x(W⃗̅⃗_j+1-W⃗̅⃗_j).
Again, the Taylor series expansion of g at the cell interface x=x_j+1/2 gives

g_h(x,t,p⃗)=g_0(1+a_0(x-x_j+1/2)+A_0(t-t_n)),

where (a_0,A_0) are the values of (a,A) at the point (x_j+1/2,t_n). Similarly, the slope a_0 comes from the spatial derivative of the Jüttner distribution and has a unique correspondence with the slope of the conservative variables W⃗ by

<a_0 p^0>=W⃗_0^x,

and then the conservation constraints and a_0 give the following linear system

<A_0p^0+a_0p^1>=0.

Those can be rewritten as

M_0^0a⃗_0=W⃗_0^x, M_0^0A⃗_0=- M_1^0a⃗_0,

where a⃗_0=(a_0,1,a_0,2, a_0,3)^T, A⃗_0=(A_0,1,A_0,2, A_0,3)^T, and M_0^0 and M_1^0 can be calculated by (<ref>) and (<ref>) with n ^n_j+1/2, T^n_j+1/2 and U^n,α_j+1/2 in place of n, T and U^α. Those systems can be solved by using the subroutine for (<ref>) and (<ref>).

Up to now, all parameters in the initial gas distribution function f_h,0 and the equilibrium state g_h have been determined. Substituting (<ref>) and (<ref>) into (<ref>) gives our distribution function f̂ at the cell interface x=x_j+1/2 as follows

f̂(x_j+1/2,t,p⃗)=g_0(1-exp(-U_α,j+1/2^np^α/p^0τ(t-t_n)))+g_0a_0v_1((t-t_n+p^0τ/U_α,j+1/2^n p^α)exp(-U_α,j+1/2^np^α/p^0τ(t-t_n))-p^0τ/U_α,j+1/2^np^α)+g_0A_0((t-t_n)-p^0τ/U_α,j+1/2^np^α(1-exp(-U_α,j+1/2^np^α/p^0τ( t-t_n))))+H[v_1]g_L(1-τ/U_α,j+1/2,L^np^α(p^0A_L+p^1a_L)-a_Lv_1(t-t_n)) exp(-U_α,j+1/2,L^np^α/p^0τ(t-t_n)) +(1-H[v_1])g_R(1-τ/U_α,j+1/2,R^np^α (p^0A_R+p^1a_R)-a_Rv_1(t-t_n))exp(-U_α,j+1/2,R^np^α/p^0τ(t-t_n)),

where H[x] is the Heaviside function defined by

H[x]= 0,x<0,1,x⩾0.

Finally, substituting (<ref>) into the integral (<ref>) yields the numerical flux F̂⃗̂^n_j+1/2.

§.§ 2D Euler equations

This section extends the above BGK scheme to the 2D ultra-relativistic Euler equations

∂W⃗/∂ t + ∂F⃗(W⃗)/∂ x + ∂G⃗(W⃗)/∂ y= 0,

where

W⃗ = [nU^0; nh U^0U^1; nh U^0U^2; nh U^0U^0 - p ], F⃗(W⃗) = [nU^1; nh U^1U^1 + p; nh U^2U^1; nh U^0U^1 ], G⃗(W⃗) = [nU^2; nh U^1U^2; nh U^2U^2 + p; nh U^0U^2 ],

with h=4T, p= nT, and u⃗ = (u_1, u_2, 0)^T. The four real eigenvalues of the Jacobian matrices A_1(W⃗)=∂F⃗/∂W⃗ and A_2(W⃗)=∂G⃗/∂W⃗ are given as follows

λ_k^(1) =u_k(1-c_s^2)-c_sγ(u⃗)√(1-u_k^2-(|u⃗|^2-u_k^2)c_s^2)/1-|u⃗|^2c_s^2, λ_k^(2) =λ_k^(3)=u_k, λ_k^(4) =u_k(1-c_s^2)+c_sγ(u⃗)√(1-u_k^2 -(|u⃗|^2-u_k^2)c_s^2)/1-|u⃗|^2c_s^2,

where k=1,2, and c_s = 1/√(3) is the speed of sound. Divide the spatial domain Ω into a rectangular mesh with the cell I_i,j={(x,y)|x_i-1/2<x<x_i+1/2, y_j-1/2<y<y_j+1/2}, where x_i+1/2 = 1/2(x_i+x_i+1), y_j+1/2=1/2(y_j+y_j+1), x_i=iΔ x, y_j = jΔ y, and i,j∈ℤ. The time interval [0,T] is also partitioned into a (non-uniform) mesh t_n+1=t_n+Δ t_n, t_0=0, n⩾0, where the time step size Δ t_n is determined by

Δ t_n = C min{Δ x, Δ y}/max_ij{ϱ̅^1_i,j,ϱ̅^2_i,j},

the constant C denotes the CFL number, and ϱ̅^k_i,j denotes the approximation of the spectral radius of A_k(W⃗) over the cell I_i,j, k=1,2.

The 2D Anderson-Witting model becomes

p^0∂ f/∂ t + p^1∂ f/∂ x + p^2∂ f/∂ y= U_αp^α/τ(g-f),

whose analytical solution can be given by

f(x,y,t,p⃗)= ∫_0^tg(x',y', t',p⃗)exp(-∫_t'^tU_α(x”,y”,t”)p^α/p^0τdt”) U_α(x',y',t')p^α/p^0τdt'+exp(-∫_0^tU_α(x',y',t')p^α/τ p^0dt')f_0(x-v_1t,y-v_2t,p⃗),

where v_1=p^1/p^0 and v_2=p^2/p^0 are the particle velocities in the x and y directions respectively, {x'=x-v_1(t-t'), y'=y-v_2(t-t')} and {x”=x-v_1(t-t”), y”=y-v_2(t-t”)} are the particle trajectories, and f_0(x,y,p⃗) is the initial particle velocity distribution function, i.e. f(x,y,0,p⃗)=f_0(x,y,p⃗).
Taking the moments of (<ref>) and integrating them over I_i,j×[t_n,t_n+1) yield the 2D finite volume scheme

W⃗̅⃗^n+1_i,j = W⃗̅⃗^n_i,j - Δ t_n/Δ x(F̂⃗̂^n_i+1/2,j - F̂⃗̂^n_i-1/2,j) - Δ t_n/Δ y(Ĝ⃗̂^n_i,j+1/2 - Ĝ⃗̂^n_i,j-1/2),

where W⃗̅⃗^n_i,j is the cell average approximation of the conservative vector W⃗(x,y,t) over the cell I_i,j at time t_n, i.e.

W⃗̅⃗^n_i,j≈1/Δ xΔ y∫_I_i,jW⃗(x,y,t_n)dxdy,

and

F̂⃗̂^n_i+1/2,j =1/Δ t_n∫_t_n^t_n+1∫_ℝ^3Ψ⃗ p^1f̂(x_i+1/2,y_j,t,p⃗)dΞ dt, Ĝ⃗̂^n_i,j+1/2 =1/Δ t_n∫_t_n^t_n+1∫_ℝ^3Ψ⃗ p^2f̂(x_i,y_j+1/2,t,p⃗)dΞ dt,

where f̂(x_i+1/2,y_j,t,p⃗)≈ f(x_i+1/2,y_j,t,p⃗) and f̂(x_i,y_j+1/2,t,p⃗)≈ f(x_i,y_j+1/2,t,p⃗). Because the derivation of f̂(x_i,y_j+1/2,t,p⃗) is very similar to that of f̂(x_i+1/2,y_j,t,p⃗), we will mainly derive f̂(x_i+1/2,y_j,t,p⃗) with the help of (<ref>) as follows

f̂(x_i+1/2,y_j,t,p⃗)= ∫_t_n^tg_h(x',y', t',p⃗)exp(-∫_t'^tU_α(x”,y”,t”)p^α/p^0τdt”) U_α(x',y',t')p^α/p^0τdt'+exp(-∫_t_n^tU_α(x',y',t')p^α/τ p^0dt')f_h,0(x_i+1/2-v_1t̃,y_j-v_2t̃,p⃗),

where t̃=t-t_n, x'=x_i+1/2-v_1(t-t'), y'=y_j-v_2(t-t') and x”=x_i+1/2-v_1(t-t”), y”=y_j-v_2(t-t”), and f_h,0(x_i+1/2-v_1t̃,y_j-v_2t̃,p⃗) and g_h(x',y',t',p⃗) are the (approximate) initial distribution function and equilibrium velocity distribution function, respectively, which will be presented in the following. Similarly, in order to avoid the expensive cost of getting U_α(x”,y”,t”) or U_α(x',y',t') along the particle trajectory, U_α(x”,y”,t”) and U_α(x',y',t') in (<ref>) may be taken as a constant U_α,i+1/2,j^n, and U_α(x',y',t') in the second term may be replaced with U_α,i+1/2,j,L^n or U_α,i+1/2,j,R^n, which is given in Section <ref>.

§.§.§ Equilibrium distribution g_0 at the point (x_i+1/2,y_j, t_n)

Using the cell average values {W⃗̅⃗^n_i,j} reconstructs a piecewise linear function

W⃗_h(x,y,t_n)=∑_i,jW⃗^n_i,j(x,y)χ_i,j(x,y),

where W⃗^n_i,j(x,y):=W⃗̅⃗^n_i,j + W⃗^n,x_i,j(x-x_i) + W⃗^n,y_i,j(y-y_j), W⃗^n,x_i,j and W⃗^n,y_i,j are the x- and y-slopes in the cell I_i,j, respectively, and χ_i,j(x,y) is the characteristic function of the cell I_i,j. At the point (x_i+1/2,y_j), the left and right limiting values of W⃗_h(x,y,t_n) are given by

W⃗^n_i+1/2,j,L := W⃗_h(x_i+1/2-0,y_j,t_n)=W⃗^n_i,j(x_i+1/2,y_j), W⃗^n_i+1/2,j,R := W⃗_h(x_i+1/2+0,y_j,t_n)=W⃗^n_i+1,j(x_i+1/2,y_j), W⃗^n,x_i+1/2,j,L := dW⃗_h/dx(x_i+1/2-0,y_j,t_n)=dW⃗^n_i,j/dx(x_i+1/2,y_j), W⃗^n,x_i+1/2,j,R := dW⃗_h/dx(x_i+1/2+0,y_j,t_n)=dW⃗^n_i+1,j/dx(x_i+1/2,y_j), W⃗^n,y_i+1/2,j,L := dW⃗_h/dy(x_i+1/2-0,y_j,t_n)=dW⃗^n_i,j/dy(x_i+1/2,y_j), W⃗^n,y_i+1/2,j,R := dW⃗_h/dy(x_i+1/2+0,y_j,t_n)=dW⃗^n_i+1,j/dy(x_i+1/2,y_j).

Similar to the 1D case, with the help of W⃗^n_i+1/2,j,L, W⃗^n_i+1/2,j,R and the Jüttner distribution (<ref>), one can get g_L and g_R at (x_i+1/2,y_j,t_n). Then the particle four-flow N^α and the energy-momentum tensor T^αβ at the point (x_i+1/2,y_j,t_n) can be defined by

(N^0,T^01,T^02,T^00)_i+1/2,j^n,T:=∫_ℝ^3∩p^1>0Ψ⃗p^0g_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^0g_RdΞ,(N^1,T^11,T^21,T^01)_i+1/2,j^n,T:=∫_ℝ^3∩p^1>0Ψ⃗p^1g_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^1g_RdΞ,(N^2,T^12,T^22,T^02)_i+1/2,j^n,T:=∫_ℝ^3∩p^1>0Ψ⃗p^2g_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^2g_RdΞ.

Using those and Theorem <ref>, the macroscopic quantities n ^n_i+1/2,j, T^n_i+1/2,j and U_α,i+1/2,j^n can be calculated, and then the Jüttner distribution function g_0 at (x_i+1/2,y_j, t_n) is obtained. Similarly, in the y-direction, W⃗^n_i,j+1/2,L and W⃗^n_i,j+1/2,R can also be given by (<ref>), so that one has the corresponding left and right equilibrium distributions g̃_L and g̃_R.
The particle four-flow N^α and the energy-momentum tensor T^αβ at (x_i,y_j+1/2, t_n) are defined by

(N^0,T^01,T^02,T^00)_i,j+1/2^n,T:=∫_ℝ^3∩p^2>0Ψ⃗p^0g̃_LdΞ+∫_ℝ^3∩p^2<0Ψ⃗p^0g̃_RdΞ,(N^1,T^11,T^21,T^01)_i,j+1/2^n,T:=∫_ℝ^3∩p^2>0Ψ⃗p^1g̃_LdΞ+∫_ℝ^3∩p^2<0Ψ⃗p^1g̃_RdΞ,(N^2,T^12,T^22,T^02)_i,j+1/2^n,T:=∫_ℝ^3∩p^2>0Ψ⃗p^2g̃_LdΞ+∫_ℝ^3∩p^2<0Ψ⃗p^2g̃_RdΞ,

which give n ^n_i,j+1/2, T^n_i,j+1/2, U_α,i,j+1/2^n and g_0 at (x_i,y_j+1/2, t_n). The following will derive the initial distribution function f_h,0(x,y,p⃗) and the equilibrium distribution g_h(x,y,t,p⃗), separately.

§.§.§ Initial distribution function f_h,0(x,y,p⃗)

Borrowing the idea of the Chapman-Enskog expansion, f(x,y,t,p⃗) is supposed to be of the form

f(x,y,t,p⃗)=g-τ/U_α p^α(p^0g_t+p^1g_x+p^2g_y)+O(τ^2)=:g(1-τ/U_α p^α(p^0A+p^1a+p^2b))+O(τ^2).

The conservation constraints (<ref>) imply the constraints on A, a and b

∫_ℝ^3Ψ⃗(p^0A+p^1a+p^2b)gdΞ=∫_ℝ^3Ψ⃗(p^0g_t+p^1g_x+p^2g_y)dΞ =1/τ∫_ℝ^3Ψ⃗U_αp^α(g-f)dΞ=0.

Using the Taylor series expansion of f at the cell interface (x_i+1/2,y_j) gives

f_h,0={ g_L(1-τ/U_α,Lp^α(p^0A_L+p^1a_L+p^2b_L)+a_Lx̃ + b_Lỹ), x̃<0,g_R(1-τ/U_α,Rp^α(p^0A_R+p^1a_R+p^2b_R)+a_Rx̃ + b_Rỹ), x̃>0, .

where x̃=x-x_i+1/2, ỹ=y-y_j, and (a_ω, b_ω, A_ω), ω=L,R, are of the form

a_ω= a_ω,1+a_ω,2p^1+a_ω,3p^2+a_ω,4p^0, b_ω= b_ω,1+b_ω,2p^1+b_ω,3p^2+b_ω,4p^0, A_ω= A_ω,1+A_ω,2p^1+A_ω,3p^2+A_ω,4p^0.

The slopes a_ω and b_ω come from the spatial derivatives of the Jüttner distribution and have unique correspondences with the slopes of the conservative variables W⃗ by the following linear systems for a_ω and b_ω

<a_ωp^0>=W⃗_i+1/2,j,ω^n,x, <b_ωp^0>=W⃗_i+1/2,j,ω^n,y,ω=L,R.

Those linear systems can also be expressed as follows

M_0^ωa⃗_ω=W⃗_i+1/2,j,ω^n,x, M_0^ωb⃗_ω=W⃗_i+1/2,j,ω^n,y,

where the coefficient matrix is defined by

M_0^ω=∫_ℝ^3p^0g_ωΨ⃗Ψ⃗^TdΞ.

Substituting a_ω and b_ω into the conservation constraints (<ref>) gives the linear systems for A_ω as follows

<a_ωp^1+b_ωp^2+A_ωp^0>=0, ω=L,R,

which can be rewritten as

M_0^ωA⃗_ω=-M_1^ωa⃗_ω-M_2^ωb⃗_ω,

where

M_1^ω=∫_ℝ^3g_ωp^1Ψ⃗Ψ⃗^TdΞ, M_2^ω=∫_ℝ^3g_ωp^2Ψ⃗Ψ⃗^TdΞ,ω=L,R.

All elements of the matrices M_0^ω, M_1^ω and M_2^ω can also be explicitly presented by using the coordinate transformation (<ref>). Omitting the superscripts L and R, the matrices M_0, M_1, and M_2 are

M_0 =∫_ℝ^3p^0gΨ⃗Ψ⃗^TdΞ:=[ M^0_00 M^0_01 M^0_02 M^0_03; M^0_10 M^0_11 M^0_12 M^0_13; M^0_20 M^0_21 M^0_22 M^0_23; M^0_30 M^0_31 M^0_32 M^0_33 ]=[ ∫^π_-π∫^1_-1Φ dξ dφ ∫^π_-π∫^1_-1w^1Ψ dξ dφ ∫^π_-π∫^1_-1w^2Ψ dξ dφ ∫^π_-π∫^1_-1Ψ dξ dφ; ∫^π_-π∫^1_-1w^1Ψ dξ dφ ∫^π_-π∫^1_-1(w^1)^2Υ dξ dφ ∫^π_-π∫^1_-1w^1w^2Υ dξ dφ ∫^π_-π∫^1_-1w^1Υ dξ dφ; ∫^π_-π∫^1_-1w^2Ψ dξ dφ ∫^π_-π∫^1_-1w^2w^1Υ dξ dφ ∫^π_-π∫^1_-1(w^2)^2Υ dξ dφ ∫^π_-π∫^1_-1w^2Υ dξ dφ; ∫^π_-π∫^1_-1Ψ dξ dφ ∫^π_-π∫^1_-1w^1Υ dξ dφ ∫^π_-π∫^1_-1w^2Υ dξ dφ ∫^π_-π∫^1_-1Υ dξ dφ ]= [ nU^0 4 nTU^1U^0 4 nTU^2U^0 nT (4 U^1U^1 + 4 U^2U^2 + 3); 4 nTU^1 U^0 4 nT^2 (6U^1U^1 + 1) U^0 24 nT^2U^1 U^2 U^0 4 nT^2 U^1 (6 U^1U^1 + 6 U^2U^2 + 5); 4 nTU^2 U^0 24 nT^2 U^1 U^2 U^0 4 nT^2(6 U^2U^2 + 1) U^0 4 nT^2 U^2 (6 U^1U^1 + 6 U^2U^2+ 5); M^0_03 M^0_13 M^0_23 12 nT^2 U^0 (2 U^1U^1 + 2U^2U^2 + 1) ],

M_1 =∫_ℝ^3p^1gΨ⃗Ψ⃗^TdΞ:=[ M^1_00 M^1_01 M^1_02 M^1_03; M^1_10 M^1_11 M^1_12 M^1_13; M^1_20 M^1_21 M^1_22 M^1_23; M^1_30 M^1_31 M^1_32 M^1_33 ]= [ ∫^π_-π∫^1_-1w^1Φ dξ dφ ∫^π_-π∫^1_-1(w^1)^2Ψ dξ dφ ∫^π_-π∫^1_-1w^1w^2Ψ dξ dφ ∫^π_-π∫^1_-1w^1Ψ dξ dφ; ∫^π_-π∫^1_-1(w^1)^2Ψ dξ dφ ∫^π_-π∫^1_-1(w^1)^3Υ dξ dφ ∫^π_-π∫^1_-1(w^1)^2w^2Υ dξ dφ ∫^π_-π∫^1_-1(w^1)^2Υ dξ dφ; ∫^π_-π∫^1_-1w^1w^2Ψ dξ dφ ∫^π_-π∫^1_-1(w^1)^2w^2Υ dξ dφ ∫^π_-π∫^1_-1w^1(w^2)^2Υ dξ dφ ∫^π_-π∫^1_-1w^1w^2Υ dξ dφ; ∫^π_-π∫^1_-1w^1Ψ dξ dφ ∫^π_-π∫^1_-1(w^1)^2Υ dξ dφ ∫^π_-π∫^1_-1w^1w^2Υ dξ dφ ∫^π_-π∫^1_-1w^1Υ dξ dφ ]= [ nU^1 nT(4U^1U^1 + 1) 4 nTU^1U^2 4 nTU^1U^0; nT(4U^1U^1 + 1) 12 nT^2 U^1(2U^1U^1 + 1) 4 nT^2 U^2(6U^1U^1 + 1) 4 nT^2 U^0(6U^1U^1 + 1); 4 nTU^1U^2 4 nT^2 U^2(6U^1U^1 + 1) 4 nT^2 U^1(6U^2U^2 + 1) 24 nT^2 U^1U^2U^0; 4 nTU^1U^0 4 nT^2 U^0(6U^1U^1 + 1) 24 nT^2 U^1U^2U^0 4 nT^2 U^1(6U^1U^1 + 6U^2U^2 + 5) ],

and

M_2 =∫_ℝ^3p^2gΨ⃗Ψ⃗^TdΞ:=[ M^2_00 M^2_01 M^2_02 M^2_03; M^2_10 M^2_11 M^2_12 M^2_13; M^2_20 M^2_21 M^2_22 M^2_23; M^2_30 M^2_31 M^2_32 M^2_33 ]= [ ∫^π_-π∫^1_-1w^2Φ dξ dφ ∫^π_-π∫^1_-1w^2w^1Ψ dξ dφ ∫^π_-π∫^1_-1(w^2)^2Ψ dξ dφ ∫^π_-π∫^1_-1w^2Ψ dξ dφ; ∫^π_-π∫^1_-1w^2w^1Ψ dξ dφ ∫^π_-π∫^1_-1w^2(w^1)^2Υ dξ dφ ∫^π_-π∫^1_-1w^1(w^2)^2Υ dξ dφ ∫^π_-π∫^1_-1w^1w^2Υ dξ dφ; ∫^π_-π∫^1_-1(w^2)^2Ψ dξ dφ ∫^π_-π∫^1_-1(w^2)^2w^1Υ dξ dφ ∫^π_-π∫^1_-1(w^2)^3Υ dξ dφ ∫^π_-π∫^1_-1(w^2)^2Υ dξ dφ; ∫^π_-π∫^1_-1w^2Ψ dξ dφ ∫^π_-π∫^1_-1w^1w^2Υ dξ dφ ∫^π_-π∫^1_-1(w^2)^2Υ dξ dφ ∫^π_-π∫^1_-1w^2Υ dξ dφ ]= [ nU^2 4 nTU^1U^2 nT(4U^2U^2 + 1) 4 nTU^2U^0; 4 nTU^1U^2 4 nT^2U^2(6U^1U^1 + 1) 4 nT^2U^1(6U^2U^2 + 1) 24 nT^2U^1U^2U^0; nT(4U^2U^2 + 1) 4 nT^2U^1(6U^2U^2 + 1) 12 nT^2U^2(2U^2U^2 + 1) 4 nT^2U^0(6U^2U^2 + 1); 4 nTU^2U^0 24 nT^2U^1U^2U^0 4 nT^2U^0(6U^2U^2 + 1) 4 nT^2U^2(6U^1U^1 + 6U^2U^2 + 5) ],

where w^1=ξ, w^2=√(1-ξ^2)sinφ, w^3=√(1-ξ^2)cosφ, and

Φ(x,y,ξ,φ)=1/4π n (x,y)/(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^3, Ψ(x,y,ξ,φ)=3/4π( nT)(x,y)/(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^4, Υ(x,y,ξ,φ)= 3/π( nT^2)(x,y)/(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^5.

§.§.§ Equilibrium velocity distribution g_h(x,y,t,p⃗)

Using W⃗_0:=W⃗_i+1/2,j^n derived in Section <ref> and the cell averages W⃗̅⃗_i+1,j and W⃗̅⃗_i,j, reconstruct a linear polynomial

W⃗_0(x)=W⃗_0+W⃗_0^x(x-x_i+1/2) + W⃗_0^y(y-y_j),

where W⃗_0^x=1/Δ x(W⃗̅⃗_i+1,j-W⃗̅⃗_i,j) and W⃗_0^y=1/2Δ y(W⃗^n_i+1/2,j+1-W⃗^n_i+1/2,j-1). Again, the Taylor series expansion of g at the cell interface (x_i+1/2,y_j) gives

g_h(x,y,t,p⃗)=g_0(1+a_0(x-x_i+1/2)+b_0(y-y_j)+A_0(t-t^n)),

where (a_0,b_0,A_0) are the values of (a,b,A) at the point (x_i+1/2,y_j, t_n). Similarly, the linear systems for a_0, b_0 and A_0 can be derived as follows

<a_0p^0>=W⃗_0^x, <b_0p^0>=W⃗_0^y, <A_0p^0+a_0p^1+b_0p^2>=0,

or

M_0^0a⃗_0=W⃗_0^x, M_0^0b⃗_0=W⃗_0^y,M_0^0A⃗_0= -M_1^0a⃗_0 -M_2^0b⃗_0,

where the elements of M_0^0, M_1^0 and M_2^0 are given by (<ref>), (<ref>), and (<ref>) with n ^n_i+1/2,j, T^n_i+1/2,j and U^n,α_i+1/2,j in place of n, T and U^α.

Up to now, the initial gas distribution function f_h,0 and the equilibrium state g_h have been given. Substituting (<ref>) and (<ref>) into (<ref>) gives

f̂(x_i+1/2,y_j,t,p⃗)=g_0(1-exp(-U_α,i+1/2,j^np^α/p^0τt̃))+g_0a_0v_1((t̃+p^0τ/U_α,i+1/2,j^np^α)exp(-U_α,i+1/2,j^np^α/p^0τt̃)-p^0τ/U_α,i+1/2,j^np^α)+g_0b_0v_2((t̃+p^0τ/U_α,i+1/2,j^np^α)exp(-U_α,i+1/2,j^np^α/p^0τt̃)-p^0τ/U_α,i+1/2,j^np^α)+g_0A_0(t̃-p^0τ/U_α,i+1/2,j^np^α(1-exp(-U_α,i+1/2,j^np^α/p^0τt̃)))+H[v_1]g_L(1-τ/U_α,i+1/2,j,L^np^α(p^0A_L+p^1a_L+p^2b_L)-a_Lv_1t̃-b_Lv_2t̃)exp(-U_α,i+1/2,j,L^np^α/p^0τt̃)+(1-H[v_1])g_R(1-τ/U_α,i+1/2,j,R^np^α(p^0A_R+p^1a_R+p^2b_R)-a_Rv_1t̃-b_Rv_2t̃)exp(-U_α,i+1/2,j,R^np^α/p^0τt̃),

where t̃=t-t_n. Combining this f̂(x_i+1/2,y_j,t,p⃗) with (<ref>) yields the numerical flux F̂⃗̂^n_i+1/2,j. The numerical flux Ĝ⃗̂^n_i,j+1/2 can be obtained by the same procedure.

§.§ 2D Navier-Stokes equations

Because the previous simple expansion (<ref>) or (<ref>) cannot give the Navier-Stokes equations (<ref>)-(<ref>), one has to use the more complicated Chapman-Enskog expansion (<ref>)-(<ref>) to design the genuine BGK schemes for the Navier-Stokes equations.
On the other hand, for the Navier-Stokes equations, calculating the macroscopic quantities n, U^α, and p requires the value of the fluxes F⃗^k besides W⃗. More specifically, one has to first calculate the energy-momentum tensor T^αβ and the particle four-flow N^α at the kinetic level, and then use Theorem <ref> to calculate n, U^α, and p. This highlights a substantial difference between the genuine BGK schemes for the Euler and Navier-Stokes equations. In order to obtain T^αβ and N^α at t=t_n+1 from the kinetic level, multiplying (<ref>) by p^k/p^0 gives
p^k∂ f/∂ t + p^kp^1/p^0∂ f/∂ x + p^kp^2/p^0∂ f/∂ y = p^kU_αp^α(g-f)/p^0τ, k=1,2.
Taking the moments of (<ref>) and (<ref>) and integrating them over the space-time domain I_i,j×[t_n,t_n+1), respectively, yields
W⃗̅⃗^n+1_α,i,j = W⃗̅⃗^n_α,i,j - Δ t_n/Δ x(F̂⃗̂^n_α,i+1/2,j - F̂⃗̂^n_α,i-1/2,j) - Δ t_n/Δ y(Ĝ⃗̂^n_α,i,j+1/2 - Ĝ⃗̂^n_α,i,j-1/2) +S⃗^n_α,i,j , α=0,1,2,
where
W⃗̅⃗^n_α,i,j = (N^α,T^1α,T^2α,T^0α)^n,T_i,j,
F̂⃗̂^n_α,i+1/2,j = 1/Δ t_n∫_ℝ^3∫_t_n^t_n+1Ψ⃗p^1p^α/p^0f̂(x_i+1/2,y_j,t) dt dΞ,
Ĝ⃗̂^n_α,i,j+1/2 = 1/Δ t_n∫_ℝ^3∫_t_n^t_n+1Ψ⃗p^2p^α/p^0f̂(x_i,y_j+1/2,t) dt dΞ,
S⃗^n_0,i,j =0, S⃗^n_k,i,j = ∫_ℝ^3∫_t_n^t_n+1Ψ⃗p^kU_αp^α/p^0τ(g(x_i,y_j,t)-f̂(x_i,y_j,t)) dt dΞ, k=1,2.
Our task is to obtain the approximate distributions f̂(x_i+1/2,y_j,t) and f̂(x_i,y_j+1/2,t) for the numerical fluxes, and f̂(x_i,y_j,t) and g(x_i,y_j,t) for the source terms. The following focuses on the derivation of f̂(x_i+1/2,y_j,t) with the help of the analytical solution (<ref>) of the 2D Anderson-Witting model.
§.§.§ Initial distribution function f_h,0(x,y,t,p⃗)
This section derives the initial distribution function f_h,0 for f̂(x_i+1/2,y_j,t). The Chapman-Enskog expansion (<ref>)-(<ref>) is rewritten as follows
f(x,y,t,p⃗)=g(1-τ/U_αp^α(A^cep^0+a^cep^1+b^cep^2+c^cep^3))+O(τ^2),
where A^ce=A_β^cep^β+A^ce_4, a^ce =a^ce_βp^β+a^ce_4, b^ce =b^ce_βp^β+b^ce_4, c^ce =c^ce_βp^β+c^ce_4, and
A^ce_β = -1/T∇^<0U^β> + U_β/T^2(∇^0T-T/ nh∇^0p), A^ce_4 = -h/T^2(∇^0T-T/ nh∇^0p),
a^ce_β = 1/T∇^<1U^β> - U_β/T^2(∇^1T-T/ nh∇^1p), a^ce_4 = h/T^2(∇^1T-T/ nh∇^1p),
b^ce_β = 1/T∇^<2U^β> - U_β/T^2(∇^2T-T/ nh∇^2p), b^ce_4 = h/T^2(∇^2T-T/ nh∇^2p),
c^ce_β = 1/T∇^<3U^β> - U_β/T^2(∇^3T-T/ nh∇^3p), c^ce_4 = h/T^2(∇^3T-T/ nh∇^3p).
It is observed from these expressions for A^ce, a^ce, b^ce, and c^ce that one has to compute the time derivatives, which are not required in the Euler case. These time derivatives are approximately computed by using the following second-order extrapolation method: for any smooth function h(t), the first-order derivative at t=t_n is numerically obtained by
h_t(t_n) = h(t_n-2)(t_n-1-t_n)^2-h(t_n-1)(t_n-2-t_n)^2-h(t_n)((t_n-1-t_n)^2-(t_n-2-t_n)^2)/(t_n-2-t_n)(t_n-1-t_n)^2-(t_n-1-t_n)(t_n-2-t_n)^2.
Using the Chapman-Enskog expansion (<ref>) and the Taylor series expansion in terms of x gives the initial velocity distribution
f_h,0(x,y,t^n,p⃗)={ g_L(1-τ/U_α,Lp^α(p^0A_L^ce+p^1a_L^ce+p^2b_L^ce+p^3c_L^ce)+a_Lx̃+b_Lỹ), x̃<0,
g_R(1-τ/U_α,Rp^α(p^0A_R^ce+p^1a_R^ce+p^2b_R^ce+p^3c_R^ce)+a_Rx̃+b_Rỹ), x̃>0, .
where x̃=x-x_i+1/2, ỹ=y-y_j, g_L and g_R denote the left and right Jüttner distributions at x_i+1/2 with y=y_j, t=t_n, the Taylor expansion coefficients (a_L,b_L) and (a_R,b_R) are calculated by using the same procedure as in the Euler case, while the Chapman-Enskog expansion coefficients a_L^ce, a_R^ce, b_L^ce, b_R^ce, c_L^ce, c_R^ce and A_L^ce, A_R^ce are calculated by (<ref>).
§.§.§ Equilibrium distribution functions g_h(x,y,t,p⃗)
In order to obtain the equilibrium distribution functions g_h(x,y,t,p⃗) for f̂(x_i+1/2,y_j,t), the particle four-flow N^α and the energy-momentum tensor T^αβ at (x_i+1/2,y_j) and t=t^n are defined by
(N^α,T^α1,T^α2,T^α0)_i+1/2,j^n,T:=∫_ℝ^3∩p^1>0Ψ⃗p^αf_LdΞ+∫_ℝ^3∩p^1<0Ψ⃗p^αf_RdΞ, α=0,1,2,
where f_L and f_R are the left and right limits of f_h,0 with y=y_j at x=x_i+1/2. Using those definitions and Theorem <ref>, the macroscopic quantities n ^n_i+1/2,j, T^n_i+1/2,j and U_α,i+1/2,j^n can be obtained, and then one gets the Jüttner distribution function g_0 at (x_i+1/2,y_j,t_n). Similarly to Section <ref>, we reconstruct a cell-vertex based linear polynomial and perform the first-order Taylor series expansion of g at the cell interface (x_i+1/2,y_j), see (<ref>). Differently from the Euler case, however, A_0 is obtained by
M^0_0A⃗_0=W⃗^t_0,
where W⃗^t_0 is calculated by using the second-order extrapolation (<ref>). After that, substituting f_h,0 and g_h into (<ref>) yields f̂(x_i+1/2,y_j,t). The distribution f̂(x_i,y_j+1/2,t) can be obtained in the same way.
§.§.§ Derivation of source terms S⃗_1,i,j and S⃗_2,i,j
It remains to calculate f̂(x_i,y_j,t) and g(x_i,y_j,t) for the source terms S⃗_1,i,j and S⃗_2,i,j. The procedure is the same as above, except that the first-order Taylor series expansion is taken at the cell center (x_i,y_j). To be more specific, g and f_0 in the analytical solution (<ref>) of the 2D Anderson-Witting model are replaced with
g_h(x,y,t,p⃗)=g_0(1+a_0(x-x_i)+b_0(y-y_j)+A_0(t-t_n)),
and
f_h,0(x,y,p⃗)= g_0(1-τ/U_α,0p^α(A_0^cep^0+a_0^cep^1+b_0^cep^2+c_0^cep^3)+a_0x̃+b_0ỹ),
where (a_0,b_0,A_0) are the Taylor expansion coefficients at (x_i,y_j,t_n), calculated by the same procedure as that for f̂(x_i+1/2,y_j,t), x̃=x-x_i, ỹ=y-y_j, g_0 denotes the Jüttner distribution at (x_i,y_j, t_n), and a_0^ce, b_0^ce, c_0^ce and A_0^ce are the Chapman-Enskog expansion coefficients at (x_i,y_j,t_n). It is worth noting that since f_h,0 is continuous at (x_i, y_j), there is no need to consider whether the left or right states should be taken here. The subroutine for the coefficients in (<ref>) can be used to get those in f_h,0(x,y,p⃗). In order to define the equilibrium state g(x_i,y_j,t) in the source term, we first need to compute the corresponding macroscopic quantities, such as N^α and T^αβ, which can be obtained by taking the moments of f̂(x_i,y_j,t). Using Theorem <ref>, the macroscopic quantities n, T and u⃗ can then be obtained.
Thus the Jüttner distribution function at the cell center (x_i,y_j) is derived according to the definition. Up to this point, all distributions have been derived and the second-order accurate genuine BGK scheme (<ref>) has been developed for the 2D ultra-relativistic Navier-Stokes equations.
§ NUMERICAL EXPERIMENTS
This section solves several 1D and 2D problems for ultra-relativistic fluid flows to demonstrate the accuracy and effectiveness of the present genuine BGK schemes, which are compared to the second-order accurate BGK-type and KFVS schemes <cit.>. The collision time τ is taken as
τ=τ_m+C_2Δ t^α_n|P_L-P_R|/P_L+P_R,
with τ_m = 5μ/4p for viscous flows and τ_m=C_1Δ t^α_n for inviscid flows, where C_1, C_2 and α are three constants, and P_L, P_R are the left and right limits of the pressure at the cell interface, respectively. Unless specifically stated, this section takes C_1=0.001, C_2=1.5 and α=1, the time step-size Δ t_n is determined by the CFL condition (<ref>) or (<ref>) with a CFL number of 0.4, and the characteristic variables are reconstructed with the van Leer limiter.
§.§ 1D Euler case
[Accuracy test] To check the accuracy of our BGK method, we first solve a smooth problem which describes a sine wave propagating periodically in the domain Ω=[0,1]. The initial conditions are taken as
n (x,0)= 1+0.5sin(2π x), u_1(x,0)=0.2, p(x,0)=1,
and the corresponding exact solutions are given by
n (x,t)= 1+0.5sin(2π (x-0.2t)), u_1(x,t)=0.2, p(x,t)=1.
The computational domain Ω is divided into N uniform cells and periodic boundary conditions are specified at x=0,1. Table <ref> gives the l^1- and l^2-errors at t=0.2 and the corresponding convergence rates for the BGK scheme with α = 2 and C_1=C_2=1. The results show that a second-order rate of convergence is obtained for our BGK scheme, although the van Leer limiter causes a slight loss of accuracy.
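A convergence rate between two successive meshes is obtained from the ratio of the corresponding errors; a small helper of the following kind (illustrative Python with made-up error values, not the entries of the actual table) reproduces the rates quoted in such tables:

import math

def convergence_rates(cells, errors):
    """Observed orders log(E_k / E_{k+1}) / log(N_{k+1} / N_k) between meshes."""
    return [math.log(errors[k] / errors[k + 1]) / math.log(cells[k + 1] / cells[k])
            for k in range(len(cells) - 1)]

# hypothetical l^1-errors on N = 40, 80, 160, 320 cells
cells = [40, 80, 160, 320]
l1_errors = [2.1e-3, 5.4e-4, 1.4e-4, 3.5e-5]
print(convergence_rates(cells, l1_errors))   # values close to 2 indicate second order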
[Riemann problem I] This is a Riemann problem with the following initial data
(n,u_1,p)(x,0)=(1.0,1.0,3.0), x<0.5, (1.0,-0.5,2.0), x>0.5.
The initial discontinuity evolves into a left-moving shock wave, a right-moving contact discontinuity, and a right-moving shock wave. Fig. <ref> displays the numerical results at t=0.5 and their close-ups obtained by using our BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+") with 400 uniform cells in the domain [0,1], where the solid lines denote the exact solutions. It can be seen that our BGK scheme resolves the contact discontinuity better than the second-order accurate BGK-type and KFVS schemes, and that all of them capture this wave configuration well.
[Riemann problem II] The initial conditions of the second Riemann problem are
(n,u_1,p)(x,0)=(5.0,0.0,10.0), x<0.5, (1.0,0.0,0.5), x>0.5.
Fig. <ref> shows the numerical solutions at t=0.5 obtained by using our BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+") with 400 uniform cells within the domain [0,1], where the solid line denotes the exact solution. It is seen that the solutions consist of a left-moving rarefaction wave, a contact discontinuity, and a right-moving shock wave; the computed solutions agree well with the exact solutions, and the rarefaction and shock waves are well resolved. Moreover, our BGK scheme exhibits better resolution of the contact discontinuity than the BGK-type and KFVS schemes.
[Riemann problem III] The initial data are
(n,u_1,p)(x,0)=(1.0,-0.5,2.0), x<0.5, (1.0,0.5,2.0), x>0.5.
The initial discontinuity evolves into a left-moving rarefaction wave, a stationary contact discontinuity, and a right-moving rarefaction wave. Fig. <ref> plots the numerical results at t=0.5 obtained by using our BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+") with 400 uniform cells in the domain [0,1], where the solid line denotes the exact solution. It is seen that there is an undershoot near the contact discontinuity in the number density, which also happens in the non-relativistic case.
[Perturbed shock tube problem] The initial data are
(n,u_1,p)(x,0)=(1.0,0.0,1.0), x<0.5, ( n _r,0.0,0.1), x>0.5,
where n _r=0.125 - 0.0875sin(50(x-0.5)). This perturbed shock tube problem has been widely used to test the ability of shock-capturing schemes to resolve small-scale flow features in non-relativistic flows. Fig. <ref> plots the numerical results at t=0.5 in the computational domain Ω=[0,1] obtained by using our BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+") with 400 uniform cells. These are compared with the reference solution (the solid line) obtained by using the KFVS scheme on a finer mesh of 10000 uniform cells. It is seen that the shock wave is moving into a sinusoidal density field; some complex but smooth structures are generated on the left-hand side of the shock wave when it interacts with the sine wave, and our BGK scheme is clearly better than the BGK-type and KFVS schemes at resolving those complex structures. Since the continuity equation in the Euler equations decouples from the equations for the pressure and velocity, the effect of the perturbation is not seen in the pressure <cit.>.
[Collision of blast waves] This problem describes the collision of two blast waves and is simulated to evaluate the performance of the genuine BGK, BGK-type and KFVS schemes for flows with strong discontinuities. The initial data are taken as follows
(n,u_1,p)(x,0)=(1.0,0.0,100.0), 0<x<0.1, (1.0,0.0,0.06), 0.1<x<0.9, (1.0,0.0,10.0), 0.9<x<1.0.
Reflecting boundary conditions are specified at the two ends of the unit interval [0,1]. Fig. <ref> plots the numerical results at t=0.75 obtained by using our BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+") with 700 uniform cells within the domain [0,1]. It is found that the solutions at t=0.75 are bounded by two shock waves, which all three schemes resolve well. However, the genuine BGK scheme exhibits better resolution of the contact discontinuity than the BGK-type and KFVS schemes.
§.§ 2D Euler case
[Accuracy test] To check the accuracy of our BGK scheme, we solve a smooth problem which describes a sine wave propagating periodically in the domain Ω=[0,1]×[0,1] at an angle α=45^∘ with the x-axis. The initial conditions are taken as follows
n (x,y,0)= 1+0.5sin(2π (x+y)), u_1(x,y,0)=u_2(x,y,0)=0.2, p(x,y,0)=1,
so that the exact solution is given by
n (x,y,t)= 1+0.5sin(2π (x-0.2t + y-0.2t)), u_1(x,y,t)=u_2(x,y,t)=0.2, p(x,y,t)=1.
The computational domain Ω is divided into N× N uniform cells and periodic boundary conditions are specified. Table <ref> gives the l^1- and l^2-errors at t=0.1 and the corresponding convergence rates for the BGK scheme with α = 2 and C_1=C_2=1. The results show that the 2D BGK scheme is second-order accurate, although the van Leer limiter slightly affects the accuracy. To verify the capability of our genuine BGK scheme in capturing complex 2D relativistic wave configurations, we next solve three inviscid problems: the explosion in a box, cylindrical explosion, and ultra-relativistic jet problems.
[Explosion in a box] This example considers a 2D Riemann problem inside a square domain [0,2]×[0,2] with reflecting walls. A square region with side length 0.5 is embedded in the center of the outer box of side length 2. The number density is 4 and the pressure is 10 inside the small box, while both the density and the pressure are 1 outside of it. The fluid velocities are zero everywhere. Figs. <ref> and <ref> give the contours of the density, pressure and velocities at times t=3 and 12 obtained by our BGK scheme on a uniform mesh of 400×400 cells, respectively. The results show that the genuine BGK scheme captures the complex wave interactions well. Fig. <ref> gives a comparison of the numerical densities along the line y=1 calculated by using the genuine BGK scheme (“∘"), the BGK-type scheme (“×"), and the KFVS scheme (“+"), respectively. Clearly, the genuine BGK scheme resolves the complex wave structure better than the BGK-type and KFVS schemes.
[Cylindrical explosion problem] Initially, there is a high-density, high-pressure circle with a radius of 0.2 embedded in a low-density, low-pressure medium within a square domain [0,1]×[0,1]. Inside the circle, the number density is 2 and the pressure is 10, while outside the circle the number density and pressure are 1 and 0.3, respectively. The velocities are zero everywhere. Fig. <ref> displays the contour plots at t = 0.2 obtained by using the BGK scheme on a mesh of 200×200 uniform cells. The results show that a circular shock wave and a circular contact discontinuity travel away from the center, while a circular rarefaction wave propagates toward the center of the circle. Fig. <ref> gives a comparison of the number density and pressure along the line y=0.5 obtained by the BGK, BGK-type, and KFVS schemes, respectively. The symbols “∘", “×" and “+" denote the solutions obtained by using the BGK, BGK-type and KFVS schemes. It can be observed that all of them give similar results; however, the BGK scheme resolves the discontinuities better than the KFVS scheme.
[Ultra-relativistic jet] The dynamics of relativistic jets, which is relevant in astrophysics, has been widely studied by numerical methods in the literature <cit.>. This test simulates a relativistic jet with the computational region [0,12]×[-3.5,3.5] and α = C_1=C_2=1. The initial states for the relativistic jet beam are
( n _b,u_1,b,u_2,b,p_b)=(0.01,0.99,0.0,10.0), ( n _m,u_1,m,u_2,m,p_m)=(1.0,0.0,0.0,10.0),
where the subscripts b and m correspond to the beam and medium, respectively. The initial relativistic jet is injected through a unit-wide nozzle located at the middle of the left boundary, while a reflecting boundary is used outside of the nozzle. Outflow boundary conditions with zero gradients of the variables are imposed on the other parts of the domain boundary. Fig. <ref> shows the numerical results at t=5,6,7,8 obtained by our BGK scheme on a mesh of 600×350 uniform cells. The average speed of the jet head is 0.91, which is close to the theoretical estimate of 0.87 in <cit.>.
§.§ Navier-Stokes case
This section designs two examples of viscous flow to test the genuine BGK scheme (<ref>) for the ultra-relativistic Navier-Stokes equations. Because the extrapolation (<ref>) requires the numerical solutions at t=t_n-1 and t_n-2, the “initial” data at the first several time levels have to be specified for the BGK scheme in advance.
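The second-order extrapolation (<ref>) used to start up these time derivatives is also easy to test in isolation, since it is exact on quadratics. A minimal sketch (Python; the function name and test values are ours):

def h_t(tn2, tn1, tn, h2, h1, h0):
    """Second-order one-sided derivative at t = tn from h2 = h(t_{n-2}),
    h1 = h(t_{n-1}), h0 = h(t_n), following the extrapolation formula above."""
    d1, d2 = tn1 - tn, tn2 - tn
    num = h2 * d1**2 - h1 * d2**2 - h0 * (d1**2 - d2**2)
    den = d2 * d1**2 - d1 * d2**2
    return num / den

# exact for quadratics: h(t) = 3 t^2 - t has h'(1) = 5
ts = (0.6, 0.8, 1.0)
hs = tuple(3.0 * t**2 - t for t in ts)
print(h_t(*ts, *hs))   # 5.0 up to round-off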
In the following examples, the macroscopic variables at t=t_0+0.5Δ t_0 and t_0+Δ t_0 are first obtained by using the initial data, the time partial derivatives at t=t_0, and the BGK scheme proposed in Section <ref>, where the first-order partial derivatives in time are derived by using the exact solutions. Then, the time partial derivatives at t=t_0+Δ t_0 for the macroscopic variables are calculated by using the extrapolation (<ref>), and the solutions are further evolved in time by the BGK scheme with the extrapolation (<ref>).
[Longitudinally boost-invariant system] For ease of numerical implementation, this test focuses on longitudinally boost-invariant systems. They are conveniently described in curvilinear coordinates x_m=(t̃, y, z, η), where t̃ = √(t^2-x^2) is the longitudinal proper time, η=1/2ln(t+x/t-x) is the space-time rapidity, and (y, z) are the usual Cartesian coordinates in the plane transverse to the beam direction x. The systems are realized by assuming a specific “scaling" velocity profile u_1 = x/t along the beam direction, and the initial conditions are independent of the longitudinal reference frame (boost invariance), that is to say, they do not depend on η. The reader is referred to <cit.> for more details. Our computations consider the boost-invariant longitudinal expansion without transverse flow, so that the relativistic Navier-Stokes equations read
∂ p/∂t̃ + 4/3t̃(p-μ/3t̃)=0, ∂n /∂t̃ = - n ∂_α U^α.
Since u_1=x/t, U^0=t/t̃ and U^1=x/t̃, it holds that ∂_α U^α = 1/t̃. Thus the equation for n becomes ∂n /∂t̃ = - n /t̃. The analytical solutions are given by
p = C_1 t̃^-4/3 + 4/3μt̃^-1, n = C_2t̃^-1,
where C_1=p_0(t_0^2-x_0^2)^2/3-4μ/3(t_0^2-x_0^2)^1/6 and C_2= n _0√(t_0^2-x_0^2). We take x_0=0, t_0=1, p_0=1, n _0=1, μ=0.0005, and Ω=[-t_0/2,t_0/2]. Moreover, the time partial derivatives of n , u_1, p at t=t_0 are given by the exact solution. Fig. <ref> shows the number density, velocity and pressure at t=1.2 obtained by our 1D BGK scheme with 20 cells (“") and 40 cells (“∘"), respectively. The results show that the numerical solutions predicted by our BGK scheme fit the exact solutions very well. Table <ref> lists the l^1- and l^2-errors at t=1.2 and the corresponding convergence rates for our BGK scheme. These data show that a second-order rate of convergence is obtained by our BGK scheme.
[Heat conduction] This test considers the problem of heat conduction between two parallel plates, which are assumed to be infinite and separated by a distance H. Moreover, both plates are always stationary. The temperatures of the lower and upper plates are given by T_0 and T_1, respectively. The viscosity μ is a constant. Based on the above assumptions, the Navier-Stokes equations can be simplified as
∂/∂ y(1/T^2∂ T/∂ y) = 0, T(0)=T_0, T(H)=T_1,
whose analytic solution is
T(y)=HT_0T_1/HT_1-(T_1-T_0)y.
Our computation takes H=1, p=0.8, u_1=0.2, u_2=0, μ=5×10^-3, T_0=0.1, T_1=1.0002T_0, and 0.5(T_0+T_1) as the initial value for the temperature T in the entire domain. Moreover, the initial time partial derivatives are given by n _t(x,0)=0, v_1t(x,0)=0, v_2t(x,0)=0 and p_t(x,0)=0. Because u_1≠ 0, the 2D BGK scheme has to be used for the numerical simulation. The left figure in Fig. <ref> plots the numerical temperature (“∘") obtained by the 2D BGK scheme in comparison with the steady-state analytic solution (solid line) given by (<ref>). It is seen that the numerical solution agrees well with the analytic one. The right figure in Fig.
<ref> shows the convergence of the temperature to the steady state, measured in the l^1-error between the numerical and analytic solutions.
§ CONCLUSIONS
The paper developed second-order accurate genuine BGK schemes in the framework of the finite volume method for 1D and 2D ultra-relativistic flows. Different from the existing KFVS or BGK-type schemes for the ultra-relativistic Euler equations, the present genuine BGK schemes were derived from the analytical solution of the Anderson-Witting model, which was given here for the first time and includes the “genuine” particle collisions in the gas transport process. The genuine BGK schemes were also developed for ultra-relativistic viscous flows, and two ultra-relativistic viscous examples were designed. Several 1D and 2D numerical experiments were conducted to demonstrate that the proposed BGK schemes are accurate and stable in simulating ultra-relativistic inviscid and viscous flows, and that they have higher resolution at contact discontinuities than the KFVS or BGK-type schemes. The present BGK schemes can easily be extended to 3D Cartesian grids for ultra-relativistic flows, and it would be interesting to develop genuine BGK schemes for special and general relativistic flows.
§ ACKNOWLEDGEMENTS
This work was partially supported by the Science Challenge Project, No. JCKY2016212A502, the Special Project on High-performance Computing under the National Key R&D Program (No. 2016YFB0200603), and the National Natural Science Foundation of China (Nos. 91330205, 91630310, 11421101).
http://arxiv.org/abs/1704.08501v1
{ "authors": [ "Yaping Chen", "Yangyu Kuang", "Huazhong Tang" ], "categories": [ "math.NA", "76M12, 76M25, 76N15" ], "primary_category": "math.NA", "published": "20170427104721", "title": "Second-order accurate genuine BGK schemes for the ultra-relativistic flow simulations" }
Microservices: a Language-based Approach

Claudio Guidi, italianaSoftware srl, Imola, Italy, [email protected]
Ivan Lanese, Focus Team, University of Bologna/INRIA, Italy, [email protected]
Manuel Mazzara, Innopolis University, Russian Federation, [email protected]
Fabrizio Montesi, University of Southern Denmark, [email protected]
================================================================

Microservices is an emerging development paradigm where software is obtained by composing autonomous entities, called (micro)services. However, microservice systems are currently developed using general-purpose programming languages that do not provide dedicated abstractions for service composition. Instead, current practice is focused on the deployment aspects of microservices, in particular by using containerization. In this chapter, we make the case for a language-based approach to the engineering of microservice architectures, which we believe is complementary to current practice. We discuss the approach in general, and then we instantiate it in terms of the Jolie programming language.

§ INTRODUCTION

Microservices <cit.> is an architectural style stemming from Service-Oriented Architectures (SOAs) <cit.>. Its main idea is that applications are composed of small independent building blocks – the (micro)services – communicating via message passing. Recently, microservices have seen a dramatic growth in popularity, both in terms of hype and of concrete applications in real-life software <cit.>. Several companies are involved in a major refactoring of their backend systems <cit.> in order to improve scalability <cit.>. Current approaches for the development of server-side applications use mainstream programming languages. These languages, frequently based on the object-oriented paradigm, provide abstractions to manage the complexity of programs and their organization into modules. However, they are designed for the creation of single executable artifacts, called monoliths. The modules of a monolith cannot execute independently, since they interact by sharing resources (memory, databases, files, …). Microservices support a different view, enabling the organization of systems as collections of small independent components. Independent refers to the capability of executing each microservice on its own machine (if needed). This can be achieved because services have clearly defined boundaries and interact purely by means of message passing. Microservices inherit some features from SOAs, but they take the same ideas to a much finer granularity, from programming in the large to programming in the small. Indeed, differently from SOAs, microservices highlight the importance for services to be small, hence easily reusable, easily understood, and even easily rebuilt from scratch if needed. This recalls the single responsibility principle of object-oriented design <cit.>. Since it is convenient to abstract from the heterogeneity of possible machines (e.g., available local libraries and other details of the OS), it is useful to package a service and all its local dependencies inside a container. Container technologies, like Docker <cit.>, enable this abstraction by isolating the execution of a service from that of other applications on the same machine. Indeed, in the literature about microservices, the emphasis is on deployment: since microservices live inside containers they can be easily deployed at different locations.
A major reason for this focus is that microservices are thought since their inception as a style to program in the cloud, where deployment and relocation play key roles. Even if microservices have now evolved well beyond cloud computing, the emphasis on deployment and containerization remains <cit.>. In this chapter, while supporting the current trend of microservices, we advocate for moving the emphasis from deployment to development, and in particular to the programming language used for development. We think that the chosen language should support the main mechanism used to build microservice architectures, namely service composition via message passing communications. Furthermore, in order to master the related complexity, we support the use of well-specified interfaces to govern communication. Mainstream languages currently used for the development of microservices do not provide enough support for such communication modalities. In particular, service coordination is currently programmed in an unstructured and ad-hoc way, which hides the communication structure behind less relevant low-level details.While the idea of current methodologies for developing microservices can be summarized as “it does not matter how you develop your microservices, provided that you deploy them in containers”, the key idea behind our methodology is “it does not matter how you deploy your microservices, provided that you build them using a microservice programming language”. We describe our methodology in general, and support the abstract discussion by showing how our ideas are implemented in the Jolie language <cit.>.§ LANGUAGE-BASED APPROACHThe fine granularity of microservices moves the complexity of applications from the implementation of services to their coordination. Because of this, concepts such as communication, interfaces, and dependencies are central to the development of microservice applications. We claim that such concepts should be available as first-class entities in a language that targets microservices, in order to support the translation of the design of a microservice architecture (MSA) into code without changing domain model. This reduces the risk of introducing errors or unexpected behaviours (e.g., by wrong usage of book-keeping variables).What are then the key ingredients that should be included in a microservice language? Since a main feature of microservices is that they have a small size, realistic applications are composed by a high number of microservices. Since microservices are independent, the interactions among them all happen by exchanging messages. Hence, programming an MSA requires to define large and complex message exchange structures. The key to a “good” microservice language is thus providing ways to modularly define and compose such structures, in order to tame complexity. We discuss such ways in the rest of this section. Interfaces In order to support modular programming, it is necessary that services can be deployed as “black boxes” whose implementation details are hidden. However, services should also provide the means to be composed in larger systems. A standard way of obtaining this is to describe via interfaces the functionalities that services provide to and require from the environment. Here, we consider interfaces to be sets of operations that can be remotely invoked. Operations may be either fully asynchronous or follow the typical request-response pattern. 
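As a minimal illustration of this view (Python used purely for exposition; the operation and type names are invented), an interface can be modelled literally as a set of named, typed operations, with a missing response type marking a fully asynchronous one-way operation:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Operation:
    name: str
    request: str                # name of the request data type
    response: Optional[str]     # None marks a fully asynchronous (one-way) operation

# an interface is just a set of operations
shop_interface = {
    Operation("getList", "void", "productIdList"),
    Operation("buy", "order", "receipt"),
    Operation("notifyShipment", "event", None),   # one-way
}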
An operation is identified by a name and specifies the data types of the parameters used to invoke it (and possibly also of the response value).Once we accept that interfaces are first-class citizens in microservices, it makes sense to have operators to manipulate them. Since interfaces are sets of operations, it is natural to consider the usual set-theoretical operators, such as union and intersection. For example, a gateway service may offer an interface that is the union of all the interfaces of the services that it routes messages to.Ports Microservices may run in heterogeneous environments that use different communication technologies (e.g., TCP/IP sockets, Bluetooth, etc.) and data protocols (e.g., HTTPS, binary protocols, etc.). Moreover, a microservice may need to interact with many other services, each one possibly offering and/or requiring a different interface. A communication port concretely describes how some of the functionalities of a service are made available to the network, by specifying the three key elements above: interface, communication technology, and data protocol. Each service may be equipped with many ports, of two possible kinds. Input ports describe the functionalities that the service provides to the rest of the MSA. Conversely, output ports describe the functionalities that the service requires from the rest of the MSA. Ports should be specified separately from the implementation of a service, so that one can see what a service provides and what it needs without having to check its actual implementation. This recalls the use of type signatures for functions in procedural programming, with the difference that here the environment is heterogeneousand we thus need further information (communication medium, data protocol).Consider an online shopping service connected to both the Internet and a local intranet. This service may have 2 input ports, +Customers+ and +Admin+, and 1 output port, +Auth+. Input port +Customers+ exposes the interface that customers can use on the web, using HTTPS over TCP/IP sockets. Input port +Admin+ exposes the administration controls of the service to the local intranet, using a proprietary binary protocol over TCP/IP sockets. Finally, output port +Auth+ is used to access an authentication service in the local network. Workflows Service interactions may require to perform multiple communications. For example, our previous +Customers+ service may offer a "buy and ship" functionality, implemented as a structured protocol composed by multiple phases. First, the customer may select one or more products to buy. In the second phase, the customer sends her destination address and selects the shipment modality. Finally, the customer pays, which may require the execution of an entire sub-protocol, involving also a bank and the shipper. Since structured protocols appear repeatedly in microservices, supporting their programming is a key issue. Unfortunately, the programming of such workflows is not natively supported by mainstream languages, where all possible operations are always enabled. For example, consider a service implemented by an object that offers two operations +login+ and +pay+. Both operations are enabled at all times, but invoking +pay+ before +login+ raises an error. This causal dependency is programmed by using a book-keeping variable, which is set when +login+ is called and is read by method +pay+. 
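The book-keeping pattern just described looks, for instance, as follows (a schematic Python rendering of the two-operation service; not taken from any real codebase):

class PaymentService:
    def __init__(self):
        self.logged_in = False           # book-keeping variable

    def login(self, user, password):
        # ... authenticate the caller ...
        self.logged_in = True            # set when login is called

    def pay(self, amount):
        if not self.logged_in:           # read by pay to enforce the causality
            raise RuntimeError("pay invoked before login")
        # ... perform the payment ...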
Using book-keeping variables is error-prone, in particular it does not scale when the number of causality links increases <cit.>.A microservice language should therefore provide abstractions for programming workflows. For example, one can borrow ideas from BPEL <cit.>, process models <cit.> or behavioral types <cit.>, where the causal dependencies are expressed syntactically using operators such as sequential and parallel compositions, e.g., +login;pay+ would express that +pay+ becomes available only after +login+. All the vast literature on business process modeling <cit.> can offer useful abstractions to this regard.Processes A workflow defines the blueprint of the behavior of a service. However, at runtime, a service may interact with multiple clients and other external services. In our online shopping example, service +Customers+ may have to support multiple users. Beyond that, the authentication service may be used both by service +Customers+ and by other services, e.g., a +Billing+ service. This is in line with the principle that microservices can be reused in different contexts. A service should thus support multiple executions of its workflow, and such executions should operate concurrently (otherwise, a new request for using the service would have to wait for previous usages to finish before being served).A process is a running instance of a workflow, and a service may include many processes executing concurrently. The number of processes changes at runtime, since external parties may request the creation of a new process, and processes may terminate. Each process runs independently of the others, to avoid interference, and, as a consequence, it has its own private state.§ THE JOLIE LANGUAGE Jolie <cit.> is a language that targets microservices directly. Jolie was designed following the ideas discussed in Section <ref>. These were initially targeted at offering a language for programming distributed systems where all components are services, which later turned out to become the microservices paradigm.Jolie is an imperative language where standard constructs such as assignments, conditionals, and loops are combined with constructs dealing with distribution, communication, and services. Jolie takes inspiration from WS-BPEL <cit.>, an XML-based language for composing services, and from classical process calculi such as CCS <cit.> (indeed, the core semantics of Jolie is formally defined as a process calculus <cit.>), but transfers these ideas into a full-fledged programming language. While we refer to <cit.> for a detailed description of the Jolie language and its features, we discuss below the characteristics of Jolie that make it an instance of the language-based approach to microservices that we are advocating for.The connection between Jolie and MSAs is at a very intimate level, e.g., even a basic building block of imperative languages like variables has been restructured to fit into the microservice paradigm. Indeed, microservices interact by exchanging data that is typically structured as trees (e.g., JSON or XML, supported in HTTP and other protocols) or simpler structures (e.g., database records). Thus, Jolie variables always have a tree structure <cit.>, which allows the Jolie runtime to easily marshal and unmarshal data. 
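For intuition, tree-shaped values and their mapping to a wire format such as JSON can be mimicked with plain dictionaries (illustrative Python; this imitates the idea only, not Jolie's actual data model or syntax):

import json

# every value is a tree: a root value plus named children
order = {
    "$": "order-42",                      # root value ("$" is our own convention)
    "customer": {"$": "alice"},
    "items": [{"$": "book", "qty": {"$": 2}},
              {"$": "pen", "qty": {"$": 10}}],
}

wire = json.dumps(order)              # marshalling into a JSON message
assert json.loads(wire) == order      # ... and unmarshalling back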
It is clear that such a deep integration between language and microservice technologies cannot be obtained by just putting some additional library or framework on top of an existing language (which, e.g., would rely on variables as defined in the underlying language), but requires designing a new language from its very foundations.
A main design decision of the Jolie language is the separation of concerns between behavior and deployment information <cit.>. Here, with deployment information we mean both the addresses at which functionalities are exposed, and the communication technologies and data protocols used to interact with other services. In particular, as discussed in the previous section, each Jolie service is equipped with a set of ports: input ports through which the service makes its functionalities available, and output ports used to invoke external functionalities. Thanks to this separation of concerns, one can easily change how a Jolie microservice communicates with its environment without changing its behavior. If the online shopping service of Example <ref> is implemented in Jolie, its +Customers+ input port can be declared as:

inputPort Customers
  Location: "socket://www.myonlineshop.it:8000"
  Protocol: https
  Interfaces: CustomersInterface

The port declaration specifies the location where the port is exposed, which includes both the communication technology, in this case a TCP/IP socket, and the actual URL. Also, it declares the data protocol used for communication, in this case HTTPS. Finally, the port refers to the interface used for communication, which is described separately, and which can take, e.g., the form:

interface CustomersInterface
  RequestResponse:
    getList( void )( productIdList ),
    getPrice( productId )( double ),
    ...

This interface provides two request-response operations, +getList+ and +getPrice+, together with their signatures. Types +void+ and +double+ are built-in, while +productIdList+ and +productId+ are user-defined, hence their definitions have to be provided. Having ports and interfaces as first-class entities in the language allows one to clearly understand how a service can be invoked, and which services it requires. Furthermore, the same functionality can be exposed in different ways just by using different input ports, without replicating the service. For instance, in the example above, one can define a new port providing the same functionality using the SOAP protocol. Finally, part of the port information, namely location and protocol, can be changed dynamically by the behavior, improving flexibility.
Jolie also supports workflows <cit.>. Indeed, each Jolie program is a workflow: it includes receive operations from input ports and send operations to output ports, combined with constructs such as sequence, conditional and loop. Notably, it also provides parallel composition to enable concurrency, and input-guarded choice to wait for multiple incoming messages. Input ports are available only when a corresponding receive is enabled; otherwise, messages to these ports are buffered. Jolie also provides novel workflow primitives that can be useful in practical scenarios. An example is the provide-until construct <cit.>, which allows for the programming of repetitive behavior driven by external participants. Using standard workflow operators and provide-until, we can easily program a workflow for interacting with customers in our online shopping service from Section <ref>.

main
  login()( csets.sid )
    csets.sid = new ;
  provide
    [ addToCart( req )( resp ) /* ... */ ]
    [ removeFromCart( req )( resp ) /* ... */ ]
  until
    [ checkout( req )( resp ) pay; ship ]
    [ logout() ]

The workflow above starts by waiting for the user to login (operation +login+ is a request-response that returns a fresh session identifier +sid+, used for correlating incoming messages <cit.> from the same client later on). We then enter a provide-until construct where the customer is allowed to invoke operations +addToCart+ and +removeFromCart+ multiple times, until either +checkout+ or +logout+ is invoked. In the case of +checkout+, we then enter another workflow that first invokes procedure +pay+ and then procedure +ship+ (each procedure defines its own workflow). From a single workflow, multiple processes are generated. Indeed, at runtime, when a message reaches a service, correlation sets <cit.> (kept in the special +csets+ structure in the example above) are used to check whether it targets an already running process. If so, it is delivered to it. If not, and if it targets an initial operation of the workflow, a new process is spawned to manage it. Notably, multiple processes with the same workflow can be executed either concurrently or sequentially. This last option is mainly used to program resource managers, which need to enforce mutual exclusion on the access to the resource.
§ CONCLUSIONS AND RELATED WORK
We made the case for a linguistic approach to microservices, and we instantiated it on the Jolie language. Actually, any general-purpose language can be used to program microservices, but some of them are more oriented towards scalable applications and concurrency (both important aspects of microservices). Good examples of the latter are Erlang <cit.> and Go <cit.>. Between the two, Erlang is the nearest to our approach: it has one of the most mature implementations of processes, and some support for workflows based on the actor model (another relevant implementation of actors is the Akka framework <cit.> for Scala and Java). However, Erlang and Go do not separate behavior from deployment, and more concretely do not come with explicitly defined ports describing the dependencies and requirements of services. WS-BPEL <cit.> provides many of the features we described, including ports, interfaces, workflows and processes. However, it is just a composition language, and cannot be used to program single services. Also, WS-BPEL implementations are frequently too heavy for microservices. Our hope is that other languages following the language-based approach will emerge in the near future. This would also allow one to better understand which features are key in the approach, and which ones are just design decisions that can be changed. For instance, it would be interesting to understand whether language support for containerization would be useful, and which form it could take. Such support is currently absent in Jolie, but it would provide a better integration with the classic approach focused on deployment. A related topic would be to understand how to improve the synergy between Jolie and microservices on one side, and the cloud and IoT on the other side.
http://arxiv.org/abs/1704.08073v1
{ "authors": [ "Claudio Guidi", "Ivan Lanese", "Manuel Mazzara", "Fabrizio Montesi" ], "categories": [ "cs.PL", "cs.SE" ], "primary_category": "cs.PL", "published": "20170426121720", "title": "Microservices: a Language-based Approach" }
http://arxiv.org/abs/1704.08703v2
{ "authors": [ "Dominic V. Else", "Paul Fendley", "Jack Kemp", "Chetan Nayak" ], "categories": [ "cond-mat.stat-mech", "cond-mat.str-el", "math-ph", "math.MP" ], "primary_category": "cond-mat.stat-mech", "published": "20170427180030", "title": "Prethermal Strong Zero Modes and Topological Qubits" }
CERN-TH-2017-090, SLAC-PUB-16962

We present an analytic computation of the Higgs production cross section in the gluon fusion channel, which is differential in the components of the Higgs momentum and inclusive in the associated partonic radiation through NNLO in perturbative QCD. Our computation includes the necessary higher order terms in the dimensional regulator beyond the finite part that are required for renormalisation and collinear factorisation at N^3LO. We outline in detail the computational methods which we employ. We present numerical predictions for realistic final state observables, specifically distributions for the decay products of the Higgs boson in the γγ decay channel.

§ INTRODUCTION

The discovery of the Higgs boson in 2012 at the Large Hadron Collider (LHC) by the ATLAS <cit.> and CMS <cit.> experiments founded a new era of precision Higgs physics. The impressive statistical accuracy of the experimental measurements has led to tight constraints on the couplings of Higgs boson interactions. Run 2 of the LHC promises a plethora of further measurements which will allow us to probe the nature of electroweak symmetry breaking with unprecedented precision. In order to truly exploit the potential of the LHC, the excellent experimental results must be compared to equally precise theoretical predictions. The demand for high-precision theory has been met in recent years with a flurry of calculations at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) in perturbative QCD. Higher order corrections are especially important for Higgs phenomenology, in part due to the slow convergence of the perturbative expansion in the strong coupling constant for the dominant mode of Higgs hadroproduction via gluon fusion. The gluon-fusion total cross section has been computed recently through next-to-next-to-next-to-leading order (N^3LO) in perturbative QCD <cit.> in the limit of an infinite top-quark mass. At this level of precision, effects that go beyond the leading approximation of the heavy-top quark effective theory or a treatment in pure QCD, such as quark mass effects or contributions from weak boson loops, are also important. The most accurate available predictions for these effects have been combined into a single theoretical prediction for the inclusive Higgs cross section in ref. <cit.>. The Higgs cross section is measured in signal-rich regions of phase space which are shaped by carefully designed experimental event selections. Further restrictions on the phase space of the Higgs boson, its decay products and the associated radiation are required by the detector geometry and response. Within their defined acceptance, the experiments have superb capabilities to measure a multitude of kinematic distributions for the Higgs boson and its decay products in the years to come. In addition to the total cross section, it is therefore imperative to have precise theoretical predictions for differential cross sections. Recently, the pp → H+1 jet fully differential cross section has been computed at NNLO <cit.>. Combined with the N^3LO inclusive cross section, this has allowed the computation of the N^3LO Higgs cross section with a jet veto <cit.>. This is a first example of a differential cross section in gluon fusion at this perturbative order.
If a fully differential parton-level Monte Carlo at N^3LO is achieved in the future, it will become possible to assess the efficiency of the majority of event selection criteria at the same level of accuracy in perturbative QCD as the jet-veto efficiency. To achieve this goal, one could attempt to generalise any of the available methods at NNLO (sector decomposition <cit.>, slicing <cit.>, subtraction <cit.>, reweighting <cit.> and other methods <cit.>) to the next perturbative order. A generalisation of any of these methods would be a formidable task. An intermediate goal could therefore be to first compute some specific differential distributions of particular importance, which are the main ingredients for a fully differential N^3LO parton-level Monte Carlo within a slicing method. The aim of this article is to compute the Higgs cross section fully differentially in all components of the Higgs momentum and its decay products, treating the associated QCD radiation inclusively (integrating over the unrestricted phase space of all partons in the final state) at NNLO. In our computation, we maintain the full dependence of the “Higgs-differential cross section” on the dimensional regulator. This is necessary in order to enable the construction of the counterterms for ultraviolet and initial-state collinear divergences at N^3LO. Our method employs reverse unitarity <cit.> to map the phase space integrations over partonic radiation onto their loop integral duals. We then reduce the integrals appearing in the partonic cross sections to master integrals using integration-by-parts identities <cit.> and the Laporta algorithm <cit.>. The master integrals corresponding to the partonic cross sections with two partons in the final state are novel and we evaluate them using two methods: the method of differential equations <cit.> and a direct integration of the angular phase space variables <cit.>. We are able to express all master integrals in terms of hypergeometric functions which are valid to all orders in the dimensional regulator ϵ=(4-d)/2. These hypergeometric functions can be expanded to practically any order around the limit ϵ=0 in terms of polylogarithms. As a result we obtain an analytic formula for all required partonic cross sections through NNLO, including the higher order terms in the ϵ expansion which are needed as input for a future N^3LO calculation. We implement our results in a computer code and use it to obtain predictions at NNLO for various differential distributions which are of interest for LHC experiments. In addition to computing the transverse momentum and rapidity distributions for the Higgs boson through NNLO, we analyse various differential properties of the production and subsequent decay of the Higgs to two photons. This article is structured as follows. In section <ref> we introduce in detail our definition of “Higgs-differential” cross sections. In section <ref> we outline how we separate the integration of Higgs boson and QCD radiation degrees of freedom based on phase space factorisation and the reverse unitarity methodology. In section <ref> we perform explicitly the computation of the partonic cross sections for Higgs boson production through NNLO in gluon fusion. Next, we study several key Higgs boson LHC observables in section <ref> that have been obtained using a numeric code built upon our analytic results.
Finally, we give our conclusions in section <ref>.Note that while we only study differential Higgs boson observables in this article, our “Higgs-differential” method is not specific to Higgs boson processes. In fact, it relies solely on the fact that the Standard-Model Higgs boson is a singlet of QCD. As such, it can also be applied to compute differential distributions for other colourless final states, such as Drell-Yan or diboson production.§ SETUP FOR DIFFERENTIAL CROSS SECTIONS We consider the production of a Higgs boson in proton-proton collisions:Proton(P_1) +Proton(P_2) → H(p_h) + X,where in parentheses we denote the momenta carried by the external particles in the process. The four-momenta of the protons in the hadronic center of mass frame are given byP_1 = √(S)/2(1, 0, 0,1 ),P_2 = √(S)/2(1, 0, 0, -1 ),where √(S) is the collider center of mass energy. The inclusive hadronic cross section is related to the one for the partonic processesi(p_1) + j(p_2) → H(p_h) + X,with momentap_1 = x_1 P_1,p_2=x_2 P_2,via the factorisation formula σ_PP→ H+X = ∑_i,j∫_0^1 dx_1 dx_2 f_i(x_1)f_j(x_2) σ̂_ij(S, x_1, x_2,m_h^2),In the above, p_h^2 = m_h^2 is the invariant mass of the Higgs boson. The sum over i and j runs over all possible initial state partons. f_i(x) are the parton distribution functions and σ̂_ij(S, x_1, x_2,m_h^2) are the partonic cross sections.In this work we are interested in deriving cross sections for observables Ø that are sensitive to the momentum of the Higgs boson p_h and do not depend on the details of the additional radiation X. An observable of this type translates into a measurement function 𝒥_Ø(p_h) which multiplies the differential cross section giving a weight to every phase space point. A typical example for Ø is a set of kinematic cuts on the Higgs; in this case 𝒥_Ø(p_h) is equal to one if p_h passes these cuts and zero otherwise, and it can be written as a product of Heaviside θ-functions. More complicated, experimentally relevant observables may also be considered as long as they only depend on p_h. For example, 𝒥_Ø may be used to weight the production cross section by appropriate Higgs boson decay matrix elements and phase spaces. We will refer to cross sections for such observables Ø that depend on the kinematic details of the production process only through the Higgs four-momentum as Higgs-differential cross sections.One possible way to parameterise the Higgs momentum isp_h ≡( E,p_x,p_y,p_z) = (√(p_T^2 + m_h^2)cosh Y,p_T cosϕ,p_T sinϕ, √(p_T^2 + m_h^2)sinh Y),whereY=1/2log( E+p_z/E-p_z),p_T=√(E^2-p_z^2-m_h^2).Here Y is the rapidity of the Higgs boson, p_T is its momentum in the plane which is transverse to the beam axis, and ϕ is the azimuthal angle. Due to the symmetry of scattering experiments in the azimuthal plane, partonic cross sections are always independent of ϕ. Without loss of generality, we may therefore write Higgs-differential cross sections asσ_PP→ H+X[ Ø] =∑_i,j∫_-∞^+∞dY ∫_0^∞ dp_T^2 ∫_0^2πdϕ/2π∫_0^1 dx_1 dx_2 f_i(x_1)f_j(x_2) ×d^2 σ̂_ij/d Y dp_T^2(S, x_1, x_2,m_h^2,Y,p_T^2) 𝒥_Ø(Y,p_T^2,ϕ,m_h^2),whered^2 σ̂_ij/d Y dp_T^2 is the partonic double-differential p_T and rapidity distribution.Results for the partonic cross sections are naturally expressed in terms of the kinematics of the partonic processes. 
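As a quick numerical illustration of the (Y, p_T, φ) parametrisation above (a standalone Python check; the function and variable names are ours), one can build p_h and verify the on-shell condition together with the inverse relations for Y and p_T:

import math

def higgs_momentum(Y, pT, phi, mh):
    """(E, p_x, p_y, p_z) of the Higgs boson, as parameterised in the text."""
    mT = math.sqrt(pT * pT + mh * mh)     # transverse mass
    return (mT * math.cosh(Y), pT * math.cos(phi),
            pT * math.sin(phi), mT * math.sinh(Y))

Y, pT, phi, mh = 0.7, 55.0, 1.2, 125.0
E, px, py, pz = higgs_momentum(Y, pT, phi, mh)
print(E * E - px * px - py * py - pz * pz - mh * mh)    # on-shell: ~0
print(0.5 * math.log((E + pz) / (E - pz)) - Y)          # rapidity recovered: ~0
print(math.sqrt(E * E - pz * pz - mh * mh) - pT)        # p_T recovered: ~0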
We thus define the partonic center of mass energys=x_1 x_2 S=(p_1+p_2)^2and the ratiosτ = m_h^2/S,z = m_h^2/s = τ/x_1x_2.Furthermore, we introduce the two Lorentz-invariant quantities x and λλ≡s-2p_1· p_h/s-m_h^2,x ≡s(p_1+p_2-p_h)^2/(s-2p_1 · p_h)(s-2p_2 · p_h),and the shorthand notation z̅≡ 1-z, λ̅≡ 1-λ, x̅≡ 1-x. We can express p_T and Y in terms of these two new variables and the Bjorken-fractions:Y=1/2log[ x_1/x_21-z̅λ̅/1-z̅λ x/1-z̅λ],p_T^2 = s z̅^2λλ̅x̅/1-z̅ x λ.The Higgs-differential cross sections, when cast in terms of these variables, take the form:σ_PP→ H+X[Ø]= τ∑_i,j∫_τ^1 dz/z∫_τ/z^1 dx_1/x_1∫_0^1 dx ∫_0^1 dλ∫_0^2πdϕ/2π × f_i(x_1)f_j(τ/x_1 z) 1/zd^2 σ̂_ij/dx dλ(z,x,λ,m_h^2) 𝒥_Ø(x_1,z,x,λ,ϕ,m_h^2).Note that we absorbed the Jacobian of the variable transformation from (p_T,Y) to (x,λ) into the definition of the partonic Higgs-differential cross section. Having obtained the above formula, we are now ready to compute the cross section for any hadron collider observable which is differential in the Higgs boson momentum from a partonic Higgs-differential distribution in x, λ.§ THE HIGGS-DIFFERENTIAL PHASE SPACE In the previous section we parametrised the degrees of freedom corresponding to the momentum of the Higgs boson with variables z, x, λ and ϕ defined ad hoc. In the following it will become clear why this particular parametrization of the Higgs boson degrees of freedom is beneficial to our calculation. This section is divided in three parts. In the first part we discuss how the phase space of the final state can be separated into a part that depends solely on the Higgs boson kinematics, and one that captures all dependence on the additional radiation. We will determine a set of phase space integrals in such a way that we can carry out the integration over the radiation analytically while remaining differential in the Higgs boson degrees of freedom. In the second part we discuss the computation of these integrals using reverse unitarity. Finally, we will explicitly construct the parametrisation of p_h in terms of z, x, λ, and discuss the features of the remaining phase space.§.§ Separation of the Phase Space The partonic cross section at a given order in QCD perturbation theory is comprised of phase space integrals over a sum of real and virtual contributions with different multiplicities of partons in the final state. Schematically, we may write, σ̂_ij∼∑_m ∫ dΦ_H+mℳ_ij→ H+X,where X stands for a set of m final state partons. We are going to examine the integration of a matrix element for a fixed multiplicity of partons over the phase space measure dΦ_H+m. Due to soft and collinear singularities, such integrations are divergent in four dimensions and we regulate them in dimensional regularisation. We will elaborate on the detailed structure of the partonic matrix elements through NNLO in section <ref>.The integration measure for the phase space describing the production of a Higgs boson and m additional partons with outgoing momenta k_1,…,k_m is given bydΦ_H+m = d^dp_h/(2π)^d (2π)δ_+(p_h^2-m_h^2) [∏_i=1^m d^dk_i/(2π)^d (2π)δ_+(k_i^2)] (2π)^d δ^d(p_1+p_2-p_h-∑_i=1^m k_i),where δ_+(p^2-M^2)=θ(p^0-M)δ(p^2-M^2). In order to compute Higgs-differential cross sections it is natural to separate the integration over the momentum of the Higgs from the integral over momenta of the final state state partons. This can be achieved by inserting unity into eq. 
(<ref>) using1=∫d^dk/(2π)^d (2π)^dδ^d(k-∑_i=1^m k_i)∫_0^∞dμ^2/2π(2π)δ_+(k^2-μ^2).This identity allows us to factor the H plus m parton phase space into the phase space of two massive particles and the phase space of m massless partons:dΦ_H+m=∫_0^∞dμ^2/2πdΦ_HX(μ^2) dΦ_m(μ^2),wheredΦ_HX = d^dp_h/(2π)^d(2π)δ_+(p_h^2-m_h^2) d^dk/(2π)^d(2π)δ_+(k^2-μ^2) (2π)^dδ^d(p_1+p_2-p_h-k), dΦ_m = [∏_i=1^m d^dk_i/(2π)^d (2π)δ_+(k_i^2)] (2π)^dδ^d(k-∑_i=1^m k_i).Note that these formulae are valid for an arbitrary number of final state partons m≥ 0. A key ingredient of our method is that, because of the class of observables we restricted ourselves to, the integrals over the extra QCD radiation are not constrained by the definition of the measurement. This allows us to address the integration over dΦ_HX(μ^2) and dΦ_m(μ^2) separately.§.§ Final State Radiation and Reverse Unitarity In the present section we address how the analytic integration over additional final state partons can be performed. In particular, we advocate that the framework of reverse unitarity <cit.>, that has already been successfully used in the computation of the inclusive cross section <cit.>, provides a particularly efficient solution to this task. This framework exploits the duality between inclusive phase space integrals and loop integrals to treat them in a uniform way.Specifically, using Cutkosky's rule <cit.>, it is possible to express the on-shell constraints appearing in phase space integrals through cut propagators δ_+(q^2)→[1/q^2]_c = 1/2π iDisc(1/q^2) = 1/2π i(1/q^2+i0-1/q^2-i0).Cut propagators can be differentiated in a similar way to ordinary propagators with respect to their momenta, ∂/∂ q_μ( [ 1/q^2]_c)^ν = - ν( [ 1/q^2]_c)^ν+1 2 q^μ ,leading to identical integration-by-parts (IBP) identities <cit.> for inclusive phase space integrals as for their dual loop integrals.The fact that the cut propagator represents a delta function is reflected by the simplifying constraint that any integral containing a cut propagator raised to a negative power vanishes: ([1/q^2]_c)^-n = 0for n ≥ 0.IBP identities serve to relate different inclusive phase space and loop integrals and express them in terms of a finite set of so-called master integrals. Once a partonic cross section is represented in terms of a linear combination of master integrals, only those integrals have to be computed by other means. In refs. <cit.> a modification of the reverse unitarity framework was developed in order to allow for the computation of the partonic Drell-Yan and Higgs boson production cross section differential in the rapidity of the electro-weak final state boson. Here we want to take this procedure one step further in order to maintain all differential information about the Higgs boson.This can be simply achieved by refraining from including the integration over the Higgs boson momentum into the reverse unitarity framework. Exploiting again the schematic representation of our partonic cross sections eq. (<ref>) we may write σ̂_ij(S, x_1, x_2, m_h^2) ∼∑_m ∫_0^∞dμ^2/2πdΦ_HX×[ ∫ dΦ_m ℳ_ij→ H+X]_Reverse Unitarity.To put this into other words: the factorisation of the Higgs boson phase space and the QCD radiation phase space achieved by eq. (<ref>) allows us to carry out the integration over the parton momenta separately from the integration over the Higgs boson momentum. This enables us to use different methods for the two phase spaces without sacrificing any information about the differential properties of the Higgs boson. 
In particular, we can compute the QCD radiation phase space inclusively in dimensional regularisation. This lends itself naturally to the method of reverse unitarity. It allows us to apply IBP identities to the partonic matrix elements depicted in the square brackets in eq. (<ref>) and express them in terms of differential master integrals. These master integrals can subsequently be computed by different means such as direct integration or differential equations, as we will demonstrate below. The resulting integrated matrix elements are given by a Laurent series in the dimensional regulator ϵ. The explicit poles correspond to the regulated soft and collinear singularities of the final state radiation. In contrast to conventional methods for the computation of differential cross sections, we circumvent the problem of subtracting infrared singularities among final state partons by performing these phase space integrals analytically in dimensional regularisation. We now turn to discuss the specific parametrisation of the remaining degrees of freedom.

§.§ Parametrising the Higgs Boson Degrees of Freedom

Once the analytic integration over the partonic final state momenta has been performed, the remaining degrees of freedom are parametrised by the Higgs boson four-momentum p_h, or equivalently by the collective momentum of the integrated final state partons k. The inclusive integration over p_h and k is plagued by infrared and collinear singularities when k^2 = 0 or when transverse momenta vanish, p_hT^2 = k_T^2 = 0. These kinematic limits correspond to the configuration of a single resolved real emission and to Born level kinematics, respectively. We will now parametrise the measure of eq. (<ref>) in such a way that leftover divergences can be regulated in a straightforward manner. We shall see how the parameters x and λ introduced in section <ref> are actually a simple solution of this problem. We parametrise the scalar products of the initial state parton momenta with k by

2k·p_1 = z̅λs, 2k·p_2 = z̅λ̃s.

We then find it convenient to exchange the variable λ̃ for x, where

k^2 = μ^2 = s z̅^2 λλ̃x, λ̃ = (1-λ)/(1-z̅λx).

The parameter λ then effectively measures the collective direction of the QCD radiation, while x corresponds to its invariant mass. Note that these new variables range in the interval [0,1]. The potential leftover singularities in the (λ,x) space are factorised and lie on the faces of the [0,1]×[0,1] square, without any extra divergences as the corners are approached. With these definitions we find after some algebra (see for example <cit.>) the following parametrisation for the two particle massive phase space measure,

dΦ_HX = s^d/2-2/(4(2π)^d-1) dΩ_d-2 dλ z̅^d-3 (λλ̃)^d/2-2 (1-x)^d/2-2 θ(s)θ(z̅)θ(λ)θ(λ̅)θ(x̅),

where the integral over the (d-2)-dimensional solid angle can be easily carried out using ∫dΩ_d = 2π^d/2/Γ(d/2). The total integration measure of eq. (<ref>) then becomes

dΦ_H+m = dx z̅^2 λλ̅/(1-z̅λx)^2 dΦ_HX(x,λ) dΦ_m(x,λ).

It should be noted here that our parametrisation of μ^2 in terms of x and λ spoils the traditional factorisation of the phase space into a convolution over two independent phase spaces. Instead, our phase space is now factorised into an iterative form. First we perform the phase space integral over the QCD radiation phase space dΦ_m, obtaining a result as a function of x and λ. Afterwards we can perform the integral over the Higgs phase space dΦ_HX in terms of λ and finally over the additional parameter x. The expression given in eq.
(<ref>) is valid for 2 or more final state partons. The cases of zero or one final state partons need to be addressed separately as they represent limiting cases of the general parametrisation above. It is however straightforward to parametrise these cases explicitly. For zero partons we find

dΦ_H+0 = (2π/s) δ(z̅),

and for one final state parton we define

dΦ_H+1 = s^d/2-2/(4(2π)^d-2) z̅^d-3 dΩ_d-2 dλ (λλ̅)^d/2-2 θ(s)θ(z̅)θ(λ)θ(λ̅).

Now that we have derived an explicit parametrisation of the phase space for the Higgs boson, it is useful to make contact again between our integration variables and the actual properties of the Higgs boson. Let y be the rapidity of the Higgs boson in the partonic centre of mass frame, which is related to Y by

Y = y_0 + y, y_0 = 1/2 log(x_1/x_2).

The partonic rapidity and transverse momentum of the Higgs boson are related to λ and x by

y = 1/2 log[(1-z̅λ̃)/(1-z̅λ)], p_T^2 = (m_h^2/z) z̅^2 (1-x) λλ̃.

It is easy to see that the parametrisation obtained here corresponds to the choice of variables introduced in eq. (<ref>). The differential partonic cross section required to compute the Higgs-differential hadronic cross section in eq. (<ref>) is simply obtained by performing all integrations over the final state degrees of freedom, except for the integrations over x and λ. We would like to remark that, although the phase space parametrisation in this section was derived specifically for the case of Higgs boson production for definiteness, it actually holds for any single particle colourless final state. Moreover, extending the (x,λ) parametrisation to the case of the production of a colourless final state system composed of more particles, the separation of the phase space and reverse unitarity still work as discussed, without the need of any further change.

§ COMPUTATION OF THE HIGGS-DIFFERENTIAL CROSS SECTION THROUGH NNLO

In the previous section we established a framework for the computation of Higgs-differential cross sections. In this section we explicitly compute the differential cross section for the production of a Higgs boson via the gluon fusion mechanism through NNLO in the infinite top mass limit.

§.§ Partonic Cross Sections for Gluon Fusion

We compute the Higgs boson cross section in an approximation to the full Standard Model where the top quark mass is considered to be infinite and internal top quark loops can be integrated out. This leads to an effective field theory where the Higgs is directly coupled to gluons <cit.> through an effective operator

ℒ = ℒ_QCD - (1/4) C^0 G_μν G^μν h.

Here, ℒ_QCD is the QCD Lagrangian, h is the Higgs boson field, G_μν the gluon field strength and C^0 the Wilson coefficient <cit.> that arises from matching the effective theory to the full Standard Model. The QCD Lagrangian contains n_f massless quark fields with n_c colours. In the following we will compute perturbative corrections in the strong coupling constant α_S through NNLO using this effective theory:

(1/z) d^2σ̂_ij/dx dλ (z,x,λ,m_h^2) = (C^0)^2 σ̂_0 η_ij(z,x,λ) = (C^0)^2 σ̂_0 ∑_k=0^∞ (α_S/π)^k η^(k)_ij(z,x,λ),

where we normalised the coefficient functions η_ij to the Born cross section σ̂_0 = π/(8(n_c^2-1)).
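As a concrete sanity check of the kinematic parametrisation above (our own illustration; the numerical inputs are arbitrary), the following self-contained Python sketch builds the Higgs four-momentum in the partonic centre-of-mass frame from chosen values of (z, x, λ) and verifies that the defining invariants λ and x, as well as k^2 = s z̅^2 λλ̃x, are recovered.

import numpy as np

mh2 = 125.0**2
z, x, lam = 0.4, 0.3, 0.7           # arbitrary point with 0 < z, x, lam < 1
s = mh2 / z
zb = 1.0 - z
lamt = (1.0 - lam) / (1.0 - zb * lam * x)         # lambda-tilde
pT2 = (mh2 / z) * zb**2 * (1.0 - x) * lam * lamt  # partonic p_T^2
y = 0.5 * np.log((1.0 - zb * lamt) / (1.0 - zb * lam))

# partonic CM frame: p1, p2 along the +z / -z axis; metric (+,-,-,-)
rs = np.sqrt(s)
p1 = np.array([rs / 2, 0.0, 0.0,  rs / 2])
p2 = np.array([rs / 2, 0.0, 0.0, -rs / 2])
mT = np.sqrt(mh2 + pT2)
ph = np.array([mT * np.cosh(y), np.sqrt(pT2), 0.0, mT * np.sinh(y)])

def dot(a, b):
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

k = p1 + p2 - ph
lam_check = (s - 2 * dot(p1, ph)) / (s - mh2)
x_check = s * dot(k, k) / ((s - 2 * dot(p1, ph)) * (s - 2 * dot(p2, ph)))
print(lam_check, x_check, dot(k, k) / (s * zb**2 * lam * lamt * x))  # -> lam, x, 1

Such round-trip checks are useful when implementing the phase space numerically, since a sign or frame-orientation mistake shows up immediately.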
Perturbative QCD calculations are plagued by the presence of ultraviolet and infrared divergences that appear during the computational steps and cancel for well defined observables. We regulate these divergences by applying the framework of dimensional regularisation in the MS scheme, continuing the number of space time dimensions to d = 4-2ϵ. Ultraviolet finite observables are obtained by renormalising the parameters of the theory. The required redefinitions of the strong coupling constant and of the Wilson coefficient are given by

α_S^0 = α_S(μ^2) (μ^2/4π)^ϵ e^ϵγ_E Z_α(μ^2), C^0 = C(μ^2) Z_C(μ^2),

where γ_E is the Euler-Mascheroni constant. For convenience we provide the well known factors Z_α(μ^2) and Z_C(μ^2) in appendix <ref>. The Wilson coefficient can be found in appendix <ref>. At any order in perturbation theory we distinguish 6 different initial state configurations of partons:

g(p_1) + g(p_2) → H(p_h) + X
q(p_1) + g(p_2) → H(p_h) + X
g(p_1) + q(p_2) → H(p_h) + X
q(p_1) + q̅(p_2) → H(p_h) + X
q(p_1) + q(p_2) → H(p_h) + X
q(p_1) + q'(p_2) → H(p_h) + X

Here g, q, q̅ and q' represent a gluon, a quark, an anti-quark and a quark of a different flavour, respectively. All other combinations of explicit (anti-)quark flavours and gluons can be obtained from the ones above. X represents a specific partonic final state. The partonic coefficient function for the individual channels at order n for m final state partons is given by

η^(n)_ij→H+X(z,x',λ') = N_ij/(2 m_h^2 σ̂_0) ∫ dΦ_H+m δ(x-x') δ(λ-λ') ∑ℳ^(n)_ij→H+X,

where ∑ℳ^(n)_ij→H+X is the coefficient of α_S^n in the coupling constant expansion of the modulus squared of all amplitudes for partons i and j producing the final state H+X, summed over polarisations. The integration measure dΦ_H+m was defined in eq. (<ref>), while the initial state dependent prefactors N_ij are given by

N_gg = 1/(4(1-ϵ)^2(n_c^2-1)^2), N_gq = N_qg = 1/(4(1-ϵ)(n_c^2-1)n_c), N_qq̅ = N_qq = N_qq' = 1/(4n_c^2).

For the differential cross section through NNLO we require amplitudes with up to two additional partons in the final state as well as contributions with up to two loops. At any given order α_S^n, the sum of the number of loops and the number of final state partons is equal to n. All purely virtual partonic cross sections are identical to those required for the computation of the inclusive Higgs boson cross section, which were obtained at two loops in refs. <cit.>. At NLO we require tree level matrix elements with one additional parton in the final state (R). At NNLO we require real-virtual (RV) matrix elements with one loop and one additional parton in the final state and double-real (RR) matrix elements with two additional final state partons. In order to compute the partonic coefficient functions we generate the necessary Feynman diagrams using QGRAF <cit.>. We then perform spinor, tensor and colour algebra in a private code based on <cit.>, a code based on <cit.> and a private package. The resulting expressions represent the phase space and loop integrands for the partonic cross sections. Using the reverse unitarity framework discussed in section <ref>, we treat loop and phase space integration on an equal footing and use IBP identities <cit.> to express the partonic cross sections in terms of master integrals. We then compute the master integrals explicitly, as discussed in section <ref>.
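The IBP reductions themselves are carried out with dedicated computer algebra programs. As a self-contained toy illustration of the type of identity being exploited (a textbook one-loop tadpole, not one of the integrals of this calculation), the following sympy snippet verifies the IBP recursion that follows from integrating a total derivative, using the known closed form of the integral.

# IBP toy example: for the Euclidean tadpole
#   I(nu) = \int d^dl / pi^{d/2}  (l^2 + m^2)^{-nu},
# integrating the total derivative d/dl^mu [ l^mu (l^2 + m^2)^{-nu} ]
# yields the relation (d - 2 nu) I(nu) + 2 nu m^2 I(nu + 1) = 0.
import sympy as sp

d, nu, m2 = sp.symbols('d nu m2', positive=True)

def I(n):
    # closed form of the tadpole, normalised by pi^{d/2}
    return sp.gamma(n - d / 2) / sp.gamma(n) * m2**(d / 2 - n)

identity = (d - 2 * nu) * I(nu) + 2 * nu * m2 * I(nu + 1)
print(sp.simplify(sp.gammasimp(identity)))  # prints 0

In the actual computation the same mechanism, applied to the cut integrals, reduces the large number of phase space and loop integrals to the small set of master integrals discussed below.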
Inserting the master integrals into the partonic matrix elements, we obtain the partonic cross sections as a Laurent expansion in the dimensional regulator. Each of the contributions corresponding to different parton and loop multiplicities is separately infrared divergent, as made manifest by explicit poles in the dimensional regulator. After summing up the contributions, some of the divergences cancel by virtue of the KLN theorem. The remaining divergences are absorbed by a suitable redefinition of the parton distribution functions,

f_i(x_i) = (Γ_ij ∘ f_j^R)(x_i),

where the convolution indicated with ∘ is defined by

(f∘g)(z) = ∫_0^1 dx dy δ(z-xy) f(x) g(y) = ∫_z^1 (dx/x) f(x) g(z/x).

The infrared counterterms Γ_ij consist of convolutions of splitting functions P_ij^(n) and can be derived from the DGLAP equation; for reference we provide their explicit form in appendix <ref>. The remaining physical parton distribution functions f^R_i are process independent and are extracted from measurements. In inclusive calculations, it is often useful to employ the commutativity and associativity of the convolutions to rewrite them such that the infrared counterterm Γ is convoluted with the partonic cross section before convoluting the result with the bare parton distributions. The first convolution, between the counterterm and the partonic cross section, can thus be performed analytically. The required splitting functions were obtained in refs. <cit.> and the convolutions were performed for example in refs. <cit.>. For the purposes of this work, it is impractical to perform these convolutions analytically, as they depend on the additional parameters λ and x due to momentum conservation. We therefore perform the convolution of the physical parton distribution functions with the infrared counterterm numerically. The infrared counterterms Γ themselves contain poles in ϵ. In order to obtain the correct contributions to the finite part, it is therefore necessary to compute the lower order partonic cross sections to higher orders in the dimensional regulator than just the finite term. To obtain a finite fixed order differential cross section we expand the product of bare parton distribution functions f_i, Wilson coefficient C^0 and partonic coefficient functions η_ij and truncate the product at fixed order. After the combination of all these contributions, we complete the computation of the Higgs-differential production cross section through NNLO, eq. (<ref>). Extending the computation to one order higher in the strong coupling constant (N^3LO) will require as an input the partonic cross sections computed to one order higher in the dimensional regulator. One of the main results of this work is the set of analytic expressions for all the necessary partonic cross sections through NNLO up to and including order ϵ^1, as required for an N^3LO computation. This result is available in electronic form in an ancillary file submitted together with this publication.

§.§ Evaluation of Master Integrals

In this section we elaborate on the computation of the master integrals that serve as building blocks for our partonic cross section. To validate our results, we follow two different strategies. The first is based on the method of differential equations <cit.> and is well established in the field of high order computations.
As both strategies lead to identical results, we will not discuss this first method in more detail. The second approach is based on the direct evaluation of phase space and loop integrals. Here, we discuss only master integrals involving at least one final state parton, as purely virtual corrections are well known <cit.>. At NLO the only master integral is simply given by the phase space measure, eq. (<ref>).

§.§.§ Double-real master integrals

At NNLO there are contributions with two final state partons (double-real) as well as one-loop corrections to the emission of a single final state parton (real-virtual). Let us consider the double-real radiation (RR) first. We find eight master integrals M_i^RR, corresponding to the six diagrams in Figure <ref>. Propagators crossing the dashed line in the diagrams represent cut propagators, while the double line marks the massive Higgs boson propagator. The fact that we are interested in Higgs-differential master integrals means that the momentum of the Higgs boson is completely fixed, so that no integration over the degrees of freedom of the Higgs boson occurs. In accordance with the phase space definitions of eqs. (<ref>) and (<ref>), all RR master integrals then take the form

I(p,q;α,β) ≡ ∫ d^dl/(2π)^d-2 δ(l^2) δ((l-k)^2) / [(l+p)^2α (l+q)^2β], with k^2 ≠ 0, p^2 = 0, q^2 ∈ ℝ,

where again we denoted with k the momentum of the parton system. The generalised exponents α and β and the momenta q and p specify the propagators that appear in the respective master integral. In particular, p and q are linear combinations of p_1, p_2 and k. The integral over l stands for the integral over one of the two final state parton momenta. The result is then a function of all Lorentz invariants of the system. The integral I(p,q;α,β) can be evaluated for generic values of the parameters in terms of Appell's hypergeometric function <cit.>:

I(p,q;α,β) = (-1)^α+β π^-1+ϵ 2^-3-β+2ϵ (k^2)^-ϵ Γ(1-β-ϵ)/Γ(2-β-2ϵ) × (q^2-q·k)^α-β/(k^2(q·p)-q^2(k·p))^α × F_1(α; 1-β-ϵ, 1-β-ϵ; 2-β-2ϵ; 1-(1+√(1-4v_11))/(2v_12), 1-(1-√(1-4v_11))/(2v_12)),

where we have defined the auxiliary quantities

1-4v_11 ≡ [(q·k)^2-k^2q^2]/(q^2-q·k)^2, 2v_12 ≡ [q^2(k·p)-k^2(q·p)]/[(p·k)(q^2-q·k)].

The eight master integrals, shown in Figure <ref>, contributing to the RR partonic cross section, expressed in terms of the function I(p,q;α,β), are explicitly given by

M_1^RR = C_RR (1-2ϵ) I(0,0;0,0),
M_2^RR = -C_RR ϵ/(1-z̅λx) I(0,p_1+p_2;0,1),
M_3^RR = C_RR λϵ (λ^2 x z̅^2 - 2λx z̅ + x z̅ - z̅ + 1)/(1-z̅λx) I(p_2,k-(p_1+p_2);1,1),
M_4^RR = C_RR (1-λ)ϵ I(p_1,p_1+p_2;1,1),
M_5^RR = -C_RR (1-λ)λ(1-x) z̅^2 ϵ/(1-z̅λx) I(p_2,k-p_1;1,1),
M_6^RR = C_RR λϵ (1-z̅x)/(1-z̅λx) I(p_2,p_1+p_2;1,1),
M_7^RR = C_RR (1-λ)ϵ (λ^2 z̅^2 x(1-x) - z̅ + 1)/(1-z̅λx)^2 I(p_1,k-(p_1+p_2);1,1),
M_8^RR = C_RR (1-λ)λx z̅^2 ϵ/(1-z̅λx) I(p_2,p_1;1,1),

where M_6^RR and M_7^RR are related respectively to M_4^RR and M_3^RR under the exchange of p_1 with p_2. The rational prefactors and the factor C_RR in the above definitions serve as a normalisation for the master integrals such that their expansion in the dimensional regulator is given by a pure function of uniform transcendental weight. This is true for all but M_2^RR, which still contains a root of a polynomial in our variables. In particular, in the prefactor C_RR we absorb additional factors that result from the phase space measure of the integration over the Higgs boson momentum, eq.
(<ref>),

C_RR = ϵ^3 (μ^2)^2ϵ e^2ϵγ_E (4π)^-2-ϵ (m_h^2)^-ϵ z^ϵ/Γ(1-ϵ) z̅^-2ϵ (λλ̅)^-ϵ (1-x)^-ϵ (1-z̅xλ)^ϵ.

We note that the number of differential master integrals is less than the number of inclusive master integrals found in the double real cross section in ref. <cit.>.

§.§.§ Real-virtual master integrals

In the case of the RV master integrals, the integral over the final state parton phase space becomes trivial and all master integrals are simply given by one-loop integrals. We define the two well known functions

Bub(p^2) = ∫ d^dl/(i(2π)^d) 1/(l^2(l+p)^2) = (4π)^-2+ϵ (-p^2)^-ϵ Γ(1-ϵ)^2 Γ(ϵ+1)/(ϵ(1-2ϵ)Γ(1-2ϵ)),
Box(q_1,q_2,q_3) = ∫ d^dl/(i(2π)^d) 1/(l^2(l+q_1)^2(l+q_1+q_2)^2(l+q_1+q_2+q_3)^2).

The box integral was computed for example in ref. <cit.>. With this we choose the following seven RV differential master integrals:

M_1^RV = -2/3 ϵ(1-2ϵ) C_RV [Bub((p_1+p_2)^2)],
M_2^RV = -2/3 ϵ(1-2ϵ) C_RV [Bub(p_h^2)],
M_3^RV = -2/3 ϵ(1-2ϵ) C_RV [Bub((p_2-p_h)^2)],
M_4^RV = -ϵ^2 s^2 λ̅ C_RV [Box(p_2,p_1,-p_h)],
M_5^RV = ϵ^2 s^2 λ C_RV {11λ̅ [Box(p_2,-p_h,p_1)] - 23 [Box(p_1,p_2,-p_h)]},
M_6^RV = -2/3 ϵ(1-2ϵ) C_RV [Bub((p_1-p_h)^2)],
M_7^RV = ϵ^2 s^2 λ C_RV {-λ̅ [Box(p_2,-p_h,p_1)] + 2 [Box(p_1,p_2,-p_h)]}.

The normalisation factor C_RV again absorbs part of the integration measure of the Higgs boson momentum and is given by

C_RV = ϵ^2 (μ^2)^2ϵ e^2ϵγ_E (4π)^-1-ϵ (m_h^2)^-ϵ z^ϵ/(2Γ(1-ϵ)) z̅^-2ϵ (λλ̅)^-ϵ.

Note that we include the usual constants arising in the MS renormalisation procedure in the definitions of C_RR and C_RV.

§.§.§ Expansion in the dimensional regulator

For further numerical evaluation, we need to expand the master integrals as a Laurent series in the dimensional regulator ϵ. The Laurent expansions for the RV master integrals are well known, so we comment here only on the expansion of the RR integrals. For the masters M_1, M_4 and M_5, Appell's hypergeometric function can be reduced to Gauss' hypergeometric function _2F_1, so one can obtain the ϵ expansion with well-known tools such as the package of ref. <cit.>. For the remaining master integrals we can directly expand Appell's hypergeometric function through weight three, starting from the following integral representation:

F_1(1;-ϵ,-ϵ;1-2ϵ;x,y) = (-2ϵ) ∫_0^1 dt (1-t)^-1-2ϵ (1-yt)^ϵ (1-xt)^ϵ
= (-2ϵ) x̅^ϵ y̅^ϵ ∫_0^1 dt t^-1-2ϵ [(1-(1-tx')^ϵ)(1-(1-ty')^ϵ) - (1-(1-tx')^ϵ) - (1-(1-ty')^ϵ)] + (-2ϵ) x̅^ϵ y̅^ϵ ∫_0^1 dt t^-1-2ϵ.

Here we have defined x' = x/(x-1) and similarly for y'. With this, the first integral in the second equality of eq. (<ref>) is finite and can be expanded in ϵ before integrating. The divergence is captured by the second integral, which can easily be integrated for generic ϵ. This yields the following expansion for F_1:

F_1(1;-ϵ,-ϵ;1-2ϵ;x,y) = 1 + ϵ ln(x̅y̅) - 2ϵ^2 [Li_2(x) + Li_2(y) + 1/4 ln^2(x̅/y̅)] - 2ϵ^3 {2Li_3(x) + 2Li_3(y) - Li_3(x/y) + Li_3(x̅/y̅) + Li_3(yx̅/(xy̅)) - ln(x̅/y̅) Li_2(yx̅/(xy̅)) - ζ_3 - 1/12 ln^3(x̅/y̅) - 1/6 ln^3(-x/y) + 1/2 ln(x) ln^2(x̅/y̅)} + O(ϵ^4).

The above expression is real-valued for x ∈ [0,1], y < 0. This allows us to express all coefficients in the Laurent series in terms of real-valued classical polylogarithms, enabling a fast and stable numerical evaluation <cit.>.

§.§ Partonic Coefficient Functions

In the previous section we obtained analytic results for all partonic coefficient functions required for the computation of the differential Higgs boson cross section through NNLO.
Moreover, we computed the coefficient functions to sufficiently high order in the dimensional regulator such that the infrared subtraction term required for an N^3LO computation can be constructed. It is important to note that our coefficient functions contain single poles in the variables z̅, x and λ. These poles represent kinematic configurations where the Higgs boson degrees of freedom revert to lower multiplicity final state kinematics. The prime example is a singularity in the matrix elements when the transverse momentum of the Higgs boson tends to zero, which is the value of the transverse momentum at leading order. Specifically, these singularities are of the form

{z̅^-1+a_1ϵ, x^-1+a_2ϵ, (1-x)^-1+a_3ϵ, λ^-1+a_4ϵ, (1-λ)^-1+a_5ϵ},

where the coefficients a_i are small integer numbers. When integrating over the variables z̅, λ and x we encounter singularities that lie at the boundaries of the integration range of our variables. This is the case, for example, when we are computing observables like the rapidity distribution of the Higgs boson or the inclusive cross section. In order to be able to compute such observables we have to regulate the divergences, which we illustrate in the following example. Consider a function f(x) = x^-1+aϵ f_h(x), for some integer a and with f_h(x) holomorphic around x = 0. We are interested in integrating the function over a test function ϕ(x) on the range [0,1]. In the case of our Higgs-differential cross section, the test function ϕ(x) corresponds to the product of the parton luminosity and the measurement function. We can explicitly subtract the divergence at x = 0 and integrate by parts to obtain

I = ∫_0^1 dx f(x)ϕ(x) = ∫_0^1 dx x^-1+aϵ f_h(x)ϕ(x) = ∫_0^1 dx x^-1+aϵ [f_h(x)ϕ(x) - f_h(0)ϕ(0)] + (1/aϵ) f_h(0)ϕ(0).

We now want to give an expression for the partonic cross section that is finite even if all inclusive integrations are performed. To this end we define, in a slight abuse of notation,

f_s(0) ≡ δ(x) [x^-1+aϵ - 1/(aϵ)] f_h(0).

Here the δ distribution is to be understood as acting only on the test function and not on its coefficient in the square bracket. It is easy to see that f_s(0) integrates to zero. We can therefore regulate the integrand f(x) by subtracting f_s(0),

I = ∫_0^1 dx f(x)ϕ(x) = ∫_0^1 dx (f(x) - f_s(0))ϕ(x),

so that every term of its ϵ expansion can be integrated numerically. In the case of our Higgs-differential cross sections, we need to regulate potential end-point divergences in the three remaining variables z̅, x and λ, cf. eq. (<ref>). We define the distributions σ_s that subtract the limits of σ(z̅,x,λ) and label them by the kinematic limit of the cross section that they reproduce. For example, σ_s(z̅,0,λ) takes care of the limit of the cross section as x goes to zero. After partial fractioning to avoid simultaneous singularities on both endpoints of the integral, we obtain the following decomposition:

σ_f(z̅,x,λ) ≡ σ(z̅,x,λ) - σ_s(z̅,x,1) - σ_s(z̅,x,0) - σ_s(z̅,1,λ) - σ_s(z̅,0,λ) - σ_s(0,x,λ) + σ_s(z̅,1,1) + σ_s(z̅,1,0) + σ_s(z̅,0,1) + σ_s(z̅,0,0) + σ_s(0,x,1) + σ_s(0,x,0) + σ_s(0,1,λ) + σ_s(0,0,λ) - σ_s(0,1,1) - σ_s(0,1,0) - σ_s(0,0,1) - σ_s(0,0,0).

One main result of this article is the analytic computation of the partonic coefficient functions η^(k)_ij(z,x,λ) as defined in eq. (<ref>). We created finite versions of these coefficient functions in the spirit discussed above and provide them in machine-readable form in an ancillary file together with the arXiv submission of this article. Specifically, we provide all 18 different kinematic configurations of these coefficient functions as in eq.
(<ref>), each as a sum of products of rational coefficients and master integrals.

§ NUMERICAL RESULTS FOR THE HIGGS DIPHOTON SIGNAL

In this section, we carry out numerically the remaining integrations which are necessary in order to obtain hadronic Higgs-differential cross sections. We present results for the LHC at 13 TeV in order to showcase the types of observables which can be readily computed with our approach. While our analytical results are original and extend the literature of NNLO Higgs-differential cross sections at 𝒪(ϵ), our numerical predictions for the finite part can be compared to available computer codes. We have validated our numerical implementation against the predictions of HNNLO <cit.> and MCFM <cit.> and we have found good agreement within Monte-Carlo uncertainties. For our numerical studies we use NNLO MMHT parton distribution functions <cit.> throughout, as available from <cit.>. Their default value for the strong coupling constant, α_S(m_Z) = 0.118, and the corresponding three-loop running are also adopted. We set the Higgs boson mass to m_h = 125 GeV and neglect its width. We equate the renormalisation and factorisation scales for simplicity and we choose μ = m_h/2 as a central value. As is common practice, we estimate the effect of missing higher order corrections by varying μ by a factor of two around its central value. We start by showing distributions for the inclusive production of a Higgs boson via gluon fusion. In Figure <ref> we present the unbinned rapidity distribution of the Higgs boson. The bands correspond to the variation of the cross section at LO, NLO and NNLO in our default μ scale range. In Figure <ref> we show the p_T distribution of the Higgs boson. In order to have a non-vanishing transverse momentum, the Higgs boson needs to recoil against additional radiation, and therefore its p_T distribution is trivial at LO. In addition to simple inclusive distributions for a stable Higgs boson, we can also investigate its decays. Specifically, we present differential distributions for the Higgs diphoton signal after the application of typical selection cuts for the photons. The decay of the Higgs boson to two photons allows for precise measurements of numerous properties (see e.g. <cit.>) due to its exceptionally clean experimental signature, and it has played a crucial role in the discovery of the Higgs boson itself <cit.>. We impose photon selection cuts which follow as closely as possible the diphoton analysis of ATLAS <cit.>. We require that the pseudorapidities of both photons satisfy |η_γ| < 2.37, together with |η_γ| ∉ [1.37, 1.52], which implies that there are no photons in these regions. Furthermore, we require the photon with larger transverse momentum to satisfy p_T,γ_1 > 0.35 m_h and the one with smaller transverse momentum to have p_T,γ_2 > 0.25 m_h. Since we have integrated out the associated radiation in the production of the Higgs boson, our analysis ignores photon isolation[This is an important cut for taming the diphoton background but it is a minor one for the Higgs signal process that we study here.]. In Figure <ref>, we present the rapidity distribution of the Higgs boson after the application of the photon selection cuts described above. The bands correspond to the values of the cross section at LO, NLO and NNLO in our default μ scale range. Due to the restrictions in the coverage of the pseudorapidity of the photons, the Higgs rapidity distribution manifests some non-smooth changes. In the middle panel we show the conventional K-factors, i.e.
we normalise the rapidity distribution at LO, NLO and NNLO for a general scale μ to the LO prediction evaluated at a fixed scale μ = m_h/2. We observe that the relative size of QCD corrections at NLO and NNLO has a pattern which is similar to the one seen for the inclusive cross section, although the K-factors are not entirely uniform across bins due to the effect of the photon cuts. In the lower panel, we normalise the rapidity distribution at LO, NLO and NNLO for a general scale μ to the NLO prediction at a fixed scale μ = m_h/2. This shows that the relative K-factor from NLO to NNLO is more uniform in rapidity. In Figure <ref>, we present the distribution of the pseudorapidity difference of the two photons. The distribution has a kinematic edge at leading order at Δη ≃ 1.79. Above this point it features a Sudakov shoulder: fixed order perturbative corrections are not trustworthy and resummation is required. However, the bulk of the distribution can be calculated in fixed order perturbative QCD. In Figure <ref>, we present the p_T distribution of the leading photon. At LO, the photon p_T cannot exceed the value of m_h/2. In addition, the experimental selection imposes a lower p_T value at 0.35 m_h. This severe restriction of the phase space leads to large corrections to the available perturbative results. As such, resummation would be required to obtain stable predictions in this kinematical regime. Beyond LO the phase space for larger values of p_T opens up and the distribution becomes better behaved beyond 100 GeV. These distributions are just a small selection of possible observables that can be computed in our framework. Combining the production with other decay modes is straightforward and can be used to study a number of phenomenologically relevant observables.

§ CONCLUSION

In this article, we presented differential distributions for Higgs boson observables at NNLO in perturbative QCD, obtained within our “Higgs-differential” framework. Our computational method is a departure from common frameworks for differential calculations in that it avoids explicit subtraction of infrared divergences at the cost of being inclusive in jet observables. This is achieved by separating the QCD radiation phase space from the phase space of the produced colour-neutral final state. By integrating the QCD radiation phase space inclusively in dimensional regularisation, soft and collinear divergences are made explicit as poles in the regulator. Furthermore, this separation enables us to employ reverse unitarity and related techniques that have been developed in inclusive calculations, in order to simplify the phase-space integrations over final-state partons which are produced in association with the Higgs boson. The NNLO cross section has been cast in terms of few master integrals, which we compute in an arbitrary number of dimensions in terms of standard hypergeometric functions that admit expansions to arbitrarily high order in the dimensional regulator. We have presented results that go beyond the finite term in the ϵ expansion, which were unknown in the literature. These new results are essential ingredients for the construction of collinear and UV counterterms for Higgs-differential cross sections at N^3LO.
The master integrals encountered here do not depend on the exact nature of the colourless final state produced and could be directly re-used in a similar differential Drell-Yan calculation. Finally, we implemented the Higgs boson cross section through NNLO in a numeric code and tested its efficiency in kinematic distributions for the Higgs boson and its decay products in the diphoton signal. The predictions for these distributions were compared against results obtained from existing Monte-Carlo generators, thus validating our approach. The main motivation for the approach presented in this article is its extensibility to even higher orders in QCD perturbation theory. The general framework of separating the QCD radiation phase space from the Higgs phase space persists at higher orders. This completely isolates the numerical calculation of distributions from the complications of higher order QCD computations. These should only be reflected in the master integrals that appear at higher orders. In this respect, it has been encouraging that the NNLO master integrals for Higgs-differential cross sections are relatively simple to compute. Using techniques that have been honed in inclusive calculations at N^3LO, it should therefore be possible to compute the required master integrals, leading to differential predictions for a hadron collider at N^3LO in QCD perturbation theory.

§ ACKNOWLEDGEMENT

We thank Babis Anastasiou for inspiring discussions and useful comments on the manuscript. We are also grateful to Simone Alioli, Claude Duhr and Achilleas Lazopoulos for fruitful discussions and to Alexander Huss for numerical comparisons and valuable exchange of views. SL, AP and CS are supported by the ETH Grant ETH-21 14-1 and the Swiss National Science Foundation (SNSF) under contracts 165772 and 160814. The work of FD is supported by the U.S. Department of Energy (DOE) under contract DE-AC02-76SF00515. BM is supported by the European Commission through the ERC grants “pertQCD” and “HICCUP”.

§ RENORMALISATION FACTORS

The renormalisation factors for the strong coupling constant and the Wilson coefficient (<ref>) required for a computation through N^3LO <cit.> are given by

Z_α = 1 + (α_S/π)(-β_0/ϵ) + (α_S/π)^2 (β_0^2/ϵ^2 - β_1/(2ϵ)) + (α_S/π)^3 (-β_0^3/ϵ^3 + 7β_1β_0/(6ϵ^2) - β_2/(3ϵ)) + 𝒪(α_S^4),
Z_C = 1 - (α_S/π)(β_0/ϵ) + (α_S/π)^2 (β_0^2/ϵ^2 - β_1/ϵ) - (α_S/π)^3 (β_0^3/ϵ^3 - 2β_0β_1/ϵ^2 + β_2/ϵ) + 𝒪(α_S^4).

The coefficients β_i are those of the QCD beta function <cit.>. The infrared counterterm Γ consists of convolutions <cit.> of splitting functions P_ij^(n) and can be derived from the DGLAP equation. Its perturbative expansion, required for an N^3LO accurate calculation of the differential Higgs boson production cross section, is given by

Γ_ij = δ_ij δ(1-x) + (α_S/π) P^(0)_ij/ϵ + (α_S/π)^2 [1/(2ϵ^2)(P^(0)_ik∘P^(0)_kj - β_0 P^(0)_ij) + 1/(2ϵ) P^(1)_ij] + (α_S/π)^3 [1/(6ϵ^3)(P^(0)_ik∘P^(0)_kl∘P^(0)_lj - 3β_0 P^(0)_ik∘P^(0)_kj + 2β_0^2 P^(0)_ij) + 1/(6ϵ^2)(P^(1)_ik∘P^(0)_kj + 2P^(0)_ik∘P^(1)_kj - 2β_0 P^(1)_ij - 2β_1 P^(0)_ij) + 1/(3ϵ) P^(2)_ij].

§ WILSON COEFFICIENT

In the effective theory with n_f light flavours and the top quark decoupled from the running of the strong coupling constant, the MS-scheme Wilson coefficient reads <cit.>

C(μ^2) = -α_S/(3πv) { 1 + (α_S/π) 11/4 + (α_S/π)^2 [2777/288 - 19/16 log(m_t^2/μ^2) - n_f (67/96 + 1/3 log(m_t^2/μ^2))] + (α_S/π)^3 [-(6865/31104 + 77/1728 log(m_t^2/μ^2) + 1/18 log^2(m_t^2/μ^2)) n_f^2 + (23/32 log^2(m_t^2/μ^2) - 55/54 log(m_t^2/μ^2) + 40291/20736 - 110779/13824 ζ_3) n_f - 2892659/41472 + 897943/9216 ζ_3 + 209/64 log^2(m_t^2/μ^2) - 1733/288 log(m_t^2/μ^2)] + 𝒪(α_S^4) }.
Computer Science, University of Rochester (these authors contributed equally to this work)
Computer Science, University of Rochester
Electrical and Computer Engineering, University of Rochester
Computer Science, University of Rochester

Cross-modal audio-visual perception has been a long-lasting topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite works in computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluations demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space.

Deep Cross-Modal Audio-Visual Generation
Chenliang Xu

§ INTRODUCTION

Cross-modal perception, or the intersensory phenomenon, has been a long-lasting research topic in numerous disciplines such as psychology <cit.>, neurology <cit.>, and human-computer interaction <cit.>, and recently gained attention in computer vision <cit.>, audition <cit.> and multimedia analysis <cit.>. In this paper, we focus on the problem of cross-modal audio-visual generation. Our system is trained with pairs of visual and audio signals, which are typically contained in videos, and is able to generate one modality (visual/audio) given observations from the other modality (audio/visual). Fig. <ref> shows results generated by our system on a musical performance video dataset. Learning from multimodal input is challenging: despite the many works in cross-modal analysis, a large portion of the effort, e.g., <cit.>, has been focused on indexing and retrieval instead of generation. Although joint representations of multiple modalities and their correlations are explored, these methods only need to retrieve samples that exist in a database. They do not, for example, need to model the details of the samples, which is required in data generation. On the contrary, the generation task requires generating novel images and sounds that are unseen or unheard, and is of great interest to many applications, such as creating art works <cit.> and zero-shot learning <cit.>. It requires learning a complex generative function that produces meaningful outputs. In the case of cross-modality generation, this function has to map from one modality space to the other modality space, making the problem even more challenging and interesting. Generative Adversarial Networks (GANs) <cit.> have become an emerging topic in deep generative models. Inspired by Reed et al.'s work on generating images conditioned on text captions <cit.>, we design Conditional GANs for cross-modal audio-visual generation.
Different from their work, we make the networks handle intersensory generation: generating images conditioned on sounds and generating sounds conditioned on images. We explore two different tasks when generating images: instrument-oriented generation (see Fig. <ref>) and pose-oriented generation (see Fig. <ref>), where the latter task is treated as fine-grained generation compared to the former. Another key aspect to the success of cross-modal generation is being able to effectively encode and decode the information contained in different modalities. For images, Convolutional Neural Networks (CNNs) are known to perform well in various tasks. Therefore, we train a CNN and use the fully connected layer before the softmax as the image encoder, and use several deconvolution layers as the decoder/generator. For sounds, we also use CNNs to encode and decode. The input to the networks, however, cannot be the raw waveforms. Instead, we first transform the time-domain signal into the time-frequency or time-quefrency domain. We explore five different transformations and find that the log-mel spectrogram gives the best result. To explore this new problem space, we compose two datasets, i.e., Sub-URMP and INIS. The Sub-URMP dataset consists of paired images and sounds extracted from 107 single-instrument musical performance videos of 13 kinds of instruments in the University of Rochester Musical Performance (URMP) dataset <cit.>. In total 17,555 images are extracted, and each image is paired with a half-second long sound clip. The INIS dataset contains ImageNet <cit.> images of five music instruments, i.e., drum, saxophone, piano, guitar and violin. We pair each image with a short sound clip of a solo performance of the corresponding instrument. We conduct experiments to evaluate the quality of our generated images and sound spectrograms using both classification and human evaluation. Our experiments demonstrate that our conditional GANs can, indeed, generate one modality (visual/audio) from the other modality (audio/visual) to a good extent at both the instrument-level and the pose-level. We also compare and evaluate various design choices in our experiments. The contributions are three-fold. First, to our best knowledge, we introduce the problem of cross-modal audio-visual generation and are the first to use GANs on intersensory generation. Second, we propose new network structures and adversarial training strategies for cross-modal GANs. Third, we compose two datasets that will be released to facilitate future research in this new problem space. The paper is organized as follows. We discuss related work and background in Sec. <ref>. We introduce our network structure, training strategies and encoding methods in Sec. <ref>. We present our datasets in Sec. <ref> and experiments in Sec. <ref>. Finally, we conclude our paper in Sec. <ref>.

§ RELATED WORK

Our work differs from the various works in cross-modal retrieval <cit.> as stated in Sec. <ref>. In this section, we further distinguish our work from those in multimodal representation learning. Ngiam et al. <cit.> learn a shared representation between audio-visual modalities by training a stacked multimodal autoencoder. Srivastava and Salakhutdinov <cit.> propose a multimodal deep Boltzmann machine to learn a joint representation of images and their text tags. Kumar et al. <cit.> learn an audio-visual bimodal compositional model using sparse coding.
Our work differs from them by using the adversarial training framework, which allows us to learn a much deeper representation for the generator. Adversarial training has recently received a significant amount of attention <cit.>. It has been shown to be effective in various tasks, such as generating semantic segmentations <cit.>, improving object localization <cit.>, image-to-image translation <cit.> and enhancing speech <cit.>. We also use adversarial training, but on a novel problem of cross-modal audio-visual generation with music instruments and human poses that differs from other works.

§.§ Background

Generative Adversarial Networks (GANs) were introduced in the seminal work of Goodfellow et al. <cit.>, and consist of a generator network G and a discriminator network D. Given a data distribution, G is trained to generate samples that resemble draws from this distribution, while D is trained to distinguish whether a sample is genuine. They are trained in an adversarial fashion, playing a min-max game against each other:

min_G max_D V(D,G) = 𝔼_x∼p_data(x) [log D(x)] + 𝔼_z∼p_z(z) [log(1 - D(G(z)))],

where p_data is the target data distribution and z is drawn from a random noise distribution p_z. Conditional GANs <cit.> are variants of GANs where one is interested in directing the generation conditioned on some variables, e.g., labels in a dataset. They take the following form:

min_G max_D V(D,G) = 𝔼_x∼p_data(x) [log D(x|y)] + 𝔼_z∼p_z(z) [log(1 - D(G(z|y)))],

where the only difference from GANs is the introduction of y, which represents the condition variable. This condition is passed to both the generator and the discriminator networks. One particular example is <cit.>, where conditional GANs are used to generate images conditioned on text captions. The text captions are encoded through a recurrent neural network as in <cit.>. In this paper, we use conditional GANs for cross-modal audio-visual generation.

§ CROSS-MODAL GENERATION MODEL

The overall diagram of our model is shown in Fig. <ref>, where we have separate networks for Sound-to-Image (S2I) and Image-to-Sound (I2S). Each of them consists of three parts: an encoder network, a generator network, and a discriminator network. We describe the generator and discriminator networks in Sec. <ref>, and their training strategies in Sec. <ref>. We present the encoder networks for sound and image in Sec. <ref> and Sec. <ref>, respectively.

§.§ Generator and Discriminator Networks

S2I Generator The S2I generator network is denoted as G_S↦I: ℝ^|φ(A)| × ℝ^Z ↦ ℝ^I. The sound encoding vector of size 128 is first compressed to a vector of size 64 via a fully connected layer followed by a leaky ReLU, which is denoted as φ(A). Then it is concatenated with a random noise vector z ∈ ℝ^Z. The generator takes this concatenated vector and produces a synthetic image x̂_I ≡ G_S↦I(z, φ(A)) of size 64x64x3.

S2I Discriminator The S2I discriminator network is denoted as D_S↦I: ℝ^I × ℝ^|φ(A)| ↦ [0,1]. It takes an image and a compressed sound encoding vector and produces a score for this pair being a genuine pair of image and sound.

I2S Generator Similarly, the I2S generator network is denoted as G_I↦S: ℝ^|ϕ(I)| × ℝ^Z ↦ ℝ^A. The image encoding vector of size 128 is compressed to size 64 via a fully connected layer followed by a leaky ReLU, denoted as ϕ(I), and concatenated with a noise vector z. The generator takes this vector and performs a forward pass to produce a synthetic sound spectrogram x̂_A ≡ G_I↦S(z, ϕ(I)) of size 128x34.
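To make the architecture concrete, here is a minimal PyTorch sketch of the S2I generator just described. This is a sketch under assumptions: the paper fixes the 128-to-64 compression with a leaky ReLU, the concatenation with noise, and the 64x64x3 output, while the noise dimension, channel widths and batch normalisation below are our own choices.

import torch
import torch.nn as nn

class S2IGenerator(nn.Module):
    """Sketch of G_{S->I}: 128-d sound encoding + noise -> 64x64x3 image."""
    def __init__(self, noise_dim=100):
        super().__init__()
        # compress the 128-d sound encoding to 64-d, as described in the text
        self.compress = nn.Sequential(nn.Linear(128, 64), nn.LeakyReLU(0.2))
        self.project = nn.Linear(64 + noise_dim, 4 * 4 * 512)
        def up(cin, cout):
            # one deconvolution stage that doubles the spatial resolution
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        self.deconv = nn.Sequential(up(512, 256), up(256, 128), up(128, 64),
                                    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
    def forward(self, sound_enc, z):
        h = torch.cat([self.compress(sound_enc), z], dim=1)
        h = self.project(h).view(-1, 512, 4, 4)
        return self.deconv(h)

g = S2IGenerator()
img = g(torch.randn(8, 128), torch.randn(8, 100))  # -> (8, 3, 64, 64)

The discriminator described next mirrors this stack with strided convolutions and consumes the same compressed conditioning vector alongside the image.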
I2S Discriminator The I2S discriminator network is denoted as D_I↦S: ℝ^A × ℝ^|ϕ(I)| ↦ [0,1]. It takes a sound spectrogram and a compressed image encoding vector and produces a score for this pair being a genuine pair of sound and image. Our implementation is based on the GAN-CLS of Reed et al. <cit.>. We extend it to handle the challenges in operating on sound spectrograms, which have a rectangular size. For the I2S generator network, after getting a 32x32x128 feature map, we apply two successive deconvolution layers, where each has a kernel of size 4x4 with stride 2x1 and 1x1 zero-padding, and obtain a matrix of size 128x34. We apply the numpy resize function to get a matrix of size 128x44 for comparing with the ground-truth spectrogram in evaluation. The I2S discriminator network takes a sound spectrogram of size 128x34. To handle ground-truth spectrograms, we resize them from 128x44 to 128x34. We apply two successive convolution layers, where each has a kernel of size 4x4 with stride 2x1 and 1x1 zero-padding. This results in a 32x32 square feature map. In practice, we have observed that adding more convolution layers to the I2S networks helps get better output in fewer epochs. We add two layers to the generator network and 12 layers to the discriminator network.

§.§ Adversarial Training Strategies

Without loss of generality, we assume that the training set contains pairs of images and sounds {(I_i^j, A_i^j)}, where I_i^j represents the jth image of the ith instrument category in our dataset and A_i^j represents the corresponding sound. Here, i ∈ {1,2,3,…,13} represents the index to one of the music instruments in our dataset, e.g., cello or violin. Notice that even images and sounds within the same music instrument category differ in terms of the player, pose, and music note. We use 𝐈_-i to represent the set of all images of instruments of all the categories except the ith category, and use 𝐈^-j_i to represent the set of all images in the ith instrument category except the jth image. The sound counterparts, 𝐀_-i and 𝐀^-j_i, are defined likewise. Based on the input, we define three kinds of discriminator outputs: S_r, S_f and S_w. Here, S_r is the score for a true pair of image and sound that is contained in our training set, S_f is the score for a pair where one modality is generated based on the other modality, and S_w is the score for a wrong pair of image and sound. Wrong pairs are sampled from the training dataset. The generator network is trained to maximize log(S_f), and the discriminator is trained to maximize log(S_r) + (log(1-S_w) + log(1-S_f))/2. Notice that by using different types of wrong pairs, we can eventually guide the generator in solving various tasks.

S2I Generation (Instrument-Oriented) We train a single S2I model over the entire dataset so that it can generate musical performance images of different instruments from different input sounds. In other words, the same model can generate an image of person-playing-violin from an unheard sound of violin, and can generate an image of person-playing-saxophone from an unheard sound of saxophone. We apply the following training settings:

x̂_I ≡ G_S↦I(φ(A_i^j), z)
S_f = D_S↦I(x̂_I, φ(A_i^j))
S_r = D_S↦I(I_i^j, φ(A_i^j))
S_w = D_S↦I(ω(𝐈_-i), φ(A_i^j)),

where x̂_I is the synthetic image of size 64x64x3, z is the random noise vector and φ(A_i^j) is the compressed sound encoding.
ω(·) is a random sampler with a uniform distribution, and it samples images from the wrong instrument categories to construct wrong pairs for calculating S_w. We use the sound-to-image network structure as in Fig. <ref> (a).

S2I Generation (Pose-Oriented) We train a set of S2I models, with one for each music instrument category. Each model captures the relations between different human poses and input sounds of one instrument. For example, the model trained on violin image-sound pairs can generate a series of images of person-playing-violin with different hand movements according to different violin sounds. This is a fine-grained generation task compared to the previous instrument-oriented task. We apply the following training settings:

x̂_I ≡ G_S↦I(φ(A_i^j), z)
S_f = D_S↦I(x̂_I, φ(A_i^j))
S_r = D_S↦I(I_i^j, φ(A_i^j))
S_w = D_S↦I(ω(𝐈^-j_i), φ(A_i^j)),

where the main difference from Eq. (<ref>) is that in constructing the wrong pairs we sample wrong images from the correct instrument category, 𝐈^-j_i, instead of images in wrong instrument categories, 𝐈_-i. Again, we use the network structure as in Fig. <ref> (a).

I2S Generation We train a single I2S model over the entire dataset so that it can generate sound magnitude spectrograms of different instruments from different musical performance images. In other words, the same model can generate sounds of different instruments. For example, the model generates a sound spectrogram of a drum given an image that has a person playing a drum. The generator should not make mistakes on the type of instrument while generating spectrograms that are convertible to realistic sounds. In this case, we set up the training as follows:

x̂_A ≡ G_I↦S(ϕ(I_i^j), z)
S_f = D_I↦S(x̂_A, ϕ(I_i^j))
S_r = D_I↦S(A_i^j, ϕ(I_i^j))
S_w = D_I↦S(ω(𝐀_-i), ϕ(I_i^j)).

Recall that x̂_A is the generated sound spectrogram of size 128x34, and ϕ(I_i^j) is the compressed image encoding. We use the image-to-sound network as in Fig. <ref> (b).

§.§ Sound Encoder Network

The sound files are sampled at 44,100 Hz. To encode sound, we first transform the raw audio waveform into the time-frequency or time-quefrency domain. We explore a set of representations including the Short-Time Fourier Transform (STFT), Constant-Q Transform (CQT), Mel-Frequency Cepstral Coefficients (MFCC), Mel-Spectrum (MS) and Log-amplitude of Mel-Spectrum (LMS). Figure <ref> shows images of the above-mentioned representations for the same sound. We can see that LMS shows clearer patterns than the other representations. We further run a CNN-based classifier on these different representations. We use four convolutional layers and three fully connected layers (see Fig. <ref>). In order to prevent overfitting, we add ℓ2 penalties (0.015) on the layer parameters in the fully connected layers, and we apply dropout (0.7 and 0.8, respectively) to the last two layers. The classification accuracies obtained by the different representations are shown in Table <ref>. We can see that LMS shows the highest accuracy. Therefore, we choose LMS over the other representations as the input to the audio encoder network. Furthermore, LMS is smaller in size compared to STFT, which saves running time. Finally, we feed the output of the FC layer (size: 1x128) of the CNN classifier to the GAN network as the audio feature. A further merit of LMS is detailed in the experiment section. We thus choose LMS to represent the audio.
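The full path from a half-second audio chunk to the 128-d conditioning feature can be sketched in Python as follows. This is our own illustration: librosa is an assumed tool, the file name is hypothetical, and all channel widths and the hidden size of 256 are guesses, while the 128 mel bands, the dropout rates of 0.7 and 0.8, the ℓ2 penalty of 0.015 on the fully connected layers, and the STFT settings (2048-point window, 512-point hop, described in the next paragraph) follow the text.

import librosa
import torch
import torch.nn as nn

# Log-amplitude mel spectrogram (LMS) of a 0.5 s chunk sampled at 44.1 kHz.
y, sr = librosa.load('chunk.wav', sr=44100, duration=0.5)   # hypothetical file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=128)
lms = librosa.power_to_db(mel)                              # roughly 128x44

class SoundEncoder(nn.Module):
    """4 conv + 3 FC layers; channel widths are our assumptions."""
    def __init__(self, n_classes=13):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.conv = nn.Sequential(block(1, 32), block(32, 64),
                                  block(64, 128), block(128, 128))
        self.fc = nn.Sequential(nn.Flatten(),               # 128x44 -> 8x2 maps
                                nn.Linear(128 * 8 * 2, 256), nn.ReLU(), nn.Dropout(0.7),
                                nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.8))
        self.classify = nn.Linear(128, n_classes)           # softmax via the loss

    def forward(self, x):
        feat = self.fc(self.conv(x))    # 128-d feature fed to the GAN
        return feat, self.classify(feat)

enc = SoundEncoder()
x = torch.tensor(lms[None, None, :, :44], dtype=torch.float32)
feat, logits = enc(x)                   # feat: (1, 128), logits: (1, 13)
# the l2 = 0.015 penalty on the FC layers can be mimicked with weight decay:
opt = torch.optim.Adam([{'params': enc.conv.parameters()},
                        {'params': enc.fc.parameters(), 'weight_decay': 0.015},
                        {'params': enc.classify.parameters(), 'weight_decay': 0.015}])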
To calculate LMS, a Short-Time Fourier Transform (STFT) with a 2048-point FFT window and a 512-point hop size is first applied to the waveform to get the linear-amplitude linear-frequency spectrogram. Then a mel-filter bank is applied to warp the frequency scale into the mel-scale, and the linear amplitude is converted to the logarithmic scale as well.

§.§ Image Encoder Network

For encoding images, we train a CNN with six convolutional layers and three fully connected layers (see Fig. <ref>). All the convolution kernels are of size 3x3. The last layer is used for classification with a softmax loss. This CNN image classifier achieves a high accuracy of more than 95 percent on the testing set. After the network is trained, its last layer is removed, and the feature vector of size 128 from the second-to-last layer is used as the image encoding in our GAN network.

§ DATASETS

To the best of our knowledge, there is no existing dataset that we can directly work on. Therefore, we compose two novel datasets to train and evaluate our models: a Subset of URMP (Sub-URMP) dataset and an ImageNet Image-Sound (INIS) dataset. The Sub-URMP dataset is composed from the original URMP dataset <cit.>. It contains 13 music instrument categories. In each category, there are recorded videos of 1 to 5 persons playing different music pieces (see Fig. <ref>). We separate videos into 80% for training and 20% for testing and ensure that a video will not appear in both the training and testing sets. We segment the videos into small chunks with a 0.5 second duration. We use the first frame in each chunk to represent the matching image of the audio. We calculate the loudness (Γ, unit: dBFS) for all audio chunks using the formula Γ = 20·log_10(|ψ|/max(ψ)), where ψ is the array obtained after loading the wave file into numpy. We set a threshold (Θ = -45 dBFS) and delete chunks having Γ ≤ Θ. Finally, there are a total of 17,555 sound-image pairs in our composed Sub-URMP dataset. The basic information is shown in Table <ref>. We use this dataset as our main dataset to evaluate models in Sec. <ref>. All images in the INIS dataset are collected from ImageNet, shown in Fig. <ref>. There are five categories, and each contains roughly 1200 images. In order to eliminate noise, all images are screened manually. Audio files of this dataset come from a total of 77 solo performances downloaded from the Internet, such as a piano performance of the Moonlight Sonata and a violin performance of the Preludio. We sample 7200 small audio chunks from all songs, with each having a 0.5 second duration. We match the audio chunks to the instrument images to create manual sound-image pairs. Table <ref> shows the statistics of this dataset.

§ EXPERIMENTS

We first introduce our model variations in Sec. <ref>. Then we present our evaluation on instrument-oriented Sound-to-Image (S2I) generation in Sec. <ref>, pose-oriented S2I generation in Sec. <ref> and Image-to-Sound (I2S) generation in Sec. <ref>.

§.§ Model Variations

We have three variations of our sound-to-image network.

S2I-C network This is our main sound-to-image network that uses classification-based sound encoding. The model is described in Sec. <ref>.

S2I-N network This model is a variation of the S2I-C network. It uses the same sound encoding but is trained without the mismatch S_w information (see Eq. <ref>).

S2I-A network This model is a variation of the S2I-C network and differs in that it uses autoencoder-based sound encoding.
Here, we use a stacked convolution-deconvolution autoencoder to encode sound. We use four stacks. For the first three stacks, we apply convolution and deconvolution, where the output of the convolution is given as input to the next stack. In the last stack, the input (a 2D array of shape 120x36) is flattened and projected to a vector of size 128 via a fully connected layer. The network is trained to minimize the MSE for all stacks in order.

§.§ Evaluating Instrument-Oriented S2I Generation

We show qualitative examples of S2I generation in Fig. <ref>. It can be seen that the quality of the images generated by S2I-C is better than that of its variations. This is because the classifier is explicitly trained to classify the instruments in sound. Therefore, when this encoding is given as a condition to the generator network, the generator faces less ambiguity in deciding what to generate. Furthermore, while training the classifier, we observe the classification accuracy, which is a direct measurement of how discriminative the encoding is. This is not true in the case of the autoencoder: we know the loss function value, but we do not know whether it yields a good conditioning feature for our conditional GANs.

§.§.§ Human Evaluation

We have human subjects evaluate our sound-to-image generation. They are given 10 sets of images for each instrument. Each set contains four images: three generated by S2I-C, S2I-N and S2I-A, and a ground-truth image to calibrate the scores. Human subjects are well-informed about the music instrument category of the image sets. However, they are not aware of the mapping between images and methods. They are asked to score the images on a scale of 0 to 3, where the meaning of each score is given in Table <ref>. Figure <ref> shows the results of the human evaluation. More than half of all images generated by S2I-C are considered realistic by our human subjects, i.e., they receive a score of 2 or 3, and one third of them receive a score of 3. This is much higher than for S2I-N and S2I-A. In terms of mean score, S2I-C gets 1.81, while the ground truth gets 2.59; the ground-truth score is lowered by the small size at which all images are evaluated (64x64). Images of three instruments in particular received very high scores among all images generated by S2I-C. Out of 30 Cello images, 18 received the highest score of 3, while 25 received scores of 2 or above; Cello images received an average score of 1.9. Out of 30 Flute images, 15 received the highest possible score of 3, while 24 received a score of 2 or above; Flute images received an average score of 2.1. Out of 30 Double-Bass images, 18 received a score of 3, while 21 got a score of 2 or more; the average score of Double-Bass images was 2.02.

§.§.§ Classification Evaluation

We use the classifier used for encoding images (see Fig. <ref>) for evaluating our generated images. When classifying real images, the accuracy of the classifier is above 95%, so we decide to use this classifier (Γ) to verify whether the generated (fake) images are classifiable and whether they belong to the expected instrument categories. We calculate the accuracies on images generated by S2I-C, S2I-A and S2I-N. Table <ref> shows the results: the accuracy of S2I-A and S2I-N is far lower than that of S2I-C.

§.§.§ Evolution of Classification Accuracy

Figure <ref> shows the classification accuracy on images generated in both the training set and the testing set, plotted for every fifth epoch. The model used for plotting this figure is our main S2I-C network. We visualize generated images for a few key moments in the figure.
The figure shows that the accuracy increases rapidly until the 35th epoch, then falls sharply until the 50th epoch, after which it picks up again slightly, although it remains much lower than the peak accuracy. The training and testing accuracies follow nearly the same trend. One potential reason is that the discriminator loses both its classification power and its power to tell fake images apart around epoch 50. Thereafter, it recovers the ability to tell fake images apart, although not its classification power. The slightly higher accuracy is a result of generating the same image with minor variations for all the input audios, so that at least some are classified correctly. This can be seen in the images attached to the figure. At epoch 50 we have a totally random-looking image, while at epoch 60 we can see that the Cello image looks like the Cello image in the dataset, whereas the other image, which was supposed to be a Flute, looks like the Clarinet image from the dataset. Thus, while the images look like images from the dataset, they are not classified correctly. Hence we get a higher accuracy than for the random-looking images, but still not as high as for correctly classified, high-quality images. It is interesting to note that even the fifth epoch has much higher training and testing accuracies than any epoch after 40. This means that after as few as 5 epochs, not only are the images aligned with the expected category, but the generated images also have enough quality that a classifier can extract distinguishing features from them. This is not true for a random image like the ones at epoch 50.

§.§ Evaluating Pose-Oriented S2I Generation
The model and the training strategy for our pose-oriented S2I generation are described in Sec. <ref>. The results are encouraging: various poses can be observed in the generated images (see Fig. <ref>). Note that for sound encoding we used the same sound classifier as S2I-C, which is trained to classify instruments, not poses. With a classifier trained to classify music notes, we expect the results to match the expected poses even better.

§.§ Evaluating I2S Generation
When converting an LMS back into a waveform file, the high-frequency content is lost, as the mel filtering is not invertible. Therefore, we conduct the evaluation on the generated sound spectrograms. We use the sound classifier (see Fig. <ref>) that was trained to encode sound for image generation. We use this model because it is trained on real LMS and achieves a high accuracy of 80% on the testing set of real LMS. We achieve 11.17% classification accuracy on the generated LMS. One factor that might affect the accuracy is that we generate spectrograms of size 128x34 and resize them to 128x44 for classification. Furthermore, Figure <ref> compares generated LMS to real LMS. We can see that, as in the real LMS, the fake LMS has less energy in the high-frequency range and more energy in the low-frequency range.

§ CONCLUSION
In this paper, we introduce the problem of cross-modal audio-visual generation and make the first attempt to use conditional GANs for intersensory generation. In order to evaluate our models, we compose two novel datasets, i.e., Sub-URMP and INIS. Our experiments demonstrate that our model can, indeed, generate one modality (visual/audio) from the other modality (audio/visual) to a good extent, at both the instrument level and the pose level. For example, our model is able to generate the pose of a cello player given the note that is being played.
Limitation and Future Work. While our I2S model generates LMS, the classification accuracy is low. Furthermore, it would be worthwhile to have experts listen to audio waveform files reconstructed from the generated LMS spectrograms. On the other hand, we are able to generate various poses using our S2I network, but it is hard to quantify how good the generation is. Strengthening the autoencoder would enable accurate unsupervised generation; the present autoencoder appears to be limited in terms of extracting good representations. It is our future work to explore all these directions.
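As a pointer for the waveform-reconstruction step mentioned in the limitations, an approximate inversion of a generated LMS can be sketched with librosa's mel-inversion utilities (a rough sketch under assumptions: librosa >= 0.7, power-scaled mel input, and the same 2048/512 STFT parameters as in the feature extraction; quality is limited precisely because the mel warping discards high-frequency detail):

```python
import librosa

def lms_to_waveform(lms_db, sr=22050):
    # Undo the logarithmic (dB) scaling back to a power mel spectrogram.
    mel_power = librosa.db_to_power(lms_db)
    # Approximate inversion: mel -> linear spectrogram via the pseudo-inverse
    # of the mel filter bank, followed by Griffin-Lim phase recovery.
    return librosa.feature.inverse.mel_to_audio(mel_power, sr=sr,
                                                n_fft=2048, hop_length=512)
```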
http://arxiv.org/abs/1704.08292v1
{ "authors": [ "Lele Chen", "Sudhanshu Srivastava", "Zhiyao Duan", "Chenliang Xu" ], "categories": [ "cs.CV", "cs.MM", "cs.SD" ], "primary_category": "cs.CV", "published": "20170426184610", "title": "Deep Cross-Modal Audio-Visual Generation" }
Faculty of Physics, University of Vienna, Boltzmanngasse 5, Vienna, 1090, Austria

Surface impurities and contamination often seriously degrade the properties of two-dimensional materials such as graphene. To remove contamination, thermal annealing is commonly used. We present a comparative analysis of annealing treatments in air and in vacuum, both ex situ and "pre-situ", where an ultra-high vacuum treatment chamber is directly connected to an aberration-corrected scanning transmission electron microscope. While ex situ treatments do remove contamination, it is challenging to obtain atomically clean surfaces after ambient transfer. However, pre-situ cleaning with radiative or laser heating appears reliable and well suited to clean graphene without undue damage to its structure.

Abstract figure: Pre-situ annealing of typical dirty graphene samples yields atomically clean areas several hundred nm^2 in size.

Cleaning graphene: comparing heat treatments in air and in vacuum
Mukesh Tripathi, Andreas Mittelberger, Kimmo Mustonen, Clemens Mangler, Jani Kotakoski, Jannik C. Meyer and Toma Susi
December 30, 2023

§ INTRODUCTION
Graphene <cit.> has attracted considerable attention due to its excellent intrinsic properties, leading to many potential applications including DNA translocation <cit.>, nanoelectronic devices <cit.>, and sensors <cit.>. Chemical vapor deposition (CVD) allows large-area graphene to be synthesized scalably and in high yield on transition metal surfaces, from which polymers such as poly(methyl methacrylate) (PMMA) are used to transfer it onto target substrates <cit.>. To dissolve the PMMA after transfer, organic solvents like acetone, chloroform and acetic acid are commonly used <cit.>. However, none of these solvents is able to completely dissolve PMMA, and a thin layer of polymeric residue is left adsorbed on the surface <cit.>. This is a major drawback of polymer-assisted transfer and can degrade the electronic properties of graphene by introducing unintentional doping and charge impurity scattering <cit.>. In addition, hydrocarbon impurities are directly adsorbed from the atmosphere onto the surface, and mobile contamination may be pinned into place by the electron beam <cit.>. This makes atomic-level characterization by electron microscopy and electron energy loss spectroscopy <cit.> difficult, not to mention more ambitious goals such as single-atom manipulation <cit.>.

To clean graphene, several methods have been reported. Conventional thermal annealing has been optimized by varying the treatment temperature in air <cit.>, in vacuum <cit.> and in gas environments such as Ar/H_2 <cit.>, CO_2 <cit.> or N_2 <cit.>. Moreover, vacuum annealing at higher temperature for shorter times, i.e., rapid thermal annealing <cit.>, has been successfully used to remove surface contamination. Several other approaches, such as dry cleaning with activated carbon <cit.>, wet chemical treatment using chloroform <cit.>, and deposition of a metal catalyst with subsequent annealing <cit.>, have been studied. However, adsorbents and chemicals also leave residues, and depositing metal will affect transport and other properties.
Non-chemical routes such as mechanical cleaning using a contact-mode atomic force microscope <cit.> or plasma treatment <cit.> have been employed, but have limited ability to remove contamination over large areas.

In this work, we analyze and compare the effectiveness of heat treatments in air and in vacuum to clean graphene. We investigate its relative cleanliness after ex situ annealing in air on a hot plate or in a vacuum chamber. We further demonstrate a new, effective and reliable cleaning approach using black-body radiative or laser-induced heating in vacuum. In this "pre-situ cleaning", the sample is annealed in the same vacuum system as the characterization equipment, to which it is transferred without exposure to the ambient. While this is a standard technique in surface science, it has until now not been possible to combine it with electron microscopy. To study the effectiveness of the methods used, our samples were characterized using low acceleration voltage transmission electron microscopy (TEM) and atomic-resolution aberration-corrected scanning transmission electron microscopy (STEM). We find that while ex situ treatments do remove contamination, when effective they also cause significant damage. Only with the pre-situ method was it possible to achieve large areas of atomically clean graphene.

§ EXPERIMENTAL
Commercially available CVD-grown monolayer graphene suspended on Quantifoil TEM grids from Graphenea Inc. was used for the experiments. All ex situ samples were characterized using a bench-top low acceleration voltage transmission electron microscope (LVEM5, 5 kV). Selected ex situ and all pre-situ samples were characterized at high resolution using the aberration-corrected scanning transmission electron microscope Nion UltraSTEM100 operated at 60 kV (with a standard 12 h 130 ^∘C vacuum bake before insertion into the microscope, apart from the radiatively heated samples inserted via a separate airlock). All presented images have been treated with a Gaussian filter and colored with the ImageJ lookup table "fire" to highlight the relevant details.

We used two ex situ cleaning techniques: air and vacuum annealing. In air, samples were heated on a hot plate between 300–500 ^∘C for times ranging from 15 min to 1 h. Vacuum annealing was carried out in a vacuum evaporator (Korvus Technology) at a pressure of 10^-6 Torr. TEM grids were inserted into the vacuum chamber in a ceramic bucket wrapped with a resistive coil, with a thermocouple placed inside to measure the temperature.

For pre-situ annealing, we likewise used two techniques: laser annealing and radiative heating. For laser annealing, a high power diode laser (tunable up to 6 W) was aimed through a viewport at the sample held in the parked pneumatic transfer arm. The samples were iteratively treated with increasing laser power until cleaning was observed, leading to good results with 600 mW (10 % duty cycle) for 2 min. The laser spot was ∼1 mm^2 in size and the distance between the laser source and the sample was ∼40 cm. The power must be carefully controlled, since at higher power the laser will destroy the sample. Radiative heating was effected by a tungsten (W) wire that can be resistively heated to very high temperatures, mounted in a vacuum chamber attached to the microscope. The distance between the wire and the sample was ∼2–3 mm and the treatment time was 15 min.
Again, the wire power was iteratively increased until cleaning was observed, yielding good results for a current of 7 A, corresponding to a thermal power of 64 W and a wire temperature of ∼1750 K. The vacuum level for both pre-situ methods was ∼10^-8 Torr. For imaging, the samples were transferred into the Nion UltraSTEM without exposure to air.

§ RESULTS AND DISCUSSION
Figure 1 shows low voltage TEM images of suspended monolayer graphene after annealing in air at temperatures between 400–500 ^∘C (treatment at lower temperatures does not yield larger clean areas, even if the contamination layers are thinner). After air treatment at 400 ^∘C for 1 h, structural damage starts to emerge, but the residues have not been much affected, as shown in Fig. <ref>a and b. By increasing the temperature to 450 ^∘C for 30 min, tearing of the graphene sheets becomes more frequent and the concentration of impurities is reduced, as illustrated in Fig. <ref>c and d. However, significant contamination still remains. At 500 ^∘C for 15 min, crack formation is evident almost everywhere on the sample, while the density of residues decreases further, as shown in Fig. <ref>e and f. At the same time, some contamination regions appear to be thicker after the treatment. A two-step treatment of washing the sample with aqueous acetonitrile and baking in air did not show any additional effect. Thus, air annealing at high temperatures does help in removing residues, but severe damage occurs in the suspended graphene regions, presumably assisted by the etching of grain boundaries.

In vacuum, graphene can withstand significantly higher temperatures. Fig. <ref> shows TEM and STEM images of graphene annealed between 600–750 ^∘C (heated at a rate of 10 ^∘C/min and cooled to room temperature in N_2) and subsequently characterized for cleanliness. The TEM images in Fig. 2a of a sample heated to 600 ^∘C show that contaminants cover the surface, with small clean spots no larger than a few tens of nm^2. After thermal treatment at 650 ^∘C for 15 min, surface contamination was reduced (Fig. <ref>b). However, long treatments at high temperature start to cause crack formation even in vacuum. We further increased the annealing temperature to 750 ^∘C but reduced the time to only 3 minutes, and observed that many contaminants had been removed (Fig. <ref>c). We also found apparently almost fully clean areas, apart from some remaining chains of impurities, as shown in Fig. <ref>c. However, even this short treatment time resulted in severe tearing of the suspended graphene. While etching should be suppressed in vacuum, it may be that the mismatch in the thermal expansion coefficients of graphene and the substrate causes severe mechanical stress that leads to the tearing.

To verify the cleaning, we imaged the 750 ^∘C sample at higher resolution in the STEM. The medium angle annular dark-field (MAADF) image of Fig. <ref>d shows the large clean-looking areas, and some chain-like impurity patterns. Since contrast in annular dark field (ADF) STEM is directly proportional to the atomic number and the number of atoms in the beam path <cit.>, the bright spots are possibly heavier elements such as gold particles from the gold support grid that have become mobile at high temperature. However, at higher magnifications, we observed that a thin layer of contamination still covers the regions that appear clean at lower resolution. Furthermore, the square bright areas in Fig. <ref>e were caused by mobile contamination pinned onto the surface by the electron beam.
These findings may be explained by the highly lipophilic nature of graphene: a thin layer of contamination quickly adsorbs on the surface when graphene is exposed to the ambient <cit.>. Alternatively, the contaminants may not be desorbed by the treatments, but merely swept aside into larger aggregates, only to diffuse back afterwards.

To quantify the effect of cleaning, in Fig. <ref> we plot the integrated intensity measured over several hundred nm^2 of graphene (normalized by the vacuum level to account for differences in beam focusing) for air and vacuum annealing at different temperatures. For both treatments, the integrated intensity approaches unity with increasing temperature, indicating a decrease of the impurity concentration as contaminants on the surface diffuse away or are desorbed. Since we used different treatment times at different temperatures, we also calculated the time integral of the thermal energy per mole ("thermal action"), defined as S_th = N_A k_B T t, where N_A is the Avogadro constant, k_B the Boltzmann constant, T the temperature in Kelvin, and t the treatment time. From the plot in Fig. <ref> we see that relatively shorter treatments are required at higher temperature for the same or even better cleaning effect. This corroborates the effectiveness of rapid thermal annealing.

To clean graphene using pre-situ annealing in a custom-built vacuum chamber attached to the column of the STEM, we made use of both radiative energy transfer from a resistively heated W wire and heating with a high-power laser aimed at the sample. In the case of radiative heating, current and voltage were controlled using a lab power supply, and in both cases the sample was transferred for observation without breaking the vacuum. The MAADF images in Fig. <ref>a and b show clean graphene after W wire heating. Surface contamination is greatly reduced and large, uniformly clean graphene regions are obtained, as shown in Fig. <ref>b. Results of the laser cleaning are shown in Fig. <ref>c and d. Contaminants are mostly eliminated by the laser treatment, resulting in atomically clean areas of several hundred nm^2 (the MAADF image in Fig. <ref>e shows an example of the atomically clean lattice). Interestingly, while we observed mobile contamination pinning under the beam, in most cases this occurred only when the field of view contained pre-existing contamination or other defects.

§ CONCLUSIONS
In conclusion, we have compared heat treatments to clean graphene in air and in vacuum. We clearly show that air annealing is not a good method: contamination remains on the surface, and severe damage occurs at the higher temperatures where the treatment is more effective. Annealing at higher temperatures in vacuum is more effective in removing surface contaminants, but some seem to readsorb upon exposure to an air ambient. This issue can be overcome with pre-situ annealing via radiative or laser-induced heating in the same vacuum system as the electron microscope. These methods appear to be reliable and controllable for cleaning graphene and potentially other 2D crystals. However, caution must be taken in selecting the treatment time and the laser or thermal power to avoid destroying the sample. With optimal parameters, large areas of atomically clean graphene can be easily obtained.

M.T. and T.S. acknowledge funding by the Austrian Science Fund (FWF) via project P 28322-N36 and J.K. by the Wiener Wissenschafts Forschungs- und Technologiefonds (WWTF) via project MA14-009. A.M., K.M., C.M., and J.C.M.
acknowledge funding from the European Research Council (ERC) Grant No. 336453-PICOMAT. K.M. acknowledges financial support from the Finnish Foundations’ Post Doc Pool.[10]Novoselov20004ScienceK. S. Novoselov,A. K. Geim,S. V. Morozov, D. Jiang,Y. Zhang,S. V. Dubonos, I. V. Grigorieva,andA. A. FirsovElectric Field Effect in Atomically Thin Carbon Films,Science 306(5696), 666–669 (2004). Merchant2010nanolett.C. A. Merchant,K. Healy,M. Wanunu, V. Ray,N. Peterman,J. Bartel,M. D. Fischbein,K. Venta,Z. Luo,A. T. C. Johnson,andM. DrndićDNA Translocation through Graphene Nanopores,Nano Letters 10(8), 2915–2921 (2010). Becerril2008ACSnanoH. A. Becerril,J. Mao,Z. Liu,R. M. Stoltenberg,Z. Bao,andY. ChenEvaluation of Solution-Processed Reduced Graphene Oxide Films as Transparent Conductors,ACS Nano 2(3), 463–470 (2008). Schedin2007NatmaterF. Schedin,A. K. Geim,S. V. Morozov, E. W. Hill,P. Blake,M. I. Katsnelson,and K. S. NovoselovDetection of individual gas molecules adsorbed on graphene,Nat Mater 6(9), 652–655 (2007). Li2009ScienceX. Li,W. Cai,J. An,S. Kim, J. Nah,D. Yang,R. Piner, A. Velamakanni,I. Jung,E. Tutuc, S. K. Banerjee,L. Colombo,andR. S. RuoffLarge-Area Synthesis of High-Quality and Uniform Graphene Films on Copper Foils,Science 324(5932), 1312–1314 (2009). Cheng2011NanolettersZ. Cheng,Q. Zhou,C. Wang,Q. Li, C. Wang,andY. FangToward Intrinsic Graphene Surfaces: A Systematic Study on Thermal Annealing and Wet-Chemical Treatment of SiO2-Supported Graphene Devices,Nano Letters 11(2), 767–771 (2011). Her2013PhylettAM. Her,R. Beams,andL. NovotnyGraphene transfer with reduced residue,Physics Letters A 377(21–22), 1455–1458 (2013). Lin2011ACSNANOY. C. Lin,C. Jin,J. C. Lee,S. F. Jen,K. Suenaga,andP. W. ChiuClean Transfer of Graphene for Isolation and Suspension,ACS Nano 5(3), 2362–2368 (2011). Pirkle2011APLA. Pirkle,J. Chan,A. Venugopal, D. Hinojos,C. W. Magnuson,S. McDonnell, L. Colombo,E. M. Vogel,R. S. Ruoff,and R. M. WallaceThe effect of chemical residues on the physical and electrical properties of chemical vapor deposited graphene transferred to SiO2,Applied Physics Letters 99(12), 122108 (2011). Meyer08APLJ. C. Meyer,C. O. Girit,M. F. Crommie,and A. ZettlHydrocarbon lithography on graphene membranes,Applied Physics Letters 92(12), 123110 (2008). Susi172DMT. Susi,T. P. Hardcastle,H. Hofsäss, A. Mittelberger,T. J. Pennycook,C. Mangler, R. Drummond-Brydson,A. J. Scott,J. C. Meyer,andJ. KotakoskiSingle-atom spectroscopy of phosphorus dopants implanted into graphene,2D Materials 4(2), 021013 (2017). Susi15FWFT. SusiHeteroatom quantum corrals and nanoplasmonics in graphene (HeQuCoG),Research Ideas and Outcomes 1(12), e7479 (2015). Susi17UMT. Susi,J. Meyer,andJ. KotakoskiManipulating low-dimensional materials down to the level of single atoms with electron irradiation,Ultramicroscopy in press (2017). Xie2015CarbonW. Xie,L. T. Weng,K. M. Ng,C. K. Chan,andC. M. ChanClean graphene surface through high temperature annealing,Carbon 94, 740–748 (2015). Wang2017Chem.ofMater.X. Wang,A. Dolocan,H. Chou,L. Tao, A. Dick,D. Akinwande,andC. G. WillsonDirect Observation of Poly(Methyl Methacrylate) Removal from a Graphene Surface,Chemistry of Materials 29(5), 2033–2039 (2017). Lin2012Nanolett.Y. C. Lin,C. C. Lu,C. H. Yeh, C. Jin,K. Suenaga,andP. W. ChiuGraphene Annealing: How Clean Can It Be?,Nano Letters 12(1), 414–419 (2012). Ni2010Jour.oframanspectroscopyZ. H. Ni,H. M. Wang,Z. Q. Luo, Y. Y. Wang,T. Yu,Y. H. Wu,and Z. X. 
ShenThe effect of vacuum annealing on graphene,Journal of Raman Spectroscopy 41(5), 479–483 (2010). W.choi2015IEEEW. Choi,Y. S. Seo,J. Y. Park,K. B. Kim,J. Jung,N. Lee,Y. Seo,and S. HongEffect of annealing in Ar/H2 environment on chemical vapor deposition-grown graphene transferred with poly (methyl methacrylate),IEEE Transactions on Nanotechnology 14(1), 70–74 (2015). Ahn16Mater.ExpressY. Ahn,J. Kim,S. Ganorkar,Y. H. Kim,andS. I. KimThermal annealing of graphene to remove polymer residues,Materials Express 6(1) (2016). Gong2013JournofphychemC. Gong,H. C. Floresca,D. Hinojos, S. McDonnell,X. Qin,Y. Hao, S. Jandhyala,G. Mordi,J. Kim, L. Colombo,R. S. Ruoff,M. J. Kim, K. Cho,R. M. Wallace,andY. J. ChabalRapid Selective Etching of PMMA Residues from Transferred Graphene by Carbon Dioxide,The Journal of Physical Chemistry C 117(44), 23000–23008 (2013). Jang2013NanotechC. W. Jang,J. H. Kim,J. M. Kim, D. H. Shin,S. Kim,andS. H. ChoiRapid-thermal-annealing surface treatment for restoring the intrinsic properties of graphene field-effect transistors,Nanotechnology 24(40), 405301 (2013). Algara-Siller14APLG. Algara-Siller,O. Lehtinen,A. Turchanin,andU. KaiserDry-cleaning of graphene,Applied Physics Letters 104(15), 153115 (2014). Longchamp2013JourofvacscienceJ. N. Longchamp,C. Escher,andH. W. FinkUltraclean freestanding graphene by platinum-metal catalysis,Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 31(2), 20605 (2013). Goossens2012APLA. M. Goossens,V. E. Calado,A. Barreiro, K. Watanabe,T. Taniguchi,andL. M. K. VandersypenMechanical cleaning of graphene,Applied Physics Letters 100(7), 73110 (2012). Ferrah2016surfandinterfaceanaly.D. Ferrah,O. Renault,C. Petit-Etienne, H. Okuno,C. Berne,V. Bouchiat,and G. CungeXPS investigations of graphene surface cleaning using H2- and Cl2-based inductively coupled plasma,Surface and Interface Analysis 48(7), 451–455 (2016), SIA-15-0411.R1. Krivanek2010natureO. L. Krivanek,M. F. Chisholm,V. Nicolosi, T. J. Pennycook,G. J. Corbin,N. Dellby, M. F. Murfitt,C. S. Own,Z. S. Szilagyi, M. P. Oxley,S. T. Pantelides,andS. J. PennycookAtom-by-atom structural and chemical analysis by annular dark-field electron microscopy,Nature 464(7288), 571–574 (2010). Booth2008NanolettersT. J. Booth,P. Blake,R. R. Nair, D. Jiang,E. W. Hill,U. Bangert, A. Bleloch,M. Gass,K. S. Novoselov, M. I. Katsnelson,andA. K. GeimMacroscopic Graphene Membranes and Their Extraordinary Stiffness,Nano Letters 8(8), 2442–2446 (2008).
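As a back-of-the-envelope illustration of the "thermal action" S_th = N_A k_B T t introduced in the main text, the following sketch compares two of the treatments discussed there (450 ^∘C for 30 min in air versus 750 ^∘C for 3 min in vacuum); the numbers simply quantify the observation that hotter, shorter treatments need less integrated thermal energy for the same or better cleaning:

```python
from scipy.constants import R  # R = N_A * k_B, in J/(mol K)

def thermal_action(T_celsius, t_seconds):
    # S_th = N_A * k_B * T * t, with T converted to Kelvin.
    return R * (T_celsius + 273.15) * t_seconds

s_air = thermal_action(450.0, 30 * 60)   # ~1.1e7 J s / mol
s_vac = thermal_action(750.0, 3 * 60)    # ~1.5e6 J s / mol
print(s_air, s_vac, s_air / s_vac)       # ratio ~7 in favor of the hotter, shorter treatment
```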
http://arxiv.org/abs/1704.08038v1
{ "authors": [ "Mukesh Tripathi", "Andreas Mittelberger", "Kimmo Mustonen", "Clemens Mangler", "Jani Kotakoski", "Jannik C. Meyer", "Toma Susi" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170426094041", "title": "Cleaning graphene: comparing heat treatments in air and in vacuum" }
A New Class of Nonlinear Precoders for Hardware Efficient Massive MIMO Systems
Mohammad A. Sedaghat, Ali Bereyhi, Ralf R. Müller^* Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany Emails: {mohammad.sedaghat, ali.bereyhi, ralf.r.mueller}@fau.de

A general class of nonlinear Least Square Error (LSE) precoders in multi-user multiple-input multiple-output systems is analyzed using the replica method from statistical mechanics. A single-cell downlink channel with N transmit antennas at the base station and K single-antenna users is considered. The data symbols are assumed to be iid Gaussian, and the precoded symbols on each transmit antenna are restricted to be chosen from a predefined set X. The set X encompasses several well-known constraints in wireless communications, including signals with peak power, constant envelope signals and finite constellations such as Phase Shift Keying (PSK). We determine the asymptotic distortion of the LSE precoder under both the Replica Symmetry (RS) and the one-step Replica Symmetry Breaking (1-RSB) assumptions. For the case of a peak power constraint on each transmit antenna, our analyses under the RS assumption show that the LSE precoder can reduce the peak to average power ratio to 3dB without any significant performance loss. For PSK constellations, as N/K grows, the RS assumption fails to predict the performance accurately, and therefore investigations under the 1-RSB assumption are further considered. The results show that the 1-RSB assumption is more accurate.

§ INTRODUCTION
Massive Multiple-Input Multiple-Output (MIMO) is among the main technologies for the next generation of wireless networks <cit.>. In massive MIMO systems, multi-antenna base stations utilize precoding techniques to focus the transmitted power on the desired users. This shifts a large portion of the system's overall processing to the base stations, which, in contrast to power-limited users, do not face a computing power constraint. So far, several precoding schemes have been proposed, including linear and nonlinear schemes. Linear schemes mainly consist of Match Filtering (MF), Zero Forcing (ZF) and Regularized Zero Forcing (RZF), where in practice each of them could be preferred depending on the desired tradeoff between complexity and performance <cit.>. As examples of nonlinear schemes, one names Tomlinson-Harashima <cit.> and vector precoding <cit.>. Regarding data support, precoding schemes can be designed for finite or infinite input alphabets. In finite alphabet cases, the users' data symbols are chosen from a finite and countable set, e.g., a Phase Shift Keying (PSK) constellation <cit.>. In this paper, we consider a nonlinear Least Square Error (LSE) precoder in which the signal on each transmit antenna is restricted to be chosen from a predefined set. The main motivation for investigating the LSE precoder is to model a large variety of signal constraints at massive MIMO base stations that allow more efficient hardware to be used. As an example, the LSE precoder can be designed to fulfill an instantaneous peak power constraint on each transmit antenna, which avoids clipping at the power amplifiers. Furthermore, the LSE precoder is able to fix the envelope of the transmitted signals to increase the power efficiency of the power amplifiers, an approach initially investigated in <cit.>.
The LSE precoder also enables us to have finite alphabet signals on the antennas[Note that the constraints considered in this paper are on the signals on the antennas; this is different from the case of having a constraint on the input data signals of the users.], which are required in the recently introduced Load Modulated Single-RF (LMSRF) MIMO transmitters <cit.>. In LMSRF transmitters, the signal on each antenna is taken from a discrete constellation due to the limited number of switches. Considering a large number of transmit antennas, we study the LSE precoder in the asymptotic regime. Our performance analyses are based on the replica method introduced in the context of statistical mechanics. The LSE precoder is analyzed in frequency-flat fading channels, and it is shown that the same performance is achieved in frequency-selective fading channels utilizing Orthogonal Frequency Division Multiplexing (OFDM). Both the Replica Symmetry (RS) and one-step Replica Symmetry Breaking (1-RSB) assumptions are applied under some known signal constraints, such as signals with peak and average power constraints, constant envelope signals and PSK signals. It is shown that in the case of a peak power constraint, the RS prediction is consistent with the numerical results, although in some cases it might not be exact. However, the RS assumption does not give an accurate solution for Binary Phase Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK) signals on the transmit antennas, and 1-RSB improves the prediction in these cases.

Notation: We use bold lowercase letters for vectors and bold uppercase letters for matrices. The conjugate transpose of a matrix is denoted by ·^†, and the transpose itself by ·^T. Moreover, the set of complex numbers is denoted by C, F_b(·) denotes the cumulative distribution function (cdf) of b, and the Kronecker product is denoted by ⊗. The real and imaginary parts are denoted by ℜ{·} and ℑ{·}, respectively, and 𝔼[·] represents the mathematical expectation. Furthermore, we define the complex Gaussian measure Dz ≡ (1/π)e^-|z|^2 dz, and Vec(A) to be the vector obtained by stacking the columns of A.

§ PROBLEM FORMULATION
Let us consider the general problem of designing a precoder for a massive MIMO system with N transmit antennas and K single-antenna users. The channel is assumed to be frequency-flat. The generalization to the case of frequency-selective fading channels and OFDM signaling is presented in Appendix A, where we show that the same result holds for frequency-selective channels. The inter-cell interference is neglected and it is assumed that the channel matrix is perfectly known at the base station. Let u∈C^K and H∈C^K×N be the data vector of the users and the channel matrix, respectively, and let v∈X^N denote the precoded signal vector, where X is a predefined set. The received vector at the user terminals reads

y = Hv + n,

where y=[y_1,⋯,y_K]^T with y_i being the received signal at the ith user terminal, and n being the zero-mean additive white Gaussian noise vector whose elements have variance σ_n^2. We define the nonlinear LSE precoder by the rule

v = argmin_x∈X^N ‖Hx-√(γ)u‖^2 + λ‖x‖^2,

where γ is a positive constant and λ is a tuning parameter (Lagrange multiplier) controlling the total transmit power. In the case X=C, our nonlinear LSE precoding scheme reduces to the linear scheme v=√(γ)H^†(HH^†+λI)^-1u, which is known as the regularized zero-forcing precoder <cit.>. The precoding procedure, however, does not take a simple form for a general X.
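For the unconstrained case X=C, the closed-form regularized zero-forcing solution above is easy to evaluate directly. A minimal NumPy sketch (the dimensions, rates and iid channel model here are illustrative choices, not fixed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, gamma, lam = 50, 200, 1.0, 0.1

# iid complex Gaussian channel with variance 1/N per entry (as in the numerical section).
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * N)
u = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)  # unit-variance data

# Regularized zero-forcing: v = sqrt(gamma) * H^H (H H^H + lam I)^{-1} u.
v = np.sqrt(gamma) * H.conj().T @ np.linalg.solve(H @ H.conj().T + lam * np.eye(K), u)

# Per-user distortion (1/K)*||Hv - sqrt(gamma)*u||^2, cf. the definition of D below.
print(np.linalg.norm(H @ v - np.sqrt(gamma) * u) ** 2 / K)
```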
As an example, consider the LMSRF MIMO transmitter, which chooses a finite constellation with respect to the number of discrete load modulator states <cit.>. Another example is the case of constant envelope precoding on each antenna <cit.>, where

|v_i|^2 = P ∀ i ∈{1,⋯,N}.

Here, the Peak to Average Power Ratio (PAPR) is small, around 3 or 4 decibels depending on the pulse shaping filter. In these cases, the classical tools fail to analyze the optimization problem in (<ref>). We therefore invoke the replica method developed in statistical mechanics to determine the large system performance of the precoder by calculating the asymptotic distortion defined as

𝖣 ≡ lim_K↑∞ (1/K) 𝔼‖Hv-√(γ)u‖^2

when the inverse load factor, defined as α ≡ N/K, is kept fixed. Our analysis determines the asymptotic distortion defined in (<ref>) without finding the explicit solution of the optimization problem (<ref>). This gives us an estimate of the best achievable performance, which can serve as a reference measure for comparing practical algorithms. Throughout the analysis, we set the data symbols of the users to be independent and identically distributed (iid) Gaussian, i.e., u∼𝒞𝒩(0,σ_u^2 I).

The asymptotic distortion measure 𝖣 can be used to derive a lower bound for the ergodic achievable rate of the users in the downlink channel as follows. Let R_i be the ergodic achievable rate of the ith user. A lower bound for the ergodic rate R_i is obtained when we impose the worst case scenario for the interference at each user terminal, which implies Gaussian distributed interference at each user terminal. Note that this is true only in the case of Gaussian distributed input signals. Then, we obtain the following bound on the average ergodic rate of the users

(1/K)∑_i=1^K R_i ≥ (1/K)∑_i=1^K 𝔼_H log(1+γσ_u^2/(σ_n^2+I_i(H))),

where I_i(H) is the interference power at the ith user terminal. Using Jensen's inequality and the fact that the function log(1+γσ_u^2/(σ_n^2+x)) is convex in x, we obtain

(1/K)∑_i=1^K R_i ≥ log(1+γσ_u^2/(σ_n^2+(1/K)∑_i=1^K 𝔼_H I_i(H))).

It is easy to show that (1/K)∑_i=1^K 𝔼_H I_i(H) = 𝖣 for K→∞, and therefore (1/K)∑_i=1^K R_i ≥ log(1+γσ_u^2/(σ_n^2+𝖣)). In the case that the users have symmetry, e.g., when the users are uniformly distributed in an area, it is easy to show that the ergodic rate of each user is also larger than log(1+γσ_u^2/(σ_n^2+𝖣)).

In this paper we use the replica method to analyze the LSE precoder in a very general setting. The replica method, also known as the "replica trick", is a non-rigorous method developed for asymptotic analysis in statistical mechanics. The method has been rigorously justified in a few cases, e.g., for systems whose matrix has a semicircular eigenvalue distribution. Furthermore, it has been shown that the replica method gives valid predictions for several known problems, and thus it is widely employed for large system analysis of communication systems <cit.>.

§ LARGE SYSTEM ANALYSIS
Define R ≡ H^†H. We start our analysis by determining

𝖣̆ ≡ lim_K↑∞ (1/K) 𝔼 min_x∈X^N [‖Hx-√(γ)u‖^2 + λ‖x‖^2],

which reads 𝖣̆ = γσ_u^2 + lim_K↑∞ (1/K) 𝔼 min_x∈X^N 𝗀(x), with the function 𝗀(x) being defined as

𝗀(x) ≡ x^†Rx - 2√(γ)ℜ{x^†H^†u} + λx^†x.

Using Varadhan's theorem, one can write

min_x∈X^N 𝗀(x) = -lim_β↑∞ (1/β) log ∑_x∈X^N e^-β𝗀(x).

From (<ref>) and (<ref>), the evaluation of 𝖣̆ requires a logarithmic expectation to be determined, which is not a trivial task.
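Before turning to the replica machinery, note that for very small systems the quantity inside the limit can be estimated by brute force, since for a discrete X the minimization over X^N is a finite search. A toy Monte-Carlo sketch for BPSK signals on the antennas (all sizes and parameters here are illustrative; the exponential cost in N is precisely why the asymptotic analysis is needed):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
K, N, gamma, lam, trials = 4, 8, 1.0, 0.01, 200

def min_cost(H, u):
    # Exhaustive minimization of ||Hx - sqrt(gamma)*u||^2 + lam*||x||^2
    # over x in {-1,+1}^N (BPSK alphabet X); note ||x||^2 = N here.
    best = np.inf
    for x in itertools.product([-1.0, 1.0], repeat=N):
        x = np.asarray(x)
        cost = np.linalg.norm(H @ x - np.sqrt(gamma) * u) ** 2 + lam * N
        best = min(best, cost)
    return best

est = 0.0
for _ in range(trials):
    H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * N)
    u = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    est += min_cost(H, u) / (K * trials)
print(est)  # finite-size Monte-Carlo estimate of the quantity whose K -> infinity limit is studied
```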
To handle the logarithmic expectation, we use the identity

𝔼 log(t) = lim_n↓0 (∂/∂n) log 𝔼 t^n,

for some positive random variable t. Consequently, we have

𝖣̆ = γσ_u^2 - lim_K,β↑∞ (1/(βK)) lim_n↓0 (∂/∂n) log 𝔼[∑_x∈X^N e^-β𝗀(x)]^n = γσ_u^2 - lim_β↑∞ (1/β) lim_n↓0 (∂/∂n) Ξ_n,

where Ξ_n denotes the corresponding term in the first line. Here, the replica method suggests that we adopt the replica continuity assumption and proceed as follows: first, determine Ξ_n for integer n, and then assume that the analytic continuation of Ξ_n onto the real line gives the same result. For details about the validity of this assumption, the reader is referred to <cit.>. Thus, we obtain

Ξ_n = lim_K↑∞ (1/K) log 𝔼 ∑_{x_a} e^-β∑_a=1^n 𝗀(x_a),

where {x_a} denotes the replicas. Using the independence of u and H, the expectations over u and H separate, and thus the summation on the right hand side of (<ref>) reduces to

∑_{x_a} 𝔼_H e^{-β∑_a=1^n [x_a^†Rx_a + λx_a^†x_a] + β^2γσ_u^2 ‖∑_a=1^n Hx_a‖^2}.

By defining the matrix V as V ≡ (1/N)[x_1,⋯,x_n]Γ[x_1,⋯,x_n]^†, with Γ being an n×n matrix with entries ζ_i,j ≡ -βγσ_u^2 + δ_i,j, Ξ_n is found as

Ξ_n = lim_K↑∞ (1/K) log ∑_{x_a} e^{-βλ∑_a=1^n x_a^†x_a} 𝔼_H e^{-βN tr(RV)}.

Suppose that the empirical distribution of the eigenvalues of R converges to a deterministic distribution, and denote the corresponding cdf by F_R(λ). The Stieltjes transform of the distribution F_R(λ) is defined as G_R(s) = 𝔼(λ-s)^-1. The corresponding R-transform is then defined as ℛ_R(w) = G_R^-1(w) - w^-1, where G_R^-1(w) denotes the inverse with respect to composition. Noting that the expectation in (<ref>) is a spherical integral, we use the results from <cit.>, which state

𝔼_H e^{-βN tr(RV)} = e^{-N∑_i=1^N ∫_0^βλ̃_i ℛ_R(-w) dw},

as N↑∞, with λ̃_1,⋯,λ̃_N being the eigenvalues of V. The matrix V has only n nonzero eigenvalues, which are equal to the nonzero eigenvalues of

G = (1/N) Γ [x_1,⋯,x_n]^†[x_1,⋯,x_n].

Consequently, denoting the eigenvalues of G by λ_1,⋯,λ_n,

𝔼_H e^{-βN tr(RV)} = e^{-N(∑_i=1^n ∫_0^βλ_i ℛ_R(-w) dw + ϵ_N)},

where ϵ_N tends to zero as N↑∞. In order to find Ξ_n, we need to sum over the Nn-dimensional space. We determine the summation by taking the same approach as in <cit.>. We split the space into the subshells

𝒮(Q) ≡ {x_1,⋯,x_n | x_a^†x_b = NQ_ab},

where Q_ab is the (a,b)th entry of the matrix

Q = (1/N)[x_1,⋯,x_n]^†[x_1,⋯,x_n].

Substituting in (<ref>), Ξ_n reduces to

Ξ_n = lim_K↑∞ (1/K) log ∫ e^{Nℐ(Q)} e^{-N𝒢(Q)} 𝒟Q,

where the function 𝒢(Q) is defined as

𝒢(Q) ≡ βλ∑_a=1^n x_a^†x_a/N + ∑_i=1^n ∫_0^βλ_i ℛ_R(-w) dw,

and e^{Nℐ(Q)} is the Jacobian of the integral; moreover, we define

𝒟Q ≡ ∏_a=1^n ∏_b=a+1^n dℜQ_ab dℑQ_ab.

In order to calculate the Jacobian term in (<ref>), we first write

e^{Nℐ(Q)} = ∫ ∏_a≤b δ(ℜ[x_a^†x_b - NQ_ab]) δ(ℑ[x_a^†x_b - NQ_ab]) ∏_a=1^n dF_x(x_a).

Then, we introduce a new matrix Q̃ in the complex frequency domain. Following the lines in <cit.> and defining 𝒥 ≡ (t-j∞; t+j∞) for some t∈R, we obtain

e^{Nℐ(Q)} = ∫_𝒥^n^2 e^{-N tr[Q̃Q] + N log ℳ(Q̃)} 𝒟Q̃,

where the function ℳ(Q̃) is defined as

ℳ(Q̃) ≡ ∑_{x_a} e^{∑_a,b x_a^* x_b Q̃_ab}.

In the large system limit, the integration in (<ref>) is dominated by the integrand at the saddle point. In order to calculate the saddle point of the integrand, one needs to impose a structure on the matrices Q and Q̃. The most primary structure is imposed by the RS assumption. Under the RS assumption, it is postulated that the saddle point matrices which dominate the integration in (<ref>) are invariant under permutation of the replica indices. Therefore, following <cit.>, under the RS assumption we set Q_a,b=q and Q̃_a,b=β^2 f^2 for all a≠b, and Q_a,a=q+χ/β and Q̃_a,a=β^2f^2-βe for all a, where {q,χ,f,e} are some positive finite constants.
Substituting in (<ref>), Ξ_n can be calculated analytically, and consequently 𝖣̆ is determined accordingly. It is then straightforward to evaluate the asymptotic distortion 𝖣 explicitly. The final expression for the asymptotic distortion under the RS assumption is stated in Proposition <ref>.

Under the RS assumption, the asymptotic distortion converges to

𝖣 = γσ_u^2 + α (∂/∂χ)[(q-χγσ_u^2) χ ℛ_R(-χ)],

as K,N↑∞ while the inverse load factor α is kept fixed. Here q and χ are the solutions to the two coupled equations

χ = (1/f) ∫_C ℜ{z^* x̂(z)} Dz,
q = ∫_C |x̂(z)|^2 Dz,

where x̂(z) ≡ argmin_x∈X |z - ((ℛ_R(-χ)+λ)/f) x| and

f ≡ √((q-χγσ_u^2) ℛ'_R(-χ) + γσ_u^2 ℛ_R(-χ)). ▪

Although the replica symmetry assumption has given the exact solution for some problems <cit.>, there are several examples in which this assumption fails to give a valid prediction of the solution <cit.>. For these cases, in order to have a more precise prediction, one needs to employ the r-step RSB assumption, which imposes a more general structure on the matrices Q and Q̃. Here, we consider the 1-RSB assumption, which postulates <cit.>

Q = q_1 1_n + p_1 I_nβ/μ_1 ⊗ 1_μ_1/β + (χ_1/β) I_n,
Q̃ = β^2 f_1^2 1_n + β^2 g_1^2 I_nβ/μ_1 ⊗ 1_μ_1/β - βe_1 I_n,

where q_1,p_1,χ_1,μ_1,f_1,g_1,e_1 are non-negative real numbers, 1_n is an n×n matrix with all elements equal to 1, and I_n is the n×n identity matrix. Following the same steps as for Proposition <ref>, the asymptotic distortion under the 1-RSB assumption is given in the following proposition.

Under the 1-RSB assumption, the asymptotic distortion converges to

𝖣 = γσ_u^2 - α(χ_1/μ_1)ℛ_R(-χ_1) + α[q_1 + η_1/μ_1 - 2γσ_u^2η_1]ℛ_R(-η_1) - αη_1[q_1-γσ_u^2η_1]ℛ'_R(-η_1),

as K,N↑∞ while the inverse load factor α is kept fixed. The set of scalars {q_1,p_1,χ_1,μ_1} is determined by the coupled equations

η_1 = (1/f_1) ∫∫ ℜ{z^* x̂(y,z)} 𝒴̃(y,z) Dz Dy,
q_1+p_1 = ∫∫ |x̂(y,z)|^2 𝒴̃(y,z) Dz Dy,
η_1+μ_1q_1 = (1/g_1) ∫∫ ℜ{y^* x̂(y,z)} 𝒴̃(y,z) Dz Dy,

with x̂(y,z) ≡ argmin_x∈X |f_1z+g_1y-e_1x|, and

∫_χ_1^η_1 ℛ_R(-w) dw = ∫ log ∫ 𝒴(y,z) Dy Dz - 2χ_1ℛ_R(-χ_1) + (μ_1q_1+2η_1-2μ_1η_1γσ_u^2-2χ_1μ_1γσ_u^2)ℛ_R(-η_1) - 2μ_1η_1(q_1-γσ_u^2η_1)ℛ'_R(-η_1) + λμ_1(p_1+q_1),

where η_1 = χ_1+μ_1p_1,

𝒴(y,z) = e^{-μ_1 min_x∈X [e_1|x|^2 - 2ℜ{x(f_1z^*+g_1y^*)}]},

and the function 𝒴̃(y,z) is defined as

𝒴̃(y,z) = 𝒴(y,z) / ∫_C 𝒴(ỹ,z) Dỹ.

Moreover, the parameters {e_1,f_1,g_1} are determined as

e_1 = ℛ_R(-χ_1)+λ,
f_1 = √(γσ_u^2 ℛ_R(-η_1) + (q_1-γσ_u^2η_1) ℛ'_R(-η_1)),
g_1 = √((ℛ_R(-χ_1)-ℛ_R(-η_1))/μ_1). ▪

Note that letting p_1=0 reduces the 1-RSB solution to the RS solution; in other words, one of the 1-RSB solutions is always the RS solution. In fact, the coupled equations in both the RS and 1-RSB cases may have multiple solutions, of which one is the valid saddle point of (<ref>). In this case, the saddle point is the solution which minimizes a function corresponding to the system, known as the free energy. It can be shown that the free energy of the nonlinear LSE precoder is -𝖣̆. This means that the saddle point is the solution which maximizes 𝖣̆. The explicit expression for 𝖣̆ in terms of the replica parameters is skipped here due to lack of space.

§ NUMERICAL RESULTS
In this section, we numerically investigate several examples of the nonlinear LSE precoder. Throughout our investigations, we set σ_u^2=1 and consider a channel matrix whose entries are iid with variance 1/N. Note that Propositions <ref> and <ref> are given for a general channel matrix and are not restricted to the iid case. In Appendix A, we show that the results can be generalized to the case of frequency-selective channels and OFDM signaling.
For the sake of simplicity, we assume that all users have the same path loss and leave the extension to more general setups for the extended version of the paper. For an iid matrix, we have <cit.>

ℛ_R(w) = α^-1/(1-w).

In the following, we consider two examples of massive MIMO systems with input constraints, namely transmitters with a peak power constraint on each antenna and systems with PSK constellations on the transmit antennas. We determine the performance of the nonlinear LSE precoder using our replica results.

§.§ Per-antenna peak and total average power constraints
Consider a setup with constraints on the instantaneous power on each transmit antenna and on the total average power. The average power is set by choosing λ properly. In this case, X reads

X = {x = re^jθ | θ∈[0,2π], 0≤r≤√(P)},

where P is the instantaneous peak power. We invoke the RS solution for the investigation. Using Proposition 1, it can be shown that the parameter q in the RS solution represents the average power per antenna. Moreover, the RS coupled equations become

χ = √(α/(q+γσ_u^2)) (1+χ) h,
q = c^2[1-e^-P/c^2],

with h being defined as

h = c - c e^-P/c^2 + √(Pπ) Q(√(2P)/c),

where Q(·) denotes the Gaussian Q-function, and c is given by

c = √(α(q+γσ_u^2)) / (αλ(1+χ)+1).

Furthermore, the asymptotic distortion is determined as

𝖣 = (q+γσ_u^2)/(1+χ)^2.

Fig. <ref> represents the asymptotic distortion versus the inverse load factor for a fixed total average power q=0.5, different PAPRs, defined as P/q, and γ=1. To validate the results obtained by the replica method, we have also plotted simulation results obtained by CVX, a package for specifying and solving convex programs <cit.>, for K=200. It is observed that the analytical results are consistent with the simulation results, although RS may not be exact in some cases. Furthermore, for PAPRs equal to or larger than 3dB, the asymptotic distortion is sufficiently close to the case without a peak power constraint.

In order to describe how the required average power for a fixed asymptotic distortion varies with the number of transmit antennas, we consider the case with a unit per-antenna peak power constraint and plot the average power per antenna for given asymptotic distortions. The parameter γ is set to 1. The results are shown in Fig. <ref>. It is observed that the per-antenna average power decays as α increases. Numerical curve fitting shows that the per-antenna average power converges to cα^κ for some constant c and κ=-1 as α grows unbounded; for bounded α, however, κ<-1. For massive MIMO systems, i.e., α≫1, with an average power constraint, it has been shown that when the base station has perfect channel state information, the signal to interference plus noise ratio can be improved by a factor of α, asymptotically <cit.>, which agrees with the result given here for the peak power constraint.

Next, the lower bound for the ergodic rate per user is investigated. The noise variance σ_n^2 and the average transmitted power q are set to 1. An important parameter in this case is γ: increasing γ improves the received SNR, but at the same time it also increases the power of the interference. The numerical results show that there is an optimum value of γ for every α and average transmitted power q. In Fig. <ref>, the lower bound for the ergodic rate per user is plotted versus the inverse load factor for different peak to average power constraints when the rate is optimized over γ.
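The coupled RS equations above are straightforward to solve numerically. A sketch of one possible fixed-point iteration (the initialization, the absence of damping, and the convergence handling are ad-hoc choices, and the iteration assumes the intermediate quantity k stays below 1; the Gaussian Q-function is expressed through erfc):

```python
import numpy as np
from scipy.special import erfc

def Qfun(x):
    # Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * erfc(x / np.sqrt(2.0))

def rs_peak_power(alpha, lam, P, gamma=1.0, sigma_u2=1.0, iters=2000):
    chi, q = 1.0, 0.5   # ad-hoc initialization
    for _ in range(iters):
        c = np.sqrt(alpha * (q + gamma * sigma_u2)) / (alpha * lam * (1 + chi) + 1)
        h = c - c * np.exp(-P / c**2) + np.sqrt(P * np.pi) * Qfun(np.sqrt(2 * P) / c)
        k = h * np.sqrt(alpha / (q + gamma * sigma_u2))
        chi = k / (1 - k)                 # solves chi = k * (1 + chi); needs k < 1
        q = c**2 * (1 - np.exp(-P / c**2))
    D = (q + gamma * sigma_u2) / (1 + chi)**2
    return q, chi, D

# Example: alpha = 4, unit peak power; lam tunes the resulting average power q.
print(rs_peak_power(alpha=4.0, lam=1.0, P=1.0))
```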
Note that although the replica method also predicts results for α<1, the valid system assumption here is α≥1, since the number of base station antennas should be larger than the number of users. The rates for the different PAPRs are quite close. At around α=5, for the case of a constant envelope signal, we need about 20% more antennas to obtain the same performance as in the case of no peak power constraint. Further simulations for the case of a constant envelope signal, which are not presented here due to space limitations, show that for α=5, about 1.3 dB more transmit power is required to obtain the same performance as in the case of no peak power constraint.

§.§ M-PSK signals on antennas
Let us consider the case in which the signal on each transmit antenna is selected from an M-PSK constellation. In this case, we have

X = {e^jk2π/M | k=1,…,M}.

The constant envelope constraint is obtained easily by letting M↑∞. Under the RS assumption, the unit per-antenna average power results in q=1, and the parameter χ reads

χ^-1 = (2/(M sin(π/M))) √(π(1+γσ_u^2)/α) - 1.

Then, the RS prediction for the asymptotic distortion is

𝖣 = (1+γσ_u^2)/(1+χ)^2.

Fig. <ref> shows the asymptotic distortion for BPSK and QPSK constellations. For the sake of comparison, a lower bound for the asymptotic distortion is also plotted[Due to lack of space, the derivation of the lower bound is omitted here.]. For a BPSK constellation, simulation results using an integer programming algorithm are also plotted for N=100. For the BPSK case, it is observed that the RS prediction starts to deviate from the simulation results as α increases. The RS prediction even violates the lower bound for α≥5. This observation demonstrates the failure of the RS assumption in this case. To obtain a better approximation of the exact solution, we have also plotted the 1-RSB prediction in Fig. <ref>. The 1-RSB prediction meets RS for small α and deviates from it as α grows. However, the 1-RSB prediction also fails to approximate the simulation results and violates the lower bound for large α. This observation suggests the conjecture that a precise approximation of the asymptotic distortion requires infinitely many replica symmetry breaking steps. Similar results are observed for the QPSK constellation.

Fig. <ref> illustrates the RS prediction for PSK constellations. The case of a constant envelope signal is also plotted by letting M↑∞. For the sake of comparison, we have shown the result for the unit peak power constraint as well. For M≥8, the results for the M-PSK and constant envelope constellations are sufficiently close. Furthermore, the constant envelope constellation and the unit peak power constraint give almost the same asymptotic distortion under the RS assumption.

§ CONCLUSIONS
The asymptotic performance of the nonlinear LSE precoder was analyzed using the replica method. Under the RS assumption, the asymptotic distortion of the precoder takes a simple form. Based on our investigations, the RS assumption appears to give a valid approximation of the exact solution in the case of a peak power constraint on each antenna and for constant envelope signals. For the case with a peak power constraint, the numerical results show that transmit signals with a PAPR of about 3 dB perform sufficiently close to the case without a peak power constraint. This plays a very important role in practice, where low-PAPR signals at the transmitter enable us to employ highly efficient nonlinear power amplifiers.
The RS prediction for an M-PSK constellation, however, violates theoretically rigorous bounds as the inverse load factor α, defined as the ratio of the number of transmit antennas to the number of users, increases. This implies that further exploration based on RSB assumptions is necessary for accurately approximating the performance. We considered the 1-RSB assumption and observed that its solution predicts the simulation results over a larger interval of α. The results show, for the first time in wireless communications, that even 1-RSB can be unreliable and that more RSB steps need to be considered.

§ GENERALIZATION TO FREQUENCY-SELECTIVE FADING CHANNELS
Let L be the number of subcarriers and assume that the channel is frequency-flat within each frequency sub-band. Furthermore, let H_j be the channel matrix of the jth sub-band and u_j the data input vector of the jth subcarrier. The LSE precoder in this case determines L vectors v_1,⋯,v_L to be given to the Inverse Fast Fourier Transform (IFFT) blocks as inputs. Let W be the IFFT matrix and v_t ≡ Vec([v_1,…,v_L]^T). The LSE precoding rule is

v_t = argmin_{W_t x_t ∈ X^NL} ‖H_t x_t - u_t‖^2,

where H_t is a KL×NL matrix whose ((i-1)L+k)th column contains the ith column of H_k as its kth block and zeros elsewhere, u_t ≡ [u_1^T,…,u_L^T]^T, and W_t is an LN×LN block-diagonal matrix whose L×L diagonal blocks are equal to W. One can reformulate (<ref>) as

v_t = argmin_{z_t∈X^NL} ‖H_t W_t^† z_t - u_t‖^2,

using the fact that W_t W_t^† = I. One can therefore consider an equivalent frequency-flat fading channel with channel matrix H_t W_t^†. For the case where the H_j are iid Gaussian matrices, Fig. <ref> numerically compares the empirical cumulative distributions of the eigenvalues of R_t = H_t^†H_t and R_j = H_j^†H_j for L=32 and K=N=100. It is observed that both cases have the same distribution. Since the result derived by the replica method depends only on the eigenvalue distribution of R_t, this shows that the LSE precoder in this case has the same performance as in the case of a frequency-flat fading channel.
http://arxiv.org/abs/1704.08469v1
{ "authors": [ "Mohammad A. Sedaghat", "Ali Bereyhi", "Ralf R. Müller" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170427080824", "title": "A New Class of Nonlinear Precoders for Hardware Efficient Massive MIMO Systems" }
U.S. Naval Research Laboratory, Code 6792, Plasma Physics Division, Nonlinear Systems Dynamics Section, Washington, DC 20375

We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic Susceptible-Infected-Susceptible model, we predict the distribution of large fluctuations, the most probable, or optimal, path through a network that leads from an endemic state to a disease-free state, and the average extinction time in general configurations. Our predictions agree with Monte-Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and the distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

89.75.Hc, 05.40.-a, 87.10.Mn, 87.19.X-

Epidemic Extinction Paths in Complex Networks
Ira B. Schwartz

§ INTRODUCTION
Understanding the dynamics of infectious processes in complex networks is an important problem, both in terms of generalizing concepts in statistical mechanics and applying them to public health <cit.>. A primary question in infectious disease modeling is how to control an outbreak, with the ultimate goal of reducing the number of individuals able to spread infection to zero. This process, by which an epidemic is extinguished, is called extinction or disease fade-out <cit.>. To understand and possibly achieve extinction, mathematical models can be useful, where extinction can be naturally captured in terms of a dynamical transition from an endemic state (e.g., fluctuating equilibrium or cycle) to a disease-free state <cit.>. Although it is known that random fluctuations are the cause of extinction in finite populations, the process of extinction does not happen in the deterministic systems analyzed in the vast majority of works on endemic dynamics in networks, where contacts between infectious and susceptible individuals are typically assumed to be well above an epidemic threshold or bifurcation point <cit.>. Consequently, network-control prescriptions often reduce to bringing systems below a bifurcation point <cit.>. One may ask, is targeting sub-threshold regimes as a control method necessary or even optimal? In actuality, the spread of disease is a highly stochastic process, both in terms of the natural randomness inherent in contact processes and fluctuations due to time-varying and uncertain environments <cit.>. These stochastic effects make extinction inevitable, even above threshold, in finite networks and should be reflected in epidemic controls <cit.>. In fact, recent work has shown that optimal control of networks with noisy dynamics leverages randomness and a network's natural, noise-induced pathway between distinct states <cit.>. Continuing in this line of thinking, we seek a prescription for computing epidemic extinction pathways through complex networks. Such issues have received much attention in well-mixed and spatially homogeneous models <cit.>. It has been demonstrated in many works that noise and a system's dynamics can couple in such a way as to induce a large fluctuation, effectively driving a system from one state to another <cit.>.
If the fluctuation is a rare event in the weak noise limit, then the process is captured by a path that is a maximum in probability, or optimal path (OP), where all others are exponentially less likely to occur. The formalism borrows from analytical mechanics, describing the OP as a least-action trajectory in some effectively classical system, and allows one to predict the dynamical extinction pathway and the average time needed to realize it <cit.>. Some recent works have made progress in understanding extinction in networks (e.g., deriving bounds for average extinction times), but do not make use of the path-based formalism outlined here <cit.>. The following layout of the paper describes epidemic extinction through complex networks with intrinsic (demographic) noise in terms of large fluctuations and rare-event theory. Sec.<ref> constructs the formalism: combining a mean-field approximation for endemic dynamics on networks with a Wentzel–Kramers–Brillouin (WKB) technique that allows for an analytical description of the distribution of large fluctuations and the OP. The limiting form of the OP is discussed near the epidemic threshold in Sec.<ref>, and away from threshold in Sec.<ref>. Sec.<ref> addresses how to compute the OP, the extinction time, and their dependencies in certain cases, including in networks with large spectral gaps (Sec.<ref>) and degree distributions (Sec.<ref>). Throughout, predictions are compared to real and synthetic network simulations.

§ LARGE FLUCTUATIONS, MEAN-FIELD, AND WKB APPROXIMATION
In order to predict epidemic extinction in general contact networks it is necessary to consider an arbitrary weighted adjacency matrix, A, where each element, A_ij, represents the strength of a link, or contact, from node i to node j, in a graph with N nodes. Given this representation, a network's epidemic dynamics, assuming a simple Susceptible-Infectious-Susceptible Markov process (SIS), is captured by the states and transitions of its nodes; i.e., node i is either "infected", denoted ν_i=1, or "susceptible", ν_i=0. Furthermore, node i changes its state ν_i:0→1 with probability per unit time β(1-ν_i)∑_j A_ijν_j, and ν_i:1→0 with probability per unit time αν_i, where β and α are known as the infection and recovery rates, respectively <cit.>. Since the elements of A are proportional to probabilities, A is assumed to be nonnegative. It is important to note that there is inherent noise in the SIS model defined, which arises from the underlying stochastic reactions <cit.>. In order to analyze the stochastic dynamics, it is useful to consider an ensemble consisting of C identical networks with the same A, but independent realizations of the stochastic dynamics <cit.>. Each node can be specified by a graph position i, and ensemble number c, with state ν_i,c∈{0,1}. In this way, the number of infected nodes in the ensemble with graph position i is I_i=∑_cν_i,c, with corresponding transitions and rates: I_i→I_i+1 with rate R_i^+(I)≡β∑_c(1-ν_i,c)∑_j A_ijν_j,c=β∑_jA_ij∑_c(1-ν_i,c)ν_j,c, and I_i→I_i-1 with rate R_i^-(I)≡α I_i. To simplify our analysis, it is useful to make a mean-field approximation and replace ν_i,c by the ensemble average, I_i/C: R_i^+(I)≈β∑_jA_ijI_j(1-I_i/C), so that the transition rates depend explicitly on I=<I_1,I_2,...,I_N> alone. This approximation neglects correlations between neighboring graph positions <cit.>. Ultimately, we are interested in the limit of large C, so that x_i≡I_i/C gives a continuous fraction of infected nodes, or density, in graph position i. In this way, the large ensemble allows us to consider continuous densities even in discrete networks with unique graph positions.
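The node-level Markov process defined above can be simulated exactly with a standard Gillespie algorithm, which is how stochastic extinction trajectories of this kind are typically generated. A minimal sketch (the network A, the rates, and the initial condition are illustrative inputs):

```python
import numpy as np

def gillespie_sis(A, beta, alpha, nu0, t_max):
    """Exact stochastic simulation of network SIS dynamics.

    Node i turns infected (nu_i: 0 -> 1) at rate beta*(1-nu_i)*sum_j A_ij*nu_j
    and recovers (nu_i: 1 -> 0) at rate alpha*nu_i, as defined in the text.
    """
    rng = np.random.default_rng()
    nu, t = nu0.astype(float), 0.0
    while t < t_max and nu.sum() > 0:          # x = 0 is the absorbing, extinct state
        rates = np.concatenate([beta * (1 - nu) * (A @ nu),   # infection events
                                alpha * nu])                  # recovery events
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        i = rng.choice(rates.size, p=rates / total)
        nu[i % len(nu)] = 1.0 - nu[i % len(nu)]  # flip the state of the chosen node
    return t, nu
```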
Given the stochastic reactions and rates R_i^+ and R_i^-, the ensemble dynamics is described by a probability distribution, P(I,t), satisfying a master equation:

∂ P/∂ t(I,t)=∑_i[R^+_i(I-1_i)P(I-1_i,t)-R^+_i(I)P(I,t)+R^-_i(I+1_i)P(I+1_i,t)-R^-_i(I)P(I,t)],

where 1_i=<0_1, 0_2, ..., 1_i, 0_i+1, ...>. Because extinctions in large networks (N≫1) with long-lived epidemics are rare events with small probabilities, we are interested in the tails of P(I,t), where I corresponds to a large deviation from the average behavior, and is accompanied by an exponential reduction in probability. This intuition suggests looking for solutions of Eq.(<ref>) with an exponential, or WKB, form, P(I,t)=ae^-CS(x,t) <cit.>. The WKB solution for the ensemble distribution can be viewed as a product of independent and identical distributions for each realization in the ensemble. Hence, we can approximate the probability distribution of states for a single realization, ρ(ν,t), by

ρ(ν,t)≅ρ(x,t)=be^-S(x,t).

Predictions from Eq.(<ref>) (when combined with Eqs.(<ref>-<ref>) below) are in good agreement with simulations on an empirical high school network <cit.>, shown in red in Fig.<ref>. We can find the leading contribution to P(I,t) by substituting the WKB ansatz into Eq.(<ref>), expanding in powers of the small parameter 1/C (e.g., S(x±1_i/C)≈S(x)±(1/C)∂ S/∂ x_i), and neglecting terms of 𝒪(1/C) or smaller, where C≫1. This approximation converts the master equation into a Hamilton-Jacobi equation (HJE):

∂ S/∂ t+H(x,∂ S/∂x)=0,

where S and H are called the Action and Hamiltonian, respectively. The latter is a function of the infection density at graph position i, x_i, and its conjugate momentum, p_i=∂ S/∂ x_i:

H(x,p)=∑_i[β(1-x_i)(e^p_i-1)∑_jA_ijx_j+α x_i(e^-p_i-1)].

Just as in analytical mechanics, a convenient approach for solving the HJE is to solve Hamilton's equations of motion, ẋ_i=∂ H/∂ p_i and ṗ_i=-∂ H/∂ x_i:

ẋ_i =β̃(1-x_i)e^p_i∑_jA_ijx_j-x_ie^-p_i,
ṗ_i =β̃∑_j[A_ijx_j(e^p_i-1)-A_ji(1-x_j)(e^p_j-1)]-e^-p_i+1,

expressed in terms of the ratio β̃≡β/α, and the time, τ≡α t (a short integration sketch follows below). Crucially, solutions of the HJE extremize S, when expressed as the integral S(x,t)=∫_x(t=t_0)^xp·dx-∫_t_0^tH(x,p)dt', where x(t) and p(t) are determined from Eqs.(<ref>-<ref>) <cit.>. Because S is minimized, the probability of the corresponding trajectory is maximized – a consequence of the WKB approximation. Therefore, all that is needed to find the most probable path to extinction (OP), and ρ(x,t) (Eq.(<ref>) and Eq.(<ref>)) is the appropriate solution of Eqs.(<ref>-<ref>). Such solutions can be determined from boundary conditions, and computed as detailed in Sec.<ref>. From inspection of Fig.<ref> we notice that ρ(x,t) is a maximum at the endemic equilibrium (x=x^*), implying ∂ S/∂x=0 and a boundary condition: ẋ=0, ṗ=0, x=x^*, and p=0. Second, at the extinct state (x=0) the distribution has negative slope, ∂ S/∂x<0. Furthermore, ρ(x,t) is approximately time-independent, or quasi-stationary <cit.>. In the WKB ansatz, we have ∂ S/∂ t=H=0 ∀ t. Therefore the final boundary condition is ẋ=0, ṗ=0, x=0, and p=p^* <cit.>. OPs computed with Eqs.(<ref>-<ref>) and the stated boundary conditions are compared with stochastic trajectories ending in extinction for several networks in Fig.(<ref>).
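As a sketch of the Hamiltonian structure just derived, the snippet below integrates Hamilton's equations with a fixed-step fourth-order Runge-Kutta scheme and checks that H is conserved along the trajectory (true for any solution of an autonomous Hamiltonian system, not only the zero-energy OP). The random test graph, step size, and initial condition are assumptions for illustration; computing an actual OP additionally requires the boundary conditions stated above.

```python
import numpy as np

def hamiltonian(x, p, A, bt):
    # H(x,p), with time in units of 1/alpha and bt = beta/alpha
    return np.sum(bt * (1 - x) * (np.exp(p) - 1) * (A @ x) + x * (np.exp(-p) - 1))

def eqs_motion(y, A, bt):
    # Hamilton's equations; y = (x, p) concatenated
    n = y.size // 2
    x, p = y[:n], y[n:]
    xdot = bt * (1 - x) * np.exp(p) * (A @ x) - x * np.exp(-p)
    pdot = bt * ((A @ x) * (np.exp(p) - 1) - A.T @ ((1 - x) * (np.exp(p) - 1))) \
           - np.exp(-p) + 1
    return np.concatenate([xdot, pdot])

def rk4_step(y, dt, A, bt):
    k1 = eqs_motion(y, A, bt)
    k2 = eqs_motion(y + 0.5 * dt * k1, A, bt)
    k3 = eqs_motion(y + 0.5 * dt * k2, A, bt)
    k4 = eqs_motion(y + dt * k3, A, bt)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# small random symmetric weighted graph; start near an endemic density
rng = np.random.default_rng(1)
N, bt = 20, 1.5
A = rng.random((N, N)) * (rng.random((N, N)) < 0.3)
np.fill_diagonal(A, 0); A = (A + A.T) / 2
y = np.concatenate([np.full(N, 0.3), np.full(N, -1e-3)])
H0 = hamiltonian(y[:N], y[N:], A, bt)
for _ in range(2000):
    y = rk4_step(y, 1e-3, A, bt)
print("H drift:", hamiltonian(y[:N], y[N:], A, bt) - H0)
```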
Two important details should be pointed out. Since the distribution is time-independent, and therefore “zero energy”, S(x) is simply the line integral of the momentum along the OP, from Eq.(<ref>). Also, the p≡0 solution of Eqs.(<ref>-<ref>) gives the familiar quenched mean-field equations for the SIS model on complex networks. Therefore, the WKB approach generalizes mean-field results to include large fluctuations.

In general, by studying Eqs.(<ref>-<ref>) we can learn how a network's infection density is coupled to its large fluctuations – together generating the most likely transition sequence through a network leading to extinction. In addition to the distribution of large fluctuations, Eq.(<ref>), an important observable from the above formalism is the geometry of the OP; e.g., specifying the shape of infection density in the different graph positions as a network makes its way from a large epidemic to extinction. Examples are shown in Fig.<ref> and Fig.<ref>. Another important observable is the average extinction time for a given network, <T>, which is expected to take the form:

<T>=B(β̃,A)e^S(x=0)/α,

from the assumption that absorption into the extinct state has a rate, or inverse time, proportional to the probability <cit.>. For sufficiently large S, the exponential contribution dominates, and therefore <T>∼e^S(0) (as demonstrated in Fig.(<ref>) for several networks <cit.>). We note that beyond a theoretical interest, the framework presented can be augmented with control strategies designed to minimize the Action, Eq.(<ref>), thus producing exponential and optimal reductions in the lifetime of epidemics on networks <cit.>.

§.§ Near threshold behavior

Since the OPs live in a 2N-dimensional space, they must in general be found by numerically solving the two-point boundary value problem, given Eqs.(<ref>-<ref>). However, analytic properties can be derived in certain limiting cases, which are useful for guiding intuition and for initializing algorithms (see Sec.(<ref>)). An important case discussed in this section is for β̃ just above the epidemic threshold, or transcritical bifurcation, β̃_c≡1/λ^(1) – where λ^(1) is the largest eigenvalue of A <cit.>. At this point the endemic and extinct states meet, and below it no long-lived epidemic occurs. In order to describe the path when β̃≳β̃_c, it is useful to assume that β̃=(1+δ)/λ^(1), with δ≪ 1, and first find the equilibria as functions of δ. Substituting the series x_i^*=∑_nδ^nx_i,n for each i and p=0 into dx/dτ=0, and collecting powers of δ (e.g., to 𝒪(δ^2)) gives:

λ^(1) x_i,1 =∑_jA_ijx_j,1,
λ^(1) x_i,2 =∑_jA_ij[x_j,2+(1-x_i,1)x_j,1].

Furthermore, by decomposing Eqs.(<ref>-<ref>) into the eigenbasis of A, x_i,n=∑_m=1^NG_n^(m)η_i^(m), where η^(m) is the mth right eigenvector of A with eigenvalue λ^(m), and taking the inner product, ∑_iζ_ix_i, of Eqs.(<ref>-<ref>) with the left eigenvector, ζ^(1), we find x^* to 𝒪(δ). A similar procedure gives p^*, when x=0:

x_i^* = δη_i^(1)/∑_jζ^(1)_jη_j^(1)^2+𝒪(δ^2),
p_i^* = -δζ_i^(1)/∑_jη^(1)_jζ_j^(1)^2+𝒪(δ^2)

(assuming the normalization ∑_iζ^(1)_iη^(1)_i=1). Examining Eqs.(<ref>-<ref>), we see that x_i^* and p_i^* are proportional to the principal right and left eigenvectors of A near the bifurcation, respectively. In particular, if η^(1) and ζ^(1) contain relatively few nodes that are significantly large compared to most others, we expect the infection density and fluctuations to be localized around these nodes <cit.> (a numerical sketch of these leading-order formulas is given below).
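The leading-order expressions are straightforward to evaluate numerically, assuming only that A is strongly connected so that the Perron-Frobenius eigenvectors are well defined. The small star-like example network is hypothetical.

```python
import numpy as np

def near_threshold_equilibria(A, delta):
    """Leading-order endemic state x* and extinction momentum p*
    for beta_t = (1 + delta)/lambda1 with delta << 1."""
    evals, right = np.linalg.eig(A)
    eta = np.abs(right[:, np.argmax(evals.real)].real)   # principal right eigenvector
    evals_l, left = np.linalg.eig(A.T)
    zeta = np.abs(left[:, np.argmax(evals_l.real)].real) # principal left eigenvector
    zeta /= zeta @ eta                                   # normalization sum_i zeta_i eta_i = 1
    x_star = delta * eta / (zeta @ eta**2)
    p_star = -delta * zeta / (eta @ zeta**2)
    return x_star, p_star, np.max(evals.real)

# example: a small star graph, where the centrality is localized on the hub
A = np.zeros((6, 6)); A[0, 1:] = A[1:, 0] = 1.0
x_s, p_s, lam1 = near_threshold_equilibria(A, delta=0.05)
print(lam1, x_s.round(4), p_s.round(4))
```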
Further insight on the effects of topology near threshold can be gained by considering the Action along the path, Eq.(<ref>). To this end, it is useful to introduce a length parameter, a∈[0,1], so that we can express the coordinates as x_i(a)≈x_i^*(1-a) and p_i(a)≈p_i^*a, where the linear form is the simplest satisfying the boundary conditions to 𝒪(δ) (see Fig.<ref> insets). Integrating over the path gives the action near bifurcation, S(a)=∑_i∫_0^a p_i(a')(dx_i/da')da':

S(x(a))=δ^2(a-a^2/2)/∑_jζ^(1)_jη_j^(1)^2∑_lη^(1)_lζ_l^(1)^2+𝒪(δ^3).

We note that Eq.(<ref>) is interesting, since the known expression for the complete graph is generalized by a topological factor that depends on the moments of the centrality distribution. Typically, as the distribution becomes broad, the topological factor in Eq.(<ref>) is reduced, such that the Action differs significantly from the limiting case, η_i=ζ_i=1/√(N)⇒ S=Nδ^2(a-a^2/2) <cit.>. This is intuitive, since for heterogeneous networks, infection is most prevalent around a comparatively small number of nodes, which must recover without reinfection in order for extinction to occur. The effects of heterogeneous eigenvector centrality are explored in more detail in Sec.<ref>.

§ SPECIAL SOLUTIONS

In general, the OP is of interest away from threshold. However, since the OP is a heteroclinic connection of Eqs.(<ref>-<ref>), in practice it must be constructed numerically, e.g., through shooting, or quasi-Newton methods, etc. For example, the paths shown in Fig.(<ref>) were found from an iterative action minimizing method (IAMM) <cit.>. In the IAMM, OPs are generated from a least-squares algorithm that minimizes the residuals between Eqs.(<ref>-<ref>) and finite-difference approximations. The boundary conditions specified in Sec.<ref> are used to close the differencing. Often the small δ limit, Eqs.(<ref>-<ref>), can be used as an initial guess. However, the dimension for the minimization is 2Nd, where d is the number of discrete points in the differencing and N is the size of the network, which is prohibitively large for large N. Therefore, in practice it is necessary to coarse-grain the network in some way. Two such approaches are discussed in the following sections for networks with large spectral gaps (Sec.<ref>) and specified degree distributions (Sec.<ref>).

§.§ Large spectral gaps

In general, A can be usefully expanded in terms of its eigenvalues and eigenvectors: A_ij=∑_n=1^Nλ^(n)η^(n)_iζ^(n)_j. Of particular interest, for strongly connected graphs, η^(1) and ζ^(1) are positive and unique, and λ^(1) is equal to the spectral radius, by the Perron-Frobenius theorem. Moreover, in many strongly connected networks of interest, it is the case that λ^(1)≫λ^(n), with large spectral gaps, and therefore, A_ij≈λ^(1)η^(1)_iζ^(1)_j. In such cases, A can be coarse-grained along a single dimension as demonstrated below.

Given a large spectral gap, a simple coarse-graining is to bin η^(1), assuming a number of bins, B, and a distribution, f_b, for the number of nodes in a given bin, b. In the following, we assume that A is symmetric, so that η^(n)=ζ^(n). In this case, nodes can be ordered according to increasing η_l^(1). The binning procedure follows: starting with the first node, the first bin is filled with nodes sequentially until the number of nodes equals Nf_1; then, the second bin is filled, etc. Once all nodes are binned, the dimension of Eqs.(<ref>-<ref>) can be reduced by replacing η_l^(1), x_l, and p_l with their bin averages ∀ l∈b: η_b≡∑_l∈ bη_l^(1)/(Nf_b), x_b≡∑_l∈ bx_l/(Nf_b), p_b≡∑_l∈ bp_l/(Nf_b) (a minimal sketch of this binning step follows below).
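A minimal sketch of the binning step might look as follows, assuming a uniform bin distribution f_b=1/B as used for the figures, and including the renormalization introduced just below, which preserves the bifurcation point of the reduced system.

```python
import numpy as np

def bin_centrality(eta, B):
    """Uniform binning (f_b = 1/B) of the principal eigenvector eta of a
    symmetric A. eta is assumed normalized so that sum(eta**2) = 1.
    Returns bin averages, bin weights, and node-to-bin labels."""
    N = eta.size
    order = np.argsort(eta)                 # nodes sorted by increasing centrality
    labels = np.empty(N, dtype=int)
    labels[order] = np.arange(N) * B // N   # fill bins sequentially, ~N/B nodes each
    eta_b = np.array([eta[labels == b].mean() for b in range(B)])
    f_b = np.full(B, 1.0 / B)
    eta_b /= np.sqrt(np.sum(eta_b**2 * f_b * N))  # renormalize: sum_b eta_b^2 f_b N = 1
    return eta_b, f_b, labels
```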
This gives the following approximations to Eqs.(<ref>-<ref>) with reduced dimension 2B:

ẋ_b=β̃λ^(1)η_b(1-x_b)e^p_b∑_b'Nf_b'η_b'x_b'-x_be^-p_b,
ṗ_b=β̃λ^(1)η_b∑_b'Nf_b'η_b'[x_b'(e^p_b-1)-(1-x_b')(e^p_b'-1)]-e^-p_b+1.

A final requirement is needed to ensure that the binned and original system have the same bifurcation point. We choose to renormalize η_b so that ∑_bη_b^2f_bN=∑_iη_i^2=1. The above procedure was applied to three networks (considered in Figs.<ref>-<ref>), where the bin distribution was assumed to be uniform for simplicity, f_b=1/B; B was chosen large enough so that the binned centralities closely matched the original (as in Fig.<ref>), but not so large as to preclude using 200-1000 discretization points along the OP in the IAMM. The first network in Fig.<ref>(a) is a weighted Barabási-Albert graph (WBA) with N=500 and initial degree for each node, m=7 <cit.>. Every link was given a random weight, independently drawn from a uniform distribution over the range [0,10] after the network was generated from the standard Barabási-Albert algorithm (λ^(1)=122, λ^(2)=58.6). The second network in Fig.<ref>(b) is a high-resolution American high school contact network (HS) with 788 individuals and links representing close proximity interactions during the course of a day (measured using wireless motes <cit.>). Weights associated with links correspond to contact durations (λ^(1)=6715, λ^(2)=4882). Binning results for the WBA and HS networks are shown in Fig.<ref> with B=55 and B=30, respectively. The third network was generated from a configuration model, CM, with N=1000 and a degree distribution, g(k)=k^-2.5/∑_k'=20^500k'^-2.5, where k is the number of links for a given node (λ^(1)=80.9, λ^(2)=16.3) <cit.>; B=55 for the CM network's OP computation. A related method for computing OPs through networks with specified degree distributions is discussed in Sec.<ref>, and was applied to the networks in Fig.<ref>(c)-(d). (A construction sketch for the WBA example is given below.)
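For reference, a sketch of the WBA construction just described, together with a check of the spectral gap. The exact eigenvalues depend on the random seed, so the values quoted above (λ^(1)=122, λ^(2)=58.6) will only be matched approximately.

```python
import numpy as np
import networkx as nx

def weighted_ba(N=500, m=7, wmax=10.0, seed=0):
    """WBA sketch: a Barabasi-Albert graph whose links receive
    independent uniform weights on [0, wmax]."""
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(N, m, seed=seed)
    A = nx.to_numpy_array(G)
    W = rng.uniform(0.0, wmax, size=A.shape)
    W = np.triu(W, 1); W = W + W.T          # symmetric weights
    return A * W

A = weighted_ba()
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
print("lambda1 = %.1f, lambda2 = %.1f, gap ratio = %.2f" % (lam[0], lam[1], lam[0] / lam[1]))
```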
§.§.§ Scaling with broad centrality distribution

As β̃ is increased above β̃_c, infection increases across the network. In particular, infection density can become high even at low-centrality nodes. In this case the path to extinction through a network is more complex as its global dynamical structure becomes apparent. Example optimal paths for a WBA network when β̃≫β̃_c are shown in Fig.<ref>. We can see that a multi-step structure is visible in the relative change of infection density at different graph positions, which can be compared with the insets (β̃≳1/λ^(1)) <cit.>. Though the path has a more complicated form away from threshold, some characteristic scalings can be captured in this region of parameter space for networks with heterogeneous eigenvector centralities (e.g., f_b∼η_b^-γ) and where the large spectral gap approximation holds. Our approach in this section is to study the unstable and stable linear modes of (x,p) near the endemic and extinct states, respectively, for such networks. These modes approximate the OP (a heteroclinic connection) near the equilibria, and are useful for describing how large fluctuations depend on centrality <cit.>.

With this end in mind, we consider the dynamics of (x_i,p_i)=(x_i^*+ϵ_i^o,μ_i^o) and (x_i,p_i)=(ϵ_i^in,p_i^*+μ_i^in), for small ϵ and μ, given A_ij≈λ^(1)η^(1)_iη^(1)_j (below, we drop the superscript (1) in η and λ for convenience). Similar to Sec.<ref>, when β̃≳β̃_c, it is straightforward to show that ϵ_i^o, ϵ_i^in, μ_i^o and μ_i^in are simply proportional to η_i.

Fig.<ref> shows centrality scalings for the principal linear eigen-modes of Eqs.(<ref>-<ref>) near the equilibria. The upper dashed lines demonstrate the predicted scaling ϵ_i^in/ϵ_j^in∼η_i/η_j∼μ_i^o/μ_j^o for a WBA network where the dark blue/red curves correspond to β̃ increasingly close to threshold (ϵ_i^o and μ_i^in scale similarly in this region). However, as β̃ is increased, shown in light blue/red, we can see that the scaling changes significantly <cit.>. In order to understand the change in scaling as β̃ is increased, we first consider the equilibria x_i^* and p_i^*. Given the large spectral gap assumption, we find a simple form for each that is dependent on two parameters, X and P:

x_i^* = Xβ̃λη_i/(1+Xβ̃λη_i),
p_i^* = -ln[1+β̃λη_i(N<η>-P)],

satisfying: X=∑_jη_jx^*_j and P=∑_jη_je^p_j^*, where <η> is the average eigenvector centrality <cit.>. In particular, by assuming that infection densities are high in the endemic state at most graph positions, i.e., β̃λη_iN<η>≫1, then X≈N<η>-1/[β̃λ<η>] and P≈1/[β̃λ<η>]. Substituting these approximations into the linearized Eqs.(<ref>-<ref>) allows us to determine the dependence of the eigen-modes, (ϵ_i^o(t),μ_i^o(t))=e^tσ^o(ϵ_i^o,μ_i^o) and (ϵ_i^in(t),μ_i^in(t))=e^tσ^in(ϵ_i^in,μ_i^in), on η_i near the equilibria.

Since infection densities are high near the endemic state away from threshold, we expect the most well connected nodes to be quickly reinfected after recovery, as compared to nodes that are less well connected. Therefore, we expect the OP out of the endemic state to correspond with an initial decrease in infection at low centrality positions. The scaling for this initial step is determined by the eigensolution of the linearized Eq.(<ref>) at (x_i^*,0):

1=∑_jβ̃λη_j^2(1-x^*_j)/[1+β̃λη_jX-σ^o],
μ_i^o=β̃λη_i∑_jη_jμ_j^o(1-x_j^*)/[1+β̃λη_iX-σ^o].

In particular, σ^o is positive and grows from zero with β̃>β̃_c <cit.>. When β̃λη_iN<η>≫1 and f_b∼η_b^-γ for large η, the summation in Eq.(<ref>) ∼∫η^-γ+1dη/(η-const.), and converges in the limit of large maximum centrality, η_max, when γ>2. This implies that σ^o does not depend sensitively on η_max in this region and therefore we can consider the limit of large centrality in Eq.(<ref>). By inspecting μ_i^o for large η_i, we see that it tends to a constant, i.e., μ_i^o/μ_j^o∼1 (since the sum over j is i-independent), in good agreement with numerical solutions of Eqs.(<ref>-<ref>) away from threshold – shown in Fig.<ref>(b) (light red). Interestingly, μ^o_i becomes largest for small η_i (light red) and increases quickly to a constant for large η_i. Since the momenta are nearly equal across nodes in the network, the Action's derivatives w.r.t. infection density are nearly equal across nodes. A similar procedure gives the scaling ϵ_i^o/ϵ_j^o∼η_j/η_i, which is found by expanding the linearized Eq.(<ref>) in 1/β̃λη_iN<η>, e.g., x_i^*≈1-1/β̃λη_iN<η> (Sec.<ref>). Following the same approach, the scaling of the OP near the extinct state is determined by the eigen-solution of the linearized Eq.(<ref>) at (0,p_i^*):

1=∑_jβ̃λη_j^2e^p_j^*/[σ^in+e^-p_j^*],
ϵ_i^in=β̃λη_ie^p_i^*∑_jη_jϵ_j^in/[σ^in+e^-p_i^*].

In particular, σ^in is negative, decreases from zero with β̃>β̃_c, and is similarly insensitive to large η_max.
Hence, taking the limit of large η_i, given e^-p_i^*≈1+β̃λη_i N<η>[1-1/β̃λη_i N<η>], implies ϵ_i^in/ϵ_j^in∼η_j/η_i in Eq.(<ref>) – as found near the endemic state and shown in Fig.<ref>(b) (light blue). However, in contrast to the behavior near x_i^*, we find that μ_i^in increases with η_i, for small η_i, before reaching a constant, μ_i^in/μ_j^in∼1 (see details in Sec.<ref> and Fig.<ref>). The scalings near the extinct state imply that the last segment of the OP is coincident with a final recovery of residual infections at low-centrality nodes, while the momentum is largest at high centralities. Putting the scalings near the equilibria together, we can infer that infection density decreases rapidly in high-centrality graph positions at a boundary layer between the endemic and extinct states – since we have shown that their change is small compared to low centralities near the equilibria <cit.>. This can be seen in Fig.<ref>(a), where projections of the OP into high and low-centralities, on the x and y axes respectively, show a characteristic pattern in which segments with large horizontal slope occur between two segments with large vertical slope.

§.§ Degree distributions

In addition to understanding the OP for a given network defined by A, it is useful to understand the qualitative structure of paths and Actions for networks with similar statistical properties <cit.>. A popular approach is to consider networks with a specified distribution for the fraction of nodes with k links, g(k) (where k is called the degree), which is the focus of this section. Often, additional information is stipulated, such as a degree-correlation function – typically in the form of a specified probability that a link starting from a node with degree k leads to a node with degree k', o(k'|k) <cit.>. OPs for networks with such properties can be found by approximating A given these distributions, and substituting into Eqs.(<ref>-<ref>).

As is customary, we replace A_ij by its expectation value in the ensemble of simple networks with g(k) and o(k'|k), which is called the annealed network approximation. In particular, A_ij is approximated by the probability that nodes i and j are connected, A_ij≈o(k_j|k_i)k_i/[Ng(k_j)], or the probability that node i is connected to any node with degree k_j along a single link, multiplied by the number of possible links, and divided by the number of nodes with degree k_j <cit.>. Note that for link consistency, A_ij=A_ji, the distributions must satisfy the constraint: k o(k'|k)g(k)=k' o(k|k')g(k') <cit.>. With this substitution for A_ij into Eqs.(<ref>-<ref>), Hamilton's equations depend on the density of infection for nodes with the same degree k, x_k, and their momentum, p_k:

ẋ_k= β̃k(1-x_k)e^p_k∑_k'o(k'|k)x_k'-x_ke^-p_k,
ṗ_k= β̃k∑_k'o(k'|k)[x_k'(e^p_k-1)-(1-x_k')(e^p_k'-1)]-e^-p_k+1.

Notably, Eqs.(<ref>-<ref>) reduce to the heterogeneous mean-field dynamics for networks when p_k≡0; p_k≠0 entails extinction in degree-correlated topologies with dimension of (x,p) equal to twice the number of degree classes (a minimal integration sketch of the p_k≡0 limit is given below). The analysis and results for degree-distributed networks are analogous to Sec.<ref>-Sec.<ref>.
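As an illustration of the p_k≡0 limit, the sketch below integrates the heterogeneous mean-field equations for an uncorrelated kernel, o(k'|k)=k'g(k')/<k>, to obtain the endemic state x_k^*. The truncated power-law parameters mirror the CM example of the previous subsection; the forward-Euler stepping is an assumption for brevity.

```python
import numpy as np

def hmf_endemic_state(kvals, gk, bt, t_max=200.0, dt=0.01):
    """Integrate the p_k = 0 (heterogeneous mean-field) degree-class
    equations with the uncorrelated kernel o(k'|k) = k' g(k') / <k>."""
    kmean = np.sum(kvals * gk)
    x = np.full(kvals.size, 0.5)                  # arbitrary initial infection
    for _ in range(int(t_max / dt)):
        theta = np.sum(kvals * gk * x) / kmean    # probability a link points at infection
        x += dt * (bt * kvals * (1.0 - x) * theta - x)
    return x

# truncated power law g(k) ~ k^-2.5, k in [20, 500], just above threshold
kvals = np.arange(20, 501, dtype=float)
gk = kvals**-2.5; gk /= gk.sum()
lam = np.sum(kvals**2 * gk) / np.sum(kvals * gk)  # annealed lambda = <k^2>/<k>
x_star = hmf_endemic_state(kvals, gk, bt=1.2 / lam)
print("total density of infected:", np.sum(gk * x_star))
```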
For example, for degree-distributed networks the familiar proportionality of the Action to the number of nodes in A is found from Eq.(<ref>) <cit.>:

S(x)=N∑_kg(k)∫_x_k^*^x_kp_kdx'_k.

Moreover, with the appropriate substitution of the largest eigenvalue, λ, and the corresponding right eigenvector, v_k, of ko(k'|k) in Eq.(<ref>) <cit.>, we find the Action at the extinct state for degree-correlated networks near the epidemic threshold:

S=1/2Nδ^2<v^2>^3/<v^3>^2+𝒪(δ^3),

where <v^n>=∑_kg(k)v_k^n. Extinction paths and times for two example networks are shown in Figs.<ref>-<ref> <cit.>. The networks have a bimodal degree-distribution with positive (PC) and negative (NC) degree-correlations (Figs.<ref>(c) and (d) respectively), where positive implies an increased probability relative to an uncorrelated network for nodes with similar degree to share an edge. Correlated bimodal networks can be constructed in a straightforward manner as detailed in Sec.<ref>. Fig.<ref> (c)-(d) shows OPs for example parameters computed from Eqs.(<ref>-<ref>), and projected into the densities of infected low and high-degree nodes. Qualitatively, we can see that OP projections into infection densities are significantly closer to lines with unit slope (which is the case for uncorrelated networks with small variance in k) in the NC case (d) than for the PC, (c). The change in the OP's shape with correlations suggests a reduction/enhancement of the effects of network heterogeneity with negative/positive correlations. For positive correlation, infection is more prevalent around high-degree nodes. This is reflected in the principal eigenvectors of ko(k'|k) for the two examples, where the low-degree component is 5.4 times greater in the NC (parameters in Fig.<ref>). In fact the topological factor, <v^2>^3/<v^3>^2, is 2.2 times greater for the NC, given the same N and distance to bifurcation, δ=β̃λ-1≳0. Therefore we expect the probabilities for large fluctuations to be smaller by the same power and extinction times to be larger by the same power. Equivalently, if comparing fixed extinction time, the PC must be taken to larger β̃λ and/or N. This is demonstrated in Fig.<ref>, where the largest times shown correspond to β̃λ=1.9 and N=400 for the NC, and β̃λ=2.8 and N=500 for the PC.

The above example raises an interesting question of how fluctuations and extinction times vary with statistical properties in a network, such as degree-heterogeneity, which can be anticipated from the network Action. A more realistic class of heterogeneous networks has power-law degree distributions, g(k)∼ k^-γ, where the level of degree-heterogeneity grows with decreasing γ. Fig.<ref> shows the predicted Actions at the extinct state as a function of γ for truncated and uncorrelated, o(k|k')=kg(k)/<k>, power-law distributions with several fixed distances to threshold. Interestingly, for such networks we can see that Actions vary by as much as 60% when β̃λ=2, with broader distributions resulting in significantly smaller Actions, and therefore exponentially larger probabilities of large fluctuations and exponentially smaller extinction times. The Action curves were found by solving Eqs.(<ref>-<ref>) with the boundary conditions specified in Sec.<ref>, and computing Eq.(<ref>). In addition to computation, the lower black curve in Fig.<ref> gives the analytic scaling near threshold, found from Eq.(<ref>), by substituting the eigenvector for uncorrelated random networks, v_k=k ⇒ S=Nδ^2<k^2>^3/[2<k^3>^2] <cit.> (evaluated numerically in the sketch below).
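The near-threshold prediction is simple enough to evaluate directly. A sketch, assuming the uncorrelated eigenvector v_k=k and the 1/2 prefactor of the near-threshold Action:

```python
import numpy as np

def action_near_threshold(gamma, N, delta, kmin=20, kmax=500):
    """Near-threshold Action at the extinct state for an uncorrelated,
    truncated power-law network: S = (N delta^2 / 2) <k^2>^3 / <k^3>^2."""
    k = np.arange(kmin, kmax + 1, dtype=float)
    g = k**-gamma; g /= g.sum()
    k2, k3 = np.sum(g * k**2), np.sum(g * k**3)
    return 0.5 * N * delta**2 * k2**3 / k3**2

# broader distributions (smaller gamma) give smaller Actions
for gamma in (2.5, 3.0, 3.5, 4.0):
    print(gamma, action_near_threshold(gamma, N=1000, delta=0.1))
```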
For the computed curves, it was useful to reduce the dimension for the IAMM by binning the distribution o(k|k')=kg(k)/<k> with a similar procedure as Sec.<ref>. Our approach was to select a small bin width for kg(k)/<k> (e.g., 0.015), and sequentially add degree classes to a bin, starting with the smallest k and first bin, until the sum of kg(k)/<k> over k in a given bin equaled or exceeded the bin width. Then, the next bin was filled with the same bin width, etc. In the final step, degrees in Eqs.(<ref>-<ref>) were replaced by their bin's average and o(k'|k) by the sum over kg(k)/<k> in each bin <cit.>.

§.§.§ System-size scaling for modest N

Another interesting feature of extinction times in power-law networks concerns their scaling with system-size when there is no truncation in g(k). It is known for degree-homogeneous networks, such as complete or Erdős-Rényi graphs, that the Action scales linearly with the system size <cit.>. Below we show that near threshold for modest N, a range of scalings is possible depending on the exponent, γ. As indicated above, the Action near threshold depends on a topological factor that is a function of the moments of g(k), which can depend on N.

Here we continue to use the annealed network approximation, though for very large N this is known to break down for random networks with unbounded degree as localization effects become important <cit.>. Therefore, we restrict ourselves to N and minimum degree, k_min, such that λ≈<k^2>/<k>. When γ>4, <k^3> is finite for power-law networks, i.e., independent of N for large N, and thus S∼N near threshold, β̃≳<k>/<k^2> <cit.> – though higher-order terms may be N-dependent for δ≪1, we expect them to grow more slowly than N <cit.>. On the other hand, if γ<4, <k^3> is a function of the maximum degree, k_max, which follows a simple scaling: k_max∼ k_minN^1/[γ-1], for a finite network with minimum degree k_min <cit.>. The customary approach is to approximate the statistical moments of g(k) given k_max, allowing one to find the scaling of S with N. For example, when k_min≫1, the discrete sum, <k^3>=∑_kg(k)k^3≈C̃∫_k_min^k_maxk^3-γdk. Introducing γ=3+α with α∈(0,1), we get <k^3>≈C̃k_min^(1-α)N^((1-α)/(2+α))/(1-α), where C̃=(2+α)k_min^(2+α) (from normalization of g(k)). Computing the moments of g(k) in this way gives the Action at the extinct state to 𝒪(δ^2) from Eq.(<ref>):

S=(δ^2/2)[(1-α)^2(2+α)/α^3]N^(1-2(1-α)/(2+α)).

The above suggests that in the heterogeneous mean-field approximation, the 𝒪(δ^2) contribution to the Action can increase sub-linearly in N for γ∈(3,4) near threshold <cit.>, as suggested in Fig.<ref> (a quick numerical check is given below). However, for very large networks, and no truncation in k_max, eventually λ∼max{√(k_max),<k^2>/<k>}≫1, and the analysis presented is no longer valid, including the expansion in δ. Moreover, there is some evidence for multiple epidemic thresholds in networks with unbounded k_max as N→∞ <cit.>. Since such issues are not yet resolved, we leave the description of extinction in very large networks with unbounded degree distributions, and the crossover between localized and delocalized extinction, for future study.
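A quick numerical check of this sub-linear scaling, under the same annealed assumptions (k_max = k_min N^(1/(γ-1)), moments from the discrete sums); finite-k_min corrections shift the fitted value somewhat.

```python
import numpy as np

def action_exponent(gamma):
    """Predicted power nu in S ~ N^nu for 3 < gamma < 4 (alpha = gamma - 3)."""
    alpha = gamma - 3.0
    return 1.0 - 2.0 * (1.0 - alpha) / (2.0 + alpha)

def action_untruncated(gamma, N, delta, kmin=20):
    """S to O(delta^2) with k_max ~ k_min N^(1/(gamma-1)) (no imposed truncation)."""
    kmax = int(kmin * N**(1.0 / (gamma - 1.0)))
    k = np.arange(kmin, kmax + 1, dtype=float)
    g = k**-gamma; g /= g.sum()
    k2, k3 = np.sum(g * k**2), np.sum(g * k**3)
    return 0.5 * N * delta**2 * k2**3 / k3**2

gamma = 3.5
Ns = np.array([10**3, 10**4, 10**5, 10**6])
S = np.array([action_untruncated(gamma, N, 0.1) for N in Ns])
fit = np.polyfit(np.log(Ns), np.log(S), 1)[0]
print("fitted exponent: %.2f, predicted: %.2f" % (fit, action_exponent(gamma)))
```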
§ CONCLUSION

This work dealt with the extinction of long-lived endemic states above epidemic thresholds on static finite networks with infection dynamics given by the stochastic SIS model. The optimal path to extinction (OP), the distribution of large fluctuations, and the average extinction time were computed by combining mean-field and WKB-approximation techniques. The path-based formalism presented enabled us to predict extinction in general networked populations, and extract several of its intriguing signatures in complex topologies, including the multistep scaling of the OP in networks with heterogeneous eigenvector centrality, as well as an increase in the probability of large fluctuations with increased topological heterogeneity. Although theoretical in nature, the generality of our approach allowed us to consider several applications, including weighted empirical and degree-correlated topologies.

Though the results show good qualitative and quantitative agreement with Monte-Carlo simulations in both real and synthetic networks, improved accuracy can be achieved in a straightforward manner by following our synthesized prescription, namely: using as an ansatz in a network's master equation the exponential function of an Action (typically requiring some accurate mean-field approximation), and taking a large system-size limit. The result is a Hamilton-Jacobi equation that generates a dynamical system with twice the dimension of the mean-field. The OP can be found by solving the two-point boundary value problem of Hamilton's equations of motion beginning at an endemic state and ending at an extinct state, which define the OP endpoints. Thus the theory changes the stochastic analysis of large fluctuations in networks to one that may be analyzed using a deterministic formalism whose zero-fluctuation limit is a mean-field theory. Furthermore, our approach can be more generally applied to other questions concerning noise and network dynamics, such as epidemic extinction in adaptive networks, switching in social networks, network inference in the presence of large fluctuations, and optimal control of networks with fluctuating dynamics <cit.>.

§ ACKNOWLEDGMENTS

J. H. is a National Research Council postdoctoral fellow. I.B.S. was supported by U.S. Naval Research Laboratory funding (N0001414WX00023) and the Office of Naval Research (N0001416WX00657 and N0001416WX01643). We are very grateful to C. R. Myers and M. Assaf for useful discussions.

§ APPENDIX A

As described in Sec.<ref>, ϵ_i^o and μ_i^in satisfy the linearized Eqs.(<ref>-<ref>). When β̃λη_iN<η>≫1, the approximate linear systems are:

∑_jη_jϵ_j^o/N<η>≈ ϵ_i^o[σ^o+1+β̃λη_iN<η>(1-(1/β̃λ N<η>^2))]-μ_i^o[2-(1/β̃λ N<η>^2)-(1/β̃λη_i N<η>)].

[-1+((σ^in-1)/β̃λη_i N<η>)]μ_i^in≈ -∑_jη_jμ_j^in/N<η>·1/β̃λη_jN<η>+∑_jη_jϵ_j^in/N<η>[-2+(1/β̃λη_iN<η>)+(1/β̃λη_jN<η>)].

Since the sums in Eqs.(<ref>-<ref>) are independent of i, the limit of large β̃λη_iN<η> gives: ϵ_i^o/ϵ_j^o∼η_j/η_i and μ_i^in/μ_j^in∼1. The latter can be seen in Fig.(<ref>). However, as β̃≫β̃_c the continuous spectra, σ^o_i and σ^in_i, for large N of the linearized Eqs.(<ref>-<ref>) become relevant: ϵ_i^in,μ_i^o∼δ(η-η_i), and

σ^in_i =β̃λη_i^2e^p_i^*-e^-p_i^*,
σ^o_i =1+β̃λη_i[∑_jη_jx_j^* -η_i(1-x_i)].

This occurs as the denominators of Eq.(<ref>) and Eq.(<ref>) approach zero, and the single-mode analysis of Sec.<ref> is invalid. As a consequence, for very large β̃, the relevant modes directing the OP to extinction near the equilibria are extremely localized around low-centrality nodes.

§ APPENDIX B

Correlated bimodal networks can be constructed as follows. We assume that a fraction, p, of the network has high degree near k_2 while the remaining nodes have low degree near k_1.
To build such networks, high-degree nodes are connected to each other with probability k_2^2/[N<k>]+w, where w measures the assortativity above the uncorrelated construction and <k>=(k_1(1-p)+k_2p). On the other hand, high and low-degree nodes are connected with probability k_1k_2/[N<k>]-w, and low-degree nodes are connected with probability k_1^2/[N<k>]+w' – where w' is determined from the link-consistency constraint. In this way the degree distribution has two peaks centered around k_1 and k_2 as N→∞, and Eqs.(<ref>-<ref>) can be used to capture the OP and average extinction times assuming two degree classes with: o(k_2|k_2)=w+k_2p/<k>, o(k_1|k_2)=-w+k_1(1-p)/<k>, o(k_1|k_1)=w'+k_1(1-p)/<k>, and o(k_2|k_1)=-w'+k_2p/<k> (a construction sketch follows).
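A sketch of this construction. The choice of w' below, which keeps the expected low-degree mean at k_1, is one reading of the link-consistency constraint and is labeled as an assumption; all parameter values are illustrative.

```python
import numpy as np

def correlated_bimodal(N=500, k1=5, k2=50, p=0.1, w=0.0, seed=0):
    """Bimodal network sketch: N*p nodes with target degree k2, the rest
    with target degree k1; w > 0 adds assortativity."""
    rng = np.random.default_rng(seed)
    kmean = k1 * (1 - p) + k2 * p
    wp = w * p / (1 - p)   # assumption: keeps the expected low-degree mean near k1
    hi = np.zeros(N, dtype=bool); hi[:int(N * p)] = True
    # pairwise connection probabilities for the three node-pair types
    P = np.where(np.outer(hi, hi), k2**2 / (N * kmean) + w,
        np.where(np.outer(~hi, ~hi), k1**2 / (N * kmean) + wp,
                 k1 * k2 / (N * kmean) - w))
    A = rng.random((N, N)) < P
    A = np.triu(A, 1); A = (A + A.T).astype(float)   # symmetric simple graph
    return A, hi

A, hi = correlated_bimodal(w=0.002)
print("mean degrees (high, low):", A[hi].sum(1).mean(), A[~hi].sum(1).mean())
```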
§ REFERENCES

[Pastor] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Rev. Mod. Phys. 87, 925 (2015).
[Hindes1] J. Hindes and I. B. Schwartz, Phys. Rev. Lett. 117, 028302 (2016).
[Anderson] R. M. Anderson and R. M. May, Infectious Diseases of Humans (Oxford University Press, 1991).
[Dorogovtsev] S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Rev. Mod. Phys. 80, 1275 (2008).
[Vespignani1] A. Barrat, M. Barthélemy, and A. Vespignani, Dynamical Processes on Complex Networks (Cambridge University Press, 2008).
[Doering] C. R. Doering, K. V. Sargsyan, and L. M. Sander, Multiscale Model. Simul. 3(2), 283 (2005).
[Schwartz] M. I. Dykman, I. B. Schwartz, and A. S. Landsman, Phys. Rev. Lett. 101, 078101 (2008).
[Meerson2] A. Kamenev and B. Meerson, Phys. Rev. E 77, 061107 (2008).
[Meerson3] O. Ovaskainen and B. Meerson, Trends Ecol. Evol. 25, 643 (2010).
[Assaf2] M. Assaf and B. Meerson, arXiv:1612.014702v2 (2016).
[Assaf1] M. Assaf and B. Meerson, Phys. Rev. E 81, 021116 (2010).
[Nasell] I. Nåsell, Extinction and Quasi-stationarity in the Stochastic Logistic SIS Model (Springer, 2011).
[VaccRev] Z. Wang, C. T. Bauch, S. Bhattacharyya, A. d'Onofrio, P. Manfredi, M. Perc, N. Perra, M. Salathé, and D. Zhao, Phys. Rep. 664, 1 (2016).
[Drakopoulos] K. Drakopoulos, A. Ozdaglar, and J. N. Tsitsiklis, IEEE Trans. Netw. Sci. Eng. 1(2), 67 (2014).
[Meerson] A. Kamenev, B. Meerson, and B. Shklovskii, Phys. Rev. Lett. 101, 268103 (2008).
[Schwartz2] I. B. Schwartz, E. Forgoston, S. Bianco, and L. B. Shaw, J. R. Soc. Interface 8, 1699 (2011).
[Billings] L. Billings, L. Mier-y-Teran Romero, B. S. Lindley, and I. B. Schwartz, PLoS One 8, e70211 (2013).
[Motter] D. K. Wells, W. L. Kath, and A. E. Motter, Phys. Rev. X 5, 031036 (2015).
[Dykman] M. Khasin and M. I. Dykman, Phys. Rev. Lett. 103, 068101 (2009).
[Kamenev] V. Elgart and A. Kamenev, Phys. Rev. E 70, 041106 (2004).
[Meerson4] B. Meerson and P. V. Sasorov, Phys. Rev. E 84, 030101(R) (2011).
[Schwartz3] I. B. Schwartz, L. Billings, M. Dykman, and A. S. Landsman, J. Stat. Mech.: Theory Exp. P01005 (2009).
[Dykman2] M. I. Dykman, E. Mori, J. Ross, and P. M. Hunt, J. Chem. Phys. 100, 5735 (1994).
[Friedlin] M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems, 2nd ed. (Springer-Verlag, New York, 1998).
[Durrett] S. Chatterjee and R. Durrett, Annals of Probability 37, 2332 (2009).
[Mountford] T. Mountford, D. Valesin, and Q. Yao, Electron. J. Probab. 18, 103 (2013).
[Van] R. van de Bovenkamp and P. Van Mieghem, Phys. Rev. E 92, 032806 (2015).
[Munoz] M. A. Muñoz, R. Juhász, C. Castellano, and G. Ódor, Phys. Rev. Lett. 105, 128701 (2010).
[Buono] C. Buono, F. Vazquez, P. A. Macri, and L. A. Braunstein, Phys. Rev. E 88, 022813 (2013).
[Assaf3] M. Assaf and M. Mobilia, Phys. Rev. Lett. 109, 188701 (2012).
[Vespignani] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[Gillespie] D. T. Gillespie, J. Comput. Phys. 22, 403 (1976).
[Ott2] G. Barlev, T. M. Antonsen, and E. Ott, Chaos 21, 025103 (2011).
[Mata] A. S. Mata and S. C. Ferreira, Europhys. Lett. 103, 48003 (2013).
[Goltsev] A. V. Goltsev, S. N. Dorogovtsev, J. G. Oliveira, and J. F. F. Mendes, Phys. Rev. Lett. 109, 128702 (2012).
[FN5] Techniques that include correlations can be used to significantly improve accuracy, but at the cost of higher dimensionality <cit.>.
[Lindley] B. S. Lindley, L. B. Shaw, and I. B. Schwartz, Europhys. Lett. 108, 58008 (2014).
[FN1] The deviation from the expected scaling is likely due to the pre-factor dependence (B), the binning approximation (Sec.<ref>), and the inaccuracy of the mean-field assumption, particularly in the estimate for β̃_c <cit.>.
[Lindley2] B. S. Lindley and I. B. Schwartz, Physica D 255, 22 (2013).
[Albert] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
[Salathe] M. Salathé, M. Kazandjieva, J. W. Lee, P. Levis, M. W. Feldman, and J. H. Jones, Proc. Natl. Acad. Sci. U.S.A. 107, 22020 (2010).
[Newman2] M. E. J. Newman, Networks: An Introduction (Oxford University Press, 2010).
[FN6] The usual stable and unstable modes of the endemic and disease-free states, respectively (corresponding to p≡0), are not discussed in Sec.<ref>.
[FN2] As long as 1+β̃λη_jX-σ^o>0, and similarly for Eq.(<ref>). As β̃λ gets very large, these conditions can be violated and the analysis presented in Sec.<ref> must be supplemented (see Sec.<ref>).
[FN3] Given the monotonic dynamics of the path, an upper bound for the scaling near the boundary layer is dx_i/dx_j∼η_i/η_j <cit.>.
[Pastor2] R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 65, 036104 (2002).
[Pastor3] M. Boguñá and R. Pastor-Satorras, Phys. Rev. E 66, 047104 (2002).
[Colizza] E. Valdano, L. Ferreri, C. Poletto, and V. Colizza, Phys. Rev. X 5, 021005 (2015).
[Hindes4] J. Hindes, K. Szwaykowska, and I. B. Schwartz, Phys. Rev. E 94, 032306 (2016).
[FN4] Due to the relatively small network sizes in Figs.<ref>-<ref>, the mean degrees for high and low-degree nodes in the bimodal networks (Sec.<ref>) differ from the assumed values of 50 and 5. This introduces a quantitative error, which can be reduced by replacing k_1 and k_2 in predictions by the measured mean degrees for the two node types.
[Cohen] R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. 85, 4626 (2000).
[Mata2] A. S. Mata and S. C. Ferreira, Phys. Rev. E 91, 012816 (2015).
Influence of fracture criteria on dynamic fracture propagation in a discrete chain

Nikolai Gorbushin¹, Gennaro Vitucci¹, Grigory Volkov², and Gennady Mishuris¹

¹Department of Mathematics, IMPACS, Aberystwyth University, UK
²Department of Theory of Elasticity, St. Petersburg State University, Russia

§ ABSTRACT

The extent to which time-dependent fracture criteria affect the dynamic behavior of fracture in a discrete structure is discussed in this work. The simplest case of a semi-infinite isotropic chain of oscillators has been studied. Two history-dependent criteria are compared to the classical one of threshold elongation for linear bonds. The results show that steady-state regimes can be reached in the low subsonic crack speed range where it is impossible according to the classical criterion. Repercussions in terms of load and crack opening versus velocity are explained in detail. A strong qualitative influence of history-dependent criteria is observed at low subsonic crack velocities, especially in relation to achievable steady-state propagation regimes.

§ INTRODUCTION

Studying fracture propagating in discrete structures results in a tool capable of analyzing a broad range of phenomena which would not emerge in the settings of continuum mechanics. The approach has found fruitful applications when dealing with crystals, cellular materials, cracks in fiber-reinforced matrices and investigations at the atomic level (e.g. <cit.>). Lattice structure models become even more crucial in the framework of dynamic propagation. In this respect, many kinds of instabilities can be predicted by intuitive considerations without the need of ad hoc hypotheses (see <cit.>). Finally, discrete models can also be treated as the discretization of the corresponding continuum problems, where the choice of the fracture criterion may play an important role when dynamic fracture propagation is in question; for example, it may lead to different predictions on the stability of possible steady-state regimes (e.g. <cit.>). A lattice structure, in the dynamic scenario, is composed of concentrated masses interacting via links characterized by an interaction potential. In this paper the latter is parabolic, relating only the closest neighbors, while non-local interactions have been studied in <cit.>. The analyzed structure is mono-dimensional: a chain of oscillators, which is detached from a substrate that reflects the problem symmetry.

The focus of this work is to investigate the influence of the fracture criteria of the links on the dynamic fracture propagation in such a medium. Exclusively cracks which advance at constant speed, i.e. in a steady state, are analyzed. Such regimes, indeed, have traditionally been of extreme interest in the field of dynamic fracture and have repeatedly been observed experimentally. A few classical studies can be highlighted, for example, in <cit.>, while the topic gained new attention more recently in discrete structures such as bridges <cit.> or xyloexplosives <cit.>. Before addressing the problem of the propagation, though, the behavior of a single link is discussed hereafter. A linear elastic spring can be quasi-statically elongated to failure, and its final elongation value, u_s, is supposed to be a known constant. The simplest and most common failure criterion neglects dynamic effects on the spring resistance.
It identifies the displacement u as critical when

min t: u(t)=u_s,

by which the time t=t_f when the fracture occurs can be found.

A fracture event, though, in many materials turns out to be not simply determined by an instantaneous threshold value for some energy measure like the maximum elongation established above. We deal in the present work with non-instantaneous fracture criteria which nevertheless do not change the material stiffness. The rate with which a body is deformed or an integral measure of the deformation energy provided to a bond before it breaks are examined: the incubation time (IT) and the Tuler-Butcher (TB) criteria.

According to the first of those formulations, the average stress, or equivalently the average linear elastic stretch, over a period of time preceding the breakage is considered as the cause of fracture. Such a period is actually called the incubation time τ. The criterion, originally formulated in terms of stresses in <cit.>, can be written for the elastic bond in question in terms of the elongation as

min t: 1/τ∫_t-τ^t u(θ)dθ=u_s.

Notice that u_s is still the threshold elongation of the spring when measured statically. The main idea of this approach is that a transient process does not occur instantly but includes a more complicated breakage process. Its realization demands a time period depending on the intensity and shape of loading and on the internal structure of the fractured media. The introduction of the incubation time τ as a strength parameter in addition to u_s allows one to predict the stress level at the instant of fracture for a variety of loading pulses with different intensities and shapes (<cit.>). So the physical meaning of the fracture incubation time is a characteristic time determining the material's ability to resist dynamic loading. The IT approach has been shown to be reliable in different branches of mechanics and physics, such as the dynamic fracture of rocks and concretes, the dynamic yielding of metals, the acoustic ultrasonic cavitation of liquids, etc. (<cit.>; <cit.>). Particularly, this criterion was successfully applied to problems of fracture in materials with pre-existing cracks. In this case the criterion can be reformulated in terms of the stress intensity factor. The steady-state fracture propagation in a lattice structure is indeed crack growth, and this analogy allows the application of the criterion to the problem considered here.

On the other hand, it has been observed that cumulative damage can also be the cause of fracture. A way to quantify it is via the Tuler-Butcher criterion discussed in <cit.>. Again, one linear elastic bond is statically tested until it breaks at u=u_s. In a TB material, u_s represents the elongation to be exceeded in dynamics as

min t: ∫_0^t H(u(θ)-u_s)(u(θ)/u_s-1)^2 dθ=D

before the fracture occurs at t=t_f. Here, H(u-u_s) is the Heaviside step function by which it is possible to write that only the work of the overstretch u-u_s contributes to damage. Note that also this criterion was originally formulated in terms of stresses and that the exponent two was left general in the original formulation, but turns out to be such in most experiments. In this way the physical meaning of the criterion is that a maximum work has to be done by an external overload on the spring before it collapses. Looking at Eq.(<ref>), it turns out that, as for the IT criterion, TB materials can be regarded as one possible extension of ideal brittleness. The latter can be retrieved indeed by setting the cumulated energetic damage D to zero. (A numerical sketch of all three criteria is given below.)
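The three criteria are easy to compare numerically for an arbitrary sampled elongation history u(t). The sketch below implements them by direct quadrature; the ramp load in the demo anticipates the worked example of the next paragraph, and all parameter values are illustrative assumptions.

```python
import numpy as np

def t_brittle(t, u, us):
    """First time the elongation reaches the static threshold u_s."""
    idx = np.flatnonzero(u >= us)
    return t[idx[0]] if idx.size else None

def t_incubation(t, u, us, tau):
    """First time the trailing average of u over a window tau reaches u_s (IT)."""
    dt = t[1] - t[0]
    n = max(int(round(tau / dt)), 1)
    upad = np.concatenate([np.zeros(n), u])     # u = 0 before loading starts
    cs = np.cumsum(upad)
    avg = (cs[n:] - cs[:-n]) * dt / tau         # rolling-window integral / tau
    idx = np.flatnonzero(avg >= us)
    return t[idx[0]] if idx.size else None

def t_tuler_butcher(t, u, us, D):
    """First time the accumulated overstretch work reaches D (TB)."""
    dt = t[1] - t[0]
    dmg = np.cumsum(np.where(u > us, (u / us - 1.0)**2, 0.0)) * dt
    idx = np.flatnonzero(dmg >= D)
    return t[idx[0]] if idx.size else None

# ramp loading u = r*t, with elongations measured in units of u_s
r, us, tau, D = 0.5, 1.0, 1.0, 0.2
t = np.arange(0.0, 10.0, 1e-4); u = r * t
print(t_brittle(t, u, us), t_incubation(t, u, us, tau), t_tuler_butcher(t, u, us, D))
```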
The criterion has found fruitful applications in analyzing spallation, impact loading, and thermal-shock-induced fracture in rocks, glass, aluminum, and copper (see <cit.>; <cit.>; <cit.>).

In order to illustrate the peculiarities of the criteria, one can imagine applying a ramp displacement of rate r at t=0 to three links with the same static strength u_s but different failure behavior. The ideally brittle one would break at t_f=u_s/r as soon as its elongation reaches u_s. The IT spring would break, according to Eq.(<ref>), at t_f=u_s/r+τ/2 if r<2u_s/τ or at t_f=√(2u_sτ/r) otherwise, thus establishing a distinction between low and high deformation rates. The criterion shows a delayed failure causing an ultimate elongation bigger than the static one in this loading condition. In the case of a non-monotonic load, though, such delay might result in an elongation at failure which is smaller than the static one or during unloading (see <cit.>). The TB criterion also predicts a delayed failure, but at t_f=u_s/r+(3Du_s^2/r^2)^(1/3) according to (<ref>). The difference with the IT case is that, now, an oscillating load which is strong enough to break the spring in statics will also do it in dynamics. Notice in fact that a constantly increasing cumulated damage would sooner or later surpass D (see left-hand side in Eq.(<ref>)).

The rest of the paper is devoted to the model of a fracture in a structured medium subjected to the aforementioned criteria and their effects on the stable regimes of propagation.

§ BACKGROUND

Consider an infinite number of masses M linked to each other and to a rigid substrate via linear elastic springs of length 1 and stiffness k. A force F, applied at infinite distance, introduces energy into the system and finally breaks the links causing the crack to propagate to the right in Fig. <ref>. If x_n(t) is the link that fails at the time t, the equilibrium for a generic oscillator at x=x_i in terms of its displacement u_i(t) holds as

(M/k) d^2u_i/dt^2=u_i+1+u_i-1-(2+H(i-n))u_i,

where the Heaviside step function H(i-n) allows for the combination of the equation for the detached (i<n) and intact (i≥ n) parts of the chain. Only fracture with a constant speed v, i.e. steady-state fracture, is analyzed here. In such a way the problem is reduced to the long known settings of <cit.>. The fracture can travel slower than sound in the broken structure, that is v<v_c, v_c=√(k/M). As a result of the steady-state assumption we search for a solution in terms of the unknown function u(x_i-vt)=u_i(t), for any i and t>0. Further on we adopt a coordinate system which moves together with the crack tip, η=x-x_n(t)=x-vt, in a way that the crack tip sits conveniently always at η=0. By using such a new moving frame, the coordinate η accounts for time and position simultaneously. Thus, the equation of motion Eq.(<ref>) for u(η) can be written in the broken (η<0) and intact (η≥ 0) sides of the tip as:

d^2u(η)/dη^2=[u(η+1)+u(η-1)-(2+H(η))u(η)]/(v^2/v_c^2).

With the help of the mathematical tools of the Fourier transform and the Wiener-Hopf technique, such an equation has been repeatedly solved for this and more complex structures, for example in <cit.>, so as to give the displacement profile u(η) which travels along the structure at a given steady-state crack speed v. If one intends to describe the trajectory of a single mass during time, one has to just apply Eq.(<ref>) (see the short sketch below).
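For illustration, reconstructing a mass trajectory from a sampled steady-state profile reduces to a single interpolation. The profile used below is a made-up placeholder, not the actual Wiener-Hopf solution.

```python
import numpy as np

def mass_trajectory(eta_grid, u_profile, x_i, v, times):
    """Recover the time history u_i(t) of the mass at x_i from a sampled
    steady-state profile u(eta), using u_i(t) = u(x_i - v*t)."""
    return np.interp(x_i - v * times, eta_grid, u_profile)

# toy profile: decay ahead of the tip, oscillatory tail behind it (placeholder only)
eta = np.linspace(-40, 40, 4001)
u = np.where(eta >= 0, np.exp(-0.3 * eta),
             1 + 0.4 * np.sin(0.8 * eta) * np.exp(0.05 * eta))
t = np.linspace(0, 60, 600)
u5 = mass_trajectory(eta, u, x_i=5.0, v=0.5, times=t)
print(u5[:3])
```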
The concept of energetic lattice trapping of a structured material was originally introduced in <cit.> and it can be quantified by the ratio G/G_0, not smaller than unity. Such energy may be introduced into the system in different ways. Analytical relations for the energy release rate G for every crack velocity have been retrieved in <cit.>. The quantity G_0=k u_0^2/2 is the link strain energy which is released locally at the crack tip at the moment of fracture, where u_0=u(0). For the examined chain, G_0/G has been plotted in Fig. <ref>. In our work we assume that the energy derives from a constant force F far away, and the consequent crack speed has been recently derived in <cit.> as

F/(k u_0)=√((2G/G_0)(v_c+v)/(v_c-v)):=Φ(v)

and plotted in Fig. <ref>. Note that the same curve also implies that the limiting velocity v_c cannot be reached via a finite force, besides requiring an infinite energy release rate (as from Fig. <ref>).

The assumption that the crack propagates at a constant speed also requires some additional consideration. In particular it means, for a given oscillator i sitting at x_i, that it is not allowed to break before all the links situated at x<x_i do (links on the left-hand side of Fig.<ref>). From the propagation point of view, it must be clarified that a regime which involves nucleation of daughter cracks ahead of the mother crack tip (η>0) is non-admissible. The detachment of the chain has to progress continuously. We shall discuss in the next sections how drastically the failure criteria change the admissible scenarios of stable detachment velocity.

§.§ Ideally brittle links

If the links are ideally brittle, the critical condition to be reached at the crack tip at the instant of fracture before further propagation, u_0=u_s, is independent of the fracture propagation speed. With such a condition, that is when u_0 and u_s are interchangeable, the diagrams of force and energy release rate in Figs. <ref> are directly applicable. Furthermore, in this case, the condition of admissibility is easily checked, i.e., that no points for η>0 are lifted higher than the crack tip. Looking at Fig. <ref>, one can notice that for such materials, configurations occurring at low v are unphysical since there are points ahead of the crack tip where the failure criterion has been encountered already before the arrival of the fracture front itself, and thus must be labelled as not admissible. Discussions on the matter have been dealt with in <cit.>. For example, the speed 0.2 v_c does not fulfill such a requirement, and thus must be discarded as non-admissible. On the contrary, the speeds 0.3 v_c and 0.47 v_c are admissible. Observing all the u(η) profiles, for the isotropic chain the minimum velocity of the crack corresponds to about 0.27 v_c and all larger subsonic velocities are admissible (see Fig. <ref>). It is perhaps worth pointing out that such a limit is smaller than the velocity of minimum energy release rate, which sits at about 0.38 v_c. This implies that a single G may correspond to two possible steady-states like 0.3 v_c and 0.47 v_c. Such speeds, anyway, correspond to two different loads (see Fig. <ref>). The highest of the two speeds is achieved uniquely by means of a larger force.

§ PROBLEM AND METHODS

When dealing with non-instantaneous criteria for fracture, the crack opening before fracture depends on the crack speed.
In general, u(0,v)=u_0(v) must be determined according to the new fracture parameters and does not simply equal u_s as for the ideal brittle criterion Eq.(<ref>). In this way, also the energy released locally at the crack tip, G_0(v)=k u^2_0(v)/2, is a function of the crack speed. Further on we express F as a multiple of the force required to break the spring in a static test via the function

F/(k u_s)=Φ(v)u_s/u_0(v),

which differs from the general Eq.(<ref>), where the denominator incorporates the elongation at failure, independently of the particular fracture criterion adopted. The reason is conceptual and follows from the possibility of conducting experiments. For obtaining the same crack speed, the loading condition, indeed, must be accurately designed depending on how the failure happens (i.e., within the context of this paper, which fracture criterion better describes the constituent material). We stick to the ratio G/G_0, instead of introducing a hypothetical G_s=k u^2_s/2, because the energy release rate incorporates the type of the structure and its deformation properties (in the present case a linear elastic chain) and can hardly be measured. Moreover, in this way, as we will discuss further in the continuation, the dependency of G/G_0 remains untouched by the particular fracture criterion characterizing the links, while the force versus velocity relation depends on the particular criterion.

§.§ Incubation time criterion

The incubation time failure criterion Eq.(<ref>) can be applied to the chain by a change of variable according to Eq.(<ref>), which leads to

Ψ(v,τ):=1/(vτ)∫_0^vτ u(η,v)/u_0(v) dη=u_s/u_0.

The normalization by the displacement at the crack tip u_0 is convenient because, in the ideally brittle case, the crack opening u_0 before fracture was known in every case and given by the maximum elongation criterion, while now it is unknown and dependent on velocity. The shape u(η,v)/u_0(v) of the deformation profile, though, is given once and for all as it does not depend on the particular value of the crack opening. The advantage is that, once one calculates the shape at a certain velocity from the solution of <cit.>, this can be used for all the possible steady-state fracture criteria (a quadrature sketch is given below). If τ goes to zero, that is the material is ideally brittle, Eq.(<ref>) returns u_0=u_s coherently. In this respect, one can say that IT materials are an extension of ideally brittle ones by means of τ. Moreover, for steady-state propagation, the length vτ is constant in time and thus incubation time can be considered as a non-local criterion as well as a non-instantaneous one.
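A quadrature sketch of the last equation: given a sampled shape u(η,v)/u_0(v), Ψ follows from a trapezoidal average over the window [0,vτ]. The exponential placeholder shape below is an assumption for illustration only; in practice the shape comes from the Wiener-Hopf solution.

```python
import numpy as np

def psi_IT(eta, shape, v, tau):
    """Numerical Psi(v, tau): average of the normalized profile u(eta)/u0
    over the window [0, v*tau] ahead of the tip (trapezoidal rule)."""
    window = np.linspace(0.0, v * tau, 2001)
    vals = np.interp(window, eta, shape)
    return np.trapz(vals, window) / (v * tau)

# with a sampled shape u(eta)/u0, the crack opening and load then follow
# as u0 = us/Psi and F = k*us*Phi(v)/Psi
eta = np.linspace(0.0, 40.0, 4001)
shape = np.exp(-0.3 * eta)          # placeholder shape, for illustration only
print(psi_IT(eta, shape, v=0.4, tau=3.0))
```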
§.§ Tuler-Butcher criterion

In order to deal with the usual moving coordinate frame Eq.(<ref>) and a steady-state regime of velocity v, the TB criterion Eq.(<ref>) can be transformed into the equation

∫_0^+∞ [H(u(η,v)-u_s)/(vD)] (u(η,v)-u_s)^2/u_0(v)^2 dη=u_s^2/u_0^2.

Taking advantage of the invariance of u(η,v)/u_0(v) with respect to η, the function Λ(v,D)=u_s/u_0 can be obtained as the solution of Eq.(<ref>).

§ RESULTS

In order to have dimensionless strength parameters, from this section on we express u_s in units of the distance between the masses, whereas D and τ are expressed in units of the same distance divided by the sound velocity v_c.

§.§ Incubation time criterion

The steady-state analogue of the incubation time criterion in Eq.(<ref>), which also defines the function Ψ(v,τ), solves the issue of calculating the crack opening, given τ and u_s:

u_0=u_s/Ψ(v,τ).

Speaking of the force to apply for achieving a certain steady-state velocity, substituting Eq.(<ref>) into Eq.(<ref>) gives

F/(k u_s)=Φ(v)/Ψ(v,τ).

The behavior of the function Ψ(v,τ) influences the way τ modifies the crack opening with respect to the ideally brittle one (τ→ 0) and this is shown in Fig.<ref>. A linear elastic bond which exhibits a non-zero incubation time will in general allow a bigger crack opening at the instant of fracture. Crack openings smaller than u_s, though, are admissible at low velocities due to rapid oscillations and negative ∂ u/∂ t close to the tip (see Fig.<ref>). The influence of τ on the force is plotted in Fig.<ref>. Given the result of the static test on the spring u_s, if the goal is achieving a certain velocity v, an IT-type material predicts that the steady-state regime would be reached in general via a bigger force than one could expect if τ is neglected. The region where the relation between force and velocity is not bijective is stretched to the right and the difference in velocities for the same force decreases steadily while raising the incubation time. Note that τ does not play any role in the limiting case of zero velocity of propagation, where the force produces the same results as for the ideally brittle spring (see Eq.(<ref>) for v→ 0).

In order to verify the analytical solution, we searched for the steady-state regimes by solving Eq.(<ref>) with a finite difference scheme similarly to <cit.>, where the same numerical procedure is extensively explained (a schematic implementation is sketched below). A chain of 2000 masses was loaded with a distant vertical constant force. The instant of fracture for a link was identified according to the condition Eq.(<ref>). After iterating, the next failures tended to occur at a constant time pace and such an interval was used for calculating the stable crack speed for a given force. The analytical solution is perfectly matched and the numerical approach confirms that a steady-state propagation is not achievable at low velocities (results not shown). This kind of computation is quite heavy because of the algorithm adopted to identify the time of fracture at every location x_i. Before a steady fracture propagation is reached, indeed, during the transient regime, the history u_i(t) in the last interval τ must be recorded and the integral Eq.(<ref>) updated for every x_i. This marks a principal difference with the instantaneous traditional criterion, in which case the quick check u≤ u_s is sufficient.
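A schematic version of such a transient computation is sketched below, with the IT check implemented as a rolling-window average. The loading on the first mass, the free-end treatment, the time integrator, and all parameter values are simplifying assumptions rather than the exact setup used for the verification described above.

```python
import numpy as np

def run_chain_IT(N=2000, F=1.5, us=1.0, tau=3.0, dt=0.05, steps=20000):
    """Schematic chain transient (units k = M = 1, so v_c = 1): symplectic-Euler
    stepping plus the IT integral check at every intact substrate bond.
    Returns the times at which successive bonds fail."""
    u = np.zeros(N); w = np.zeros(N)
    intact = np.ones(N, dtype=bool)
    nwin = max(int(round(tau / dt)), 1)
    hist = np.zeros((nwin, N)); ssum = np.zeros(N)   # rolling window for the IT average
    fail_times, t = [], 0.0
    for s in range(steps):
        upad = np.concatenate([[u[0]], u, [u[-1]]])  # crude free-end conditions
        acc = upad[2:] + upad[:-2] - 2.0 * u - np.where(intact, u, 0.0)
        acc[0] += F                                  # driving force on the detached side
        w += dt * acc; u += dt * w; t += dt
        ssum += u - hist[s % nwin]; hist[s % nwin] = u
        broke = intact & (ssum / nwin >= us)         # trailing average reaches u_s
        if broke.any():
            intact[broke] = False
            fail_times.extend([t] * int(broke.sum()))
    return np.array(fail_times)

ft = run_chain_IT()
print("late-time failure intervals (~1/v in steady state):", np.diff(ft)[-5:])
```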
Nevertheless, if one estimates the instantaneous crack velocity ṽ(t) from the last two failures, the criterion can be artificially reduced to the check u≤ u_0(ṽ,τ), avoiding the calculation of the integral in Eq.(<ref>) altogether. For the particular problem studied, such an attempt proved effective beyond expectations, achieving the same steady states as the theory and as the rigorous numerical simulation for the same applied load.

Another crucial effect of τ>0 on the crack propagation is that it monotonically enlarges the regions of achievable steady states, as illustrated in Fig. <ref>. For instance, the speed 0.2 v_c, which is non-admissible for an ideally brittle material with critical elongation parameter u_s, can be reached for τ=3 and larger.

§.§ Tuler-Butcher criterion

By means of Eq.(<ref>), one can retrieve the crack opening u_0=u_s/Λ(v,D) associated with any combination of crack speed and D. As a consequence, keeping the force proportional to the material property u_s instead of the velocity-dependent crack opening, Eq.(<ref>) becomes F/k u_s=Φ(v)/Λ(v,D).

The plots in Fig.<ref> allow us to visualize how the dynamic strength parameter D affects the chain behavior. As observed for IT materials, the immediate impact of TB damage accumulation is an increased crack opening, at equal crack speed, with respect to an ideally brittle material of the same static strength u_s, at least in the range of medium and high v/v_c (see Fig.<ref>). The force needed to obtain a desired velocity is depicted in Fig.<ref>. It is evident that the capability of the material to bear a certain work of the overstretch before failure, i.e. a larger D, makes the chain detachment increasingly slower for the same F/k u_s. A structure of TB bonds can therefore be predicted to be dynamically tougher than its ideally brittle counterpart.

For D→ 0 and low fracture speed, the complex structure does not respond like an ideally brittle one. Looking at Fig.<ref>, indeed, one can notice that for low v, i.e. in the range where u(η) does not decrease monotonically ahead of the crack tip, the criterion in the form Eq.(<ref>) may return u_0<u_s. Such a feature differs from all the other cases analyzed in this work: single bonds, as well as the ideally brittle and IT crack tips, always fail for u≥ u_s. However, in the cases where the latter inequality is violated in a TB link, we obtain non-admissible steady-state propagation regimes. A point can therefore be made that a theoretical limit exists for the crack opening of a TB material: u_0≥ u_s.

As for incubation time materials, and as can be seen in Fig. <ref>, new zones of admissibility appear in the low-velocity region for larger D. Nevertheless, there is a significant qualitative difference from the incubation time situation: the admissible intervals first emerge small and scattered, but then, with increasing D, they gradually expand and merge until every subsonic crack speed can be obtained for D close to unity.

§ DISCUSSION AND CONCLUSIONS

Dynamic fracture propagation in discrete structures has been investigated in a considerable number of scenarios (see references above), but the influence of failure criteria other than a threshold stress has not gained the attention it deserves, despite the fact that non-instantaneous criteria have already been shown to be reliable in continuum mechanics (e.g., as recently discussed in <cit.>).
As a first step toward filling this gap, two time-dependent criteria have been analyzed in detail as applied to the dynamic fracture propagation of a chain of oscillators, and they have been compared to classical ideally brittle fracture.

In both cases, enhanced admissibility appears at low crack speeds, as mapped in Fig. <ref>. An increasing incubation time τ enlarges the admissible region continuously but never covers all the subsonic crack speeds. Beyond that, a TB material which requires a larger work of the overstretch for fracture, i.e. characterized by a larger D, also creates completely new zones of achievable steady states, and the whole subsonic range is predicted to be possible if D≳ 0.13.

Regarding the steady-state crack opening, the time-dependent criteria cause a delay in fracture after the static strength of the bonds is reached. This means that in most cases one should expect u_0>u_s, as would happen when monotonically elongating a single spring. At low v, though, this is not the behavior produced by a constant force applied to a complex structure: ample and rapid oscillations ahead of the crack tip cause the delayed fracture to happen at u_0<u_s. While such propagation regimes are admissible at high τ for incubation materials, the same is not true for TB ones (see Figs.<ref>-<ref>). The mathematical form of the latter failure criterion indeed excludes such steady states, on the grounds that daughter cracks would jeopardize the steady-state assumption. In short, a theoretical limit has been found which states that, for a TB chain, a dynamic fracture can propagate at constant speed only if u_0≥ u_s.

As intuition suggests, the two examined non-instantaneous criteria make the structure tougher than the corresponding ideally brittle one with the same static strength u_s. Thus, to obtain a certain velocity v, one needs a larger force if τ or D increases. In this way, the curves in Figs.<ref>-<ref> also have an important practical application. With a few experiments on materials whose u_s has been independently measured, the pairs F-v allow for the characterization of the material in terms of the second fracture parameter, τ or D, at least if stable crack speeds are retrieved in the monotonic interval of the curves (medium and high v/v_c).

The velocity-dependent energy release rate ratio G/G_0 is a solution which is independent of the particular fracture criterion adopted. It is also valid regardless of the way the energy is introduced into the system. In the present work we use a constant force as the external load. However, in case one prefers to implement a different kind of loading, as for instance in <cit.> when dealing with lattices, or for example to facilitate a specific experimental procedure, the relation between the new load and the crack speed has to be evaluated, while the energy versus crack speed diagram remains the same.

Two numerical integration schemes have been used to solve the set of governing equations in the IT scenario and compared with the analytical solution derived in the present paper. The first of these verified the criterion condition in its integral form Eq.(<ref>) at every time step for all the unbroken bonds. Such an approach also enables one to simulate the transient regimes before a steady state is reached. The steady states were achieved only in the admissible regions of Fig.<ref>, and there the force-velocity relations agreed perfectly with the ones in Fig.<ref>.
The second test was performed by adopting the pseudo-instantaneous failure criterion: a dynamic threshold elongation u_0(ṽ,τ) was established based on the instantaneous crack velocity ṽ and Eq.(<ref>). This simplified algorithm performed much faster than the first one, and still returned correct results. For this specific structure, at least, it thus seems that many of the conclusions drawn here can remain valid in the transient regimes.

Beyond the particular scope of this paper, various propagation regimes, in the absence of crack arrest, can appear: steady-state, other regular ones (clustering or forerunning, as discussed in <cit.>) or chaotic regimes. The realization of one or another depends heavily on the loading type, its intensity and the structure itself. However, when the problem is faced from a mathematical point of view, assumptions of steady-state regimes have always been made in order to obtain simple solutions. With the analytical results in hand, an a posteriori examination is required to identify where the solution fulfills the assumptions and constraints: only that part of the solution is labeled as admissible. Generally speaking, though, this does not mean that all those regimes will emerge in practice as steady states. In most cases they do; nevertheless, as has been shown in numerical simulations, other ordered regimes of propagation may arise, such as clustering. In such circumstances, the theoretically predicted steady-state velocity turns out to be the average speed at which the cluster moves (see <cit.> on the matter).

We have not treated the problem of branching in the present settings of history-dependent criteria. It has already turned out in <cit.> that such instabilities can become relevant at high crack speeds. For the geometry, loading condition and material parameters considered, branching mostly happens along the crack surfaces but not on the crack line ahead. That is evident from the solution profile u(η) for v/v_c=0.2 in Fig.<ref>. If the horizontal springs show the same dynamic resistance as the vertical ones, their fracture can precede the chain detachment from the substrate, making steady-state propagation impossible and appreciably reducing the limiting speed with respect to v_c. The admissibility check would then require that, for none of the consecutive oscillators, the difference u_i+1 - u_i reaches the condition imposed by the fracture criterion. More complex scenarios may occur with structures characterized by flexural stiffness, heterogeneities or localized feeding waves <cit.>. Furthermore, a steady-state regime can be unattainable, resulting in unstable or alternating velocities, when the structure is not loaded far from the crack tip but via accumulated energy in the form of residual stresses of the bonds <cit.>. Complications have also been the object of investigation in the framework of bridged cracks <cit.>.

In conclusion, the fracture criteria considered here appreciably affect the dynamic propagation of cracks in discrete structures. The effects are particularly important both in terms of force versus velocity relations and in terms of new regimes of admissibility at low crack speeds.
Succinctly, the present results allow us to underline two main messages:

• it is in the low-speed regimes that an experimental investigation should be carried out most carefully, in order to understand whether it is necessary to incorporate history-dependent fracture criteria in the dynamic fracture model;

• the energy release rate ratio and the shapes of the displacement profiles as functions of the velocity are invariants, in linear theory, and can promptly be used and adapted to the fracture criterion most suitable for the analyzed problem.

A possible outlook of this research is the application of the approach to a) more complex lattice structures (inhomogeneous, triangular ones), as in <cit.>; b) highly ordered two-dimensional lattices, for instance crack propagation in graphene layers <cit.>; or c) the unbinding of long protein chains, whose analysis has been made feasible by the improvements in the field of atomic force microscopy, and for which the bond strength has already been shown to depend strongly on the strain rate <cit.>. As a discretization, the model considered here can also be useful for modeling the peel test of flexible films (see e.g. <cit.>).

§ ACKNOWLEDGEMENTS

N. Gorbushin and G. Vitucci acknowledge support from the EU project CERMAT2 (PITN-GA-2013-606878); G. Mishuris and G. Volkov are thankful to the EU project TAMER (IRSES-GA-2013-610547). G. Mishuris also thanks the UK Royal Society Wolfson Research Merit Award, while G. Volkov also received support from RFBR (17-01-00618, 16-51-53077). We wish to express our gratitude to Michael Nieves for interesting discussions.
http://arxiv.org/abs/1704.08222v3
{ "authors": [ "Nikolai Gorbushin", "Gennaro Vitucci", "Grigory Volkov", "Gennady Mishuris" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170426172126", "title": "Influence of fracture criteria on dynamic fracture propagation in a discrete chain" }
Compressive Sensing Approaches for Autonomous Object Detection in Video Sequences
Danil Kuzin, Olga Isupova, Lyudmila Mihaylova
December 30, 2023
=========================================================================================================================

Video analytics requires operating with large amounts of data. Compressive sensing makes it possible to reduce the number of measurements required to represent the video, using prior knowledge of the sparsity of the original signal, but it imposes certain conditions on the design matrix. The Bayesian compressive sensing approach relaxes the limitations of the conventional approach through probabilistic reasoning, and allows different prior knowledge about the signal structure to be included. This paper presents two Bayesian compressive sensing methods for autonomous object detection in a video sequence from a static camera. Their performance is compared on real datasets with a non-Bayesian greedy algorithm. It is shown that the Bayesian methods can provide the same accuracy as the greedy algorithm but much faster; or, if the computational time is not critical, they can provide more accurate results.

§ INTRODUCTION

Significant developments in the field of sparse models over the last decades have opened new research and application fields. One of the first applications of sparse modelling is the linear regression problem, where l_0 and l_1-norm regularisation are considered. The latter has the advantage that the regulariser term is convex, although its sparsity interpretation is less obvious <cit.>. Sparse modelling is further developed in the field of signal processing in compressive sensing <cit.>, where the main idea is to minimise the number of measurements of the signal without loss of decoding accuracy. Compressive sensing concerns two main problems: selecting the optimal design matrix, and solving the ill-posed regression that arises when decoding the original signal from the measurements <cit.>.

The idea of sparse Bayesian modelling is mentioned in <cit.>. This imposes the sparsity-inducing Laplace prior on the data, but does not give the inference for the whole distribution, only a maximum a posteriori probability estimate. Full inference for this model is provided in <cit.>, using the Expectation Propagation (EP) technique. Another work is <cit.>, where the prior is modified to the hierarchical Gauss-Gamma distribution. These models are used as a basis for Bayesian compressive sensing in <cit.> and <cit.>. The recent monograph <cit.> presents applications of sparse modelling to image and video processing.

One of the essential problems in video processing is foreground detection, which is mostly solved by background subtraction. Background subtraction aims to distinguish foreground (moving objects) from background (static ones). Sparseness is natural for the background subtraction problem, as the foreground objects occupy only small regions of a frame. Background subtraction hence represents a natural application area for sparse modelling. The idea of applying compressive sensing to background subtraction was originally proposed in <cit.> and developed in <cit.>. In contrast to these works, in this paper we focus on sparse Bayesian methods for background subtraction and on a comprehensive comparison of these methods with the conventional compressive sensing approach. The contribution of this paper is the application of the Bayesian compressive sensing approach to the background subtraction problem.
To the best of the authors' knowledge, this approach to moving object detection has not been considered before. In addition, several algorithms are reviewed and compared to evaluate their applicability in different situations.

This paper is organised as follows. In Section <ref> the proposed model is explained. The experimental results are presented in Section <ref>. Section <ref> concludes the paper and discusses future work.

§ FRAMEWORK

Assume that we have a static camera and we can acquire a frame 𝐁∈ℝ^n_1 × n_2 from the camera that is referenced as the background. The video from the camera consists of the sequential frames 𝐕_k ∈ℝ^n_1 × n_2, k∈{1,…,K}. The aim is to estimate the mask of the foreground objects in these frames.

§.§ Video preprocessing

We convert the source video frames to greyscale. The background frame 𝐁 is converted to a vector 𝐛∈ℝ^n, and the video frames 𝐕_k are converted to vectors 𝐯_k ∈ℝ^n, where n=n_1 n_2.

§.§ Compressive sensing

Typically the foreground objects take up only a part of the image. Therefore the foreground mask 𝐟_k=𝐯_k-𝐛 has many values that are close to zero. This intuition can be represented as the assumption ||𝐟_k||_l_0 ≤ s ≪ n, where the l_0-pseudonorm ||·||_l_0 is the number of non-zero elements of a vector. We apply compressive sensing theory to this problem. It reduces the number of measurements that need to be taken <cit.>, and the results may also be denoised <cit.>. The values of the foreground mask are estimated based on the set of compressed measurements 𝐠_k ∈ℝ^s: 𝐠_k = Φ𝐟_k, where the design matrix Φ∈ℝ^s × n consists of i.i.d. Gaussian variables. It is selected according to the method proposed in <cit.>.

Since 𝐟_k=𝐯_k-𝐛, the coefficients 𝐠_k can be computed at the acquisition step as 𝐠_k = Φ𝐟_k = Φ𝐯_k - Φ𝐛. The vectors Φ𝐛 and Φ𝐯_k are linear combinations of the pixels of the video frames; therefore a single-pixel camera may be used.

The problem of reconstructing the foreground mask is more difficult. The linear system (<ref>) is underdetermined when n > s, and therefore infinitely many solutions exist. The problem can be made well-posed by imposing a regulariser under the assumption that the signal 𝐟_k has a sparse structure. The common regularisers used in compressive sensing minimise the l_p norm, where p < 2.

The conventional methods for solving such systems are the following <cit.>:

* l_0-minimisation. Greedy algorithms based on least squares estimates, stochastic search, variational inference;

* l_1-minimisation. Coordinate descent, LARS, the proximal and gradient projection methods;

* Non-convex minimisation. Bridge regression, hierarchical adaptive lasso.

In this paper we focus on the Bayesian methods <cit.> and compare them with orthogonal matching pursuit (OMP) <cit.>, which is a greedy algorithm for l_0-minimisation. The following is a brief review of these methods.

§.§.§ Bayesian compressive sensing (BCS)

The system (<ref>) is reformulated as a linear regression model in <cit.>: 𝐠_k = Φ𝐟_k + ξ, where ξ is a vector whose elements are independent noise samples from the Gaussian distribution: ξ_i∼𝒩(ξ_i;0,β^-1).
Therefore the likelihood can be expressed as

p(𝐠_k | 𝐟_k, β) = ∏_i=1^s 𝒩(g_i,k; Φ_i𝐟_k, β^-1),

where g_i,k is the i-th element of the vector 𝐠_k and Φ_i is the i-th row of the matrix Φ.

To implement the full Bayesian approach, prior distributions are imposed on all parameters:

p(𝐟_k | α) = ∏_i=1^n 𝒩(f_i,k; 0, α_i^-1),

where f_i,k is the i-th element of the vector 𝐟_k, α is a prior parameter vector, and α_i is the i-th element of the vector α;

p(α) = ∏_i=1^n Γ(α_i;a,b), p(β) = Γ(β;c,d),

where Γ(·) denotes the Gamma distribution. The values of the hyperparameters a, b, c, d are set close to zero, which makes the hyperpriors nearly uniform. According to the Bayes rule, the posterior distribution can be written as follows:

p(𝐟_k, α, β | 𝐠_k) = p(𝐠_k|𝐟_k, α, β) p(𝐟_k, α, β) / p(𝐠_k),

where p(𝐠_k|𝐟_k, α, β) is the likelihood term, p(𝐟_k, α, β) is the prior term, and p(𝐠_k) is the evidence term. The latter can be expressed as:

p(𝐠_k) = ∫_𝐟_k, α, β p(𝐠_k|𝐟_k, α, β) p(𝐟_k, α, β) d𝐟_k dα dβ.

This integral is intractable; therefore some kind of approximation should be used.

In Bayesian compressive sensing <cit.> the posterior probability is decomposed into the product of a tractable and an intractable probability, and the intractable one is approximated with a delta function at its mode:

p(𝐟_k, α, β | 𝐠_k) = p(𝐟_k |𝐠_k, α, β) p(α, β | 𝐠_k).

The Bayes rule for the first term of (<ref>) is as follows:

p(𝐟_k | 𝐠_k, α, β) = p(𝐠_k | 𝐟_k, β) p(𝐟_k | α) / p(𝐠_k | α, β).

These are all Gaussians, so the probability p(𝐟_k | 𝐠_k, α, β) can be calculated straightforwardly. It is the Gaussian distribution with the parameters

Σ = (βΦ^⊤Φ + 𝐀)^-1, μ = βΣΦ^⊤𝐠_k,

where 𝐀 = diag(α_1, …, α_n).

The second term of the posterior probability (<ref>) can be expressed as:

p(α, β | 𝐠_k) = p(𝐠_k | α, β) p(α) p(β) / p(𝐠_k).

As has already been shown, the denominator here is not tractable, so the most probable values of α, β are used. The hyperpriors are nearly uniform; therefore only the term p(𝐠_k | α, β) needs to be maximised:

p(𝐠_k | α, β) = ∫ p(𝐠_k |𝐟_k, β) p(𝐟_k | α) d𝐟_k.

Maximisation of (<ref>) w.r.t. α, β gives the following iterative process:

α_i^new = γ_i/μ_i^2, (β^-1)^new = ||𝐠_k - Φμ||^2_l_2 / (s - ∑_i γ_i),

where γ_i = 1 - α_iΣ_ii. This process, together with (<ref>) - (<ref>), converges to the optimal estimates.

Note that

p(f_i,k) = b^a Γ(a+1/2) / ((2π)^1/2 Γ(a)) · (b + f_i,k^2/2)^-(a+1/2).

This is a Student-t distribution, whose probability mass is concentrated around zero; it therefore leads to a sparse vector 𝐟_k. The graphical model is displayed in Figure <ref>.

§.§.§ Multitask Bayesian compressive sensing (Multitask BCS)

In <cit.> a Bayesian method is proposed for processing several signals that have a similar sparse structure. The multitask setting reduces the number of measurements that need to be taken compared to processing all the signals independently. The hyperparameter α is shared by all the tasks. The graphical model is displayed in Figure <ref>.

§.§.§ Matching Pursuit

Greedy algorithms for l_0-minimisation are proposed in <cit.>. These methods start with a null vector and iteratively add variables to it until convergence to a threshold.

§ EXPERIMENTS

We use the Convoy dataset <cit.>, which consists of 260 greyscale frames and the background frame. The frames are downscaled to a resolution of 128 × 128 to avoid memory problems. For the multitask algorithm, batches of 40 frames are run together, while for the Bayesian compressive sensing and OMP algorithms all the frames are processed independently.
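For reference, the core re-estimation loop of the BCS model described above can be written down compactly. The following is a minimal NumPy sketch of the updates for Σ, μ, α and β; it is illustrative only (the experiments below use a MATLAB implementation), and the variable names, iteration count, and small stabilizing constants are ours.

```python
import numpy as np

def bcs_estimate(Phi, g, n_iter=100, eps=1e-12):
    """Sketch of BCS: alternate the Gaussian posterior (Sigma, mu)
    with the hyperparameter updates for alpha and beta."""
    s, n = Phi.shape
    alpha = np.ones(n)                                   # per-coefficient precisions
    beta = 1.0                                           # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ g
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + eps)                  # alpha_i^new = gamma_i / mu_i^2
        beta = (s - gamma.sum()) / (np.linalg.norm(g - Phi @ mu) ** 2 + eps)
    return mu, Sigma

# Usage for one frame: g = Phi @ v - Phi @ b, then f_hat, _ = bcs_estimate(Phi, g).
```

Coefficients whose precisions α_i grow very large are effectively driven to zero, which is how the Student-t prior manifests itself as a sparse reconstruction of the foreground mask.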
There are two sets of experiments: one with s = 2000 measurements and the other with s = 5000 measurements. For both sets of experiments, all three methods are run 10 times with 10 different design matrices Φ, shared among the methods. For the quantitative comparison, the median values of the quality measures among these runs are presented.

The qualitative comparison of the models with the same design matrix Φ is displayed in Figures <ref> - <ref>, where three demonstrative frames are presented. One can notice that, with the same design matrix, the models produce similar results. The figures show that 2000 measurements can be used for object region detection, while 5000 measurements, which is only about 30% of the input resolution, are enough even to distinguish parts of the objects, such as doors and windows of the cars.

For the quantitative comparison of the results the following measures are used:

* Reconstruction error: ||𝐟 - 𝐟̂||_l_2 / ||𝐟||_l_2, where 𝐟 is the ground truth signal and 𝐟̂ is the signal reconstructed by the algorithm;

* Background subtraction quality measure (BS quality): |S(𝐟) ∩ S(𝐟̂)| / |S(𝐟) ∪ S(𝐟̂)|, where S(𝐟) is the set of ground truth foreground pixels, S(𝐟̂) is the set of foreground pixels detected by the algorithm, and |·| is the cardinality of the set;

* Peak signal-to-noise ratio (PSNR): 10 log_10(peakval^2 / MSE), where peakval is the maximum possible pixel value, that is 255 in our case, and MSE is the mean square error between 𝐟 and 𝐟̂;

* Structural similarity index (SSIM): (2μ_𝐟μ_𝐟̂+C_1)(2σ_𝐟𝐟̂+C_2) / ((μ_𝐟^2 + μ_𝐟̂^2+C_1)(σ_𝐟^2 + σ_𝐟̂^2+C_2)), where μ_𝐟, μ_𝐟̂, σ_𝐟, σ_𝐟̂, σ_𝐟𝐟̂ are the local means, standard deviations, and cross-covariance for the images 𝐟, 𝐟̂, respectively, and C_1, C_2 are regularisation constants.

The difference between the uncompressed current frame 𝐯_k and the uncompressed background frame 𝐛 is used as the ground truth signal 𝐟 for every frame (the second columns in Figures <ref> - <ref>), since this is the signal which is compressed by (<ref>). The results are presented in Figure <ref>. All the quality measures – reconstruction error, BS quality, PSNR and SSIM – are calculated for every frame. The mean values among the frames for each measure can be found in Tables <ref> – <ref>.

The computational time is provided for a batch of 40 frames (BCS and OMP process each frame independently with 4 parallel workers, while multitask BCS processes all 40 frames together). The implementation runs on a laptop with an i7-4702HQ CPU at 2.20 GHz and 16 GB RAM, using MATLAB 2015a.

Multitask Bayesian compressive sensing demonstrates the best results according to almost every measure. Bayesian compressive sensing and OMP show competitive results, but Bayesian compressive sensing works faster. It is worth noting that multitask Bayesian compressive sensing has the largest variance among the runs with different design matrices, while the variances of the Bayesian compressive sensing and OMP runs for the same matrices are quite small.

§ CONCLUSIONS AND FUTURE WORK

This work presents two Bayesian compressive sensing algorithms applied to background subtraction: the conventional Bayesian compressive sensing algorithm and its multitask extension. The large size of the video frames leads to high computational times for all methods, as presented in Tables <ref> – <ref>.
However, the results presented in Figures <ref> – <ref> demonstrate adequate reconstruction quality of the original image from only 5000 measurements (that is, ≈ 30% of the original image size). The conventional Bayesian compressive sensing method demonstrates results similar to the greedy OMP algorithm, but BCS is more efficient in terms of computational time. If the computational time is not critical, the extension of the Bayesian method designed for the multitask problem can improve the performance in terms of the different measures. Therefore, other extensions of the Bayesian method that include prior information deserve further research.

The following problems can be addressed in future work. Further research can be done on implementing different sparse Bayesian methods. The EP-based framework with the Laplace prior proposed in <cit.> can be compared in terms of computational times and reconstruction errors; it uses a different inference scheme and prior, so the results should differ. The Markov Chain Monte Carlo (MCMC) <cit.> framework can also be added to the comparison.

The current methods assume that the components of the foreground intensities are not correlated. In most cases the foreground objects are grouped into several clusters, therefore more sophisticated sparsity models can be introduced to reflect the structure of the foreground. The Bayesian framework allows such modifications to be implemented. Exploring applications in video tracking is one more avenue for further research.

§ ACKNOWLEDGEMENTS

The authors Olga Isupova and Lyudmila Mihaylova are grateful for the support provided by the EC Seventh Framework Programme [FP7 2013-2017] TRAcking in compleX sensor systems (TRAX) Grant agreement no.: 607400. Lyudmila Mihaylova also acknowledges the support from the UK Engineering and Physical Sciences Research Council (EPSRC) via the Bayesian Tracking and Reasoning over Time (BTaRoT) grant EP/K021516/1.
http://arxiv.org/abs/1705.00002v1
{ "authors": [ "Danil Kuzin", "Olga Isupova", "Lyudmila Mihaylova" ], "categories": [ "cs.CV", "stat.ML" ], "primary_category": "cs.CV", "published": "20170427202433", "title": "Compressive Sensing Approaches for Autonomous Object Detection in Video Sequences" }
Multimodal Word Distributions
Ben Athiwaratkun, Andrew Gordon Wilson
December 30, 2023
==============================================================================

Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams and Gaussian embeddings, on benchmark datasets such as word similarity and entailment.

§ INTRODUCTION

To model language, we must represent words. We can imagine representing every word with a binary one-hot vector corresponding to a dictionary position. But such a representation contains no valuable semantic information: distances between word vectors represent only differences in alphabetic ordering. Modern approaches, by contrast, learn to map words with similar meanings to nearby points in a vector space <cit.>, from large datasets such as Wikipedia. These learned word embeddings have become ubiquitous in predictive tasks.

<cit.> recently proposed an alternative view, where words are represented by a whole probability distribution instead of a deterministic point vector. Specifically, they model each word by a Gaussian distribution, and learn its mean and covariance matrix from data. This approach generalizes any deterministic point embedding, which can be fully captured by the mean vector of the Gaussian distribution. Moreover, the full distribution provides much richer information than point estimates for characterizing words, representing probability mass and uncertainty across a set of semantics.

However, since a Gaussian distribution can have only one mode, the learned uncertainty in this representation can be overly diffuse for words with multiple distinct meanings (polysemies), in order for the model to assign some density to any plausible semantics <cit.>. Moreover, the mean of the Gaussian can be pulled in many opposing directions, leading to a biased distribution that centers its mass mostly around one meaning while leaving the others not well represented.

In this paper, we propose to represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'.
It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words, and for optimal performance on many predictive tasks. In particular, we model each word with a mixture of Gaussians (Section <ref>). We learn all the parameters of this mixture model using a maximum margin energy-based ranking objective <cit.> (Section <ref>), where the energy function describes the affinity between a pair of words. For analytic tractability with Gaussian mixtures, we use the inner product between probability distributions in a Hilbert space, known as the expected likelihood kernel <cit.>, as our energy function (Section <ref>). Additionally, we propose transformations for numerical stability and initialization (Section <ref>), resulting in a robust, straightforward, and scalable learning procedure, capable of training on a corpus with billions of words in days. We show that the model is able to automatically discover multiple meanings for words (Section <ref>), and significantly outperform other alternative methods across several tasks such as word similarity and entailment (Section <ref>, <ref>, <ref>). We have made code available at <http://github.com/benathi/word2gm>, where we implement our model in Tensorflow.

§ RELATED WORK

In the past decade, there has been an explosion of interest in word vector representations. word2vec, arguably the most popular word embedding, uses continuous bag of words and skip-gram models, in conjunction with negative sampling for efficient conditional probability estimation <cit.>. Other popular approaches use feedforward <cit.> and recurrent neural network language models <cit.> to predict missing words in sentences, producing hidden layers that can act as word embeddings that encode semantic information. They employ conditional probability estimation techniques, including hierarchical softmax <cit.> and noise contrastive estimation <cit.>.

A different approach to learning word embeddings is through factorization of word co-occurrence matrices, such as GloVe embeddings <cit.>. The matrix factorization approach has been shown to have an implicit connection with skip-gram and negative sampling <cit.>. Bayesian matrix factorization, where rows and columns are modeled as Gaussians, has been explored in <cit.> and provides a different probabilistic perspective on word embeddings.

In exciting recent work, <cit.> propose a Gaussian distribution to model each word. Their approach is significantly more expressive than typical point embeddings, with the ability to represent concepts such as entailment, by having the distribution for one word (e.g. `music') encompass the distributions for sets of related words (`jazz' and `pop'). However, with a unimodal distribution, their approach cannot capture multiple distinct meanings, much like most deterministic approaches.

Recent work has also proposed deterministic embeddings that can capture polysemies, for example through a cluster centroid of context vectors <cit.>, or an adapted skip-gram model with an EM algorithm to learn multiple latent representations per word <cit.>. <cit.> also extends skip-gram with multiple prototype embeddings, where the number of senses per word is determined by a non-parametric approach. <cit.> learns topical embeddings based on latent topic models, where each word is associated with multiple topics. Another related work by <cit.> models embeddings in infinite-dimensional space, where each embedding can gradually represent incremental word senses as complex meanings are observed.
Although independent of our work, we later found that <cit.> proposed a similar model to ours; however, our setup obtains significantly improved results on all evaluation metrics.

Probabilistic word embeddings have only recently begun to be explored, and have so far shown great promise. In this paper, we propose a probabilistic word embedding that can capture multiple meanings. We use a Gaussian mixture model, which allows for highly expressive distributions over words. At the same time, we retain scalability and analytic tractability with an expected likelihood kernel energy function for training. The model and training procedure harmonize to learn descriptive representations of words, with superior performance on several benchmarks.

§ METHODOLOGY

In this section, we introduce our Gaussian mixture (GM) model for word representations, and present a training method to learn the parameters of the Gaussian mixture. This method uses an energy-based maximum margin objective, where we wish to maximize the similarity of distributions of nearby words in sentences. We propose an energy function that complements the GM model by retaining analytic tractability. We also provide critical practical details for numerical stability, hyperparameters, and initialization.

§.§ Word Representation

We represent each word w in a dictionary as a Gaussian mixture with K components. Specifically, the distribution of w, f_w, is given by the density

f_w(x⃗) = ∑_i=1^K p_w,i 𝒩(x⃗; μ⃗_w,i, Σ_w,i) = ∑_i=1^K p_w,i/√(2 π |Σ_w,i|) e^{-1/2 (x⃗ - μ⃗_w,i)^⊤ Σ_w,i^-1 (x⃗ - μ⃗_w,i)},

where ∑_i=1^K p_w,i = 1. The mean vectors μ⃗_w,i represent the location of the i^th component of word w, and are akin to the point embeddings provided by popular approaches like word2vec. p_w,i represents the component probability (mixture weight), and Σ_w,i is the component covariance matrix, containing uncertainty information. Our goal is to learn all of the model parameters μ⃗_w,i, p_w,i, Σ_w,i from a corpus of natural sentences to extract semantic information of words. Each Gaussian component's mean vector of word w can represent one of the word's distinct meanings. For instance, one component of a polysemous word such as `rock' should represent the meaning related to `stone' or `pebbles', whereas another component should represent the meaning related to music such as `jazz' or `pop'. Figure <ref> illustrates our word embedding model, and the difference between multimodal and unimodal representations, for words with multiple meanings.

§.§ Skip-Gram

The training objective for learning θ = {μ⃗_w,i, p_w,i, Σ_w,i} draws inspiration from the continuous skip-gram model <cit.>, where word embeddings are trained to maximize the probability of observing a word given another nearby word. This procedure follows the distributional hypothesis that words occurring in natural contexts tend to be semantically related. For instance, the words `jazz' and `music' tend to occur near one another more often than `jazz' and `cat'; hence, `jazz' and `music' are more likely to be related. The learned word representation contains useful semantic information and can be used to perform a variety of NLP tasks, such as word similarity analysis, sentiment classification, modelling word analogies, or as a preprocessed input for complex systems such as statistical machine translation.

§.§ Energy-based Max-Margin Objective

Each sample in the objective consists of two pairs of words, (w,c) and (w,c'). w is sampled from a sentence in a corpus and c is a nearby word within a context window of length ℓ.
For instance, a word w = `jazz' which occurs in the sentence `I listen to jazz music' has context words (`I', `listen', `to', `music'). c' is a negative context word (e.g. `airplane') obtained from random sampling. The objective is to maximize the energy between words that occur near each other, w and c, and minimize the energy between w and its negative context c'. This approach is similar to negative sampling <cit.>, which contrasts the dot product between positive context pairs with negative context pairs. The energy function is a measure of similarity between distributions and will be discussed in Section <ref>.

We use a max-margin ranking objective <cit.>, used for Gaussian embeddings in <cit.>, which pushes the similarity of a word and its positive context higher than that of its negative context by a margin m:

L_θ(w, c, c') = max(0, m - log E_θ(w, c) + log E_θ(w, c')).

This objective can be minimized by mini-batch stochastic gradient descent with respect to the parameters θ = {μ⃗_w,i, p_w,i, Σ_w,i} – the mean vectors, covariance matrices, and mixture weights – of our multimodal embedding in Eq. (<ref>).

Word Sampling: We use a word sampling scheme similar to the implementation in word2vec <cit.> to balance the importance of frequent words and rare words. Frequent words such as `the', `a', `to' are not as meaningful as relatively less frequent words such as `dog', `love', `rock', and we are often more interested in learning the semantics of the less frequently observed words. We use subsampling to improve the performance of learning word vectors <cit.>. This technique discards word w_i with probability P(w_i) = 1 - √(t/f(w_i)), where f(w_i) is the frequency of word w_i in the training corpus and t is a frequency threshold.

To generate negative context words, each word type w_i is sampled according to a distribution P_n(w_i) ∝ U(w_i)^3/4, which is a distorted version of the unigram distribution U(w_i) that also serves to diminish the relative importance of frequent words. Both subsampling and the negative distribution choice are proven effective in word2vec training <cit.>.

§.§ Energy Function

For vector representations of words, a usual choice for a similarity measure (energy function) is a dot product between two vectors. Our word representations are distributions instead of point vectors and therefore need a measure that reflects not only the point similarity, but also the uncertainty.

§.§.§ Expected Likelihood Kernel

We propose to use the expected likelihood kernel, which is a generalization of an inner product between vectors to an inner product between distributions <cit.>. That is,

E(f,g) = ∫ f(x) g(x) dx = ⟨ f, g ⟩_L_2,

where ⟨·, ·⟩_L_2 denotes the inner product in the Hilbert space L_2. We choose this form of energy since it can be evaluated in closed form given our choice of probabilistic embedding in Eq. (<ref>).

For Gaussian mixtures f, g representing the words w_f, w_g, with f(x) = ∑_i=1^K p_i 𝒩(x; μ⃗_f,i, Σ_f,i) and g(x) = ∑_i=1^K q_i 𝒩(x; μ⃗_g,i, Σ_g,i), ∑_i=1^K p_i = 1, and ∑_i=1^K q_i = 1, we find (see Section <ref>) the log energy is

log E_θ(f,g) = log ∑_j=1^K ∑_i=1^K p_i q_j e^ξ_i,j,

where

ξ_i,j ≡ log 𝒩(0; μ⃗_f,i - μ⃗_g,j, Σ_f,i + Σ_g,j) = - 1/2 log det(Σ_f,i + Σ_g,j) - D/2 log(2π) - 1/2 (μ⃗_f,i - μ⃗_g,j)^⊤ (Σ_f,i + Σ_g,j)^-1 (μ⃗_f,i - μ⃗_g,j).

We call the term ξ_i,j the partial (log) energy. Observe that this term captures the similarity between the i^th meaning of word w_f and the j^th meaning of word w_g.
The total energy in Equation <ref> is the sum of all possible pairs of partial energies, weighted accordingly by the mixture probabilities p_i and q_j.

The term -(μ⃗_f,i - μ⃗_g,j)^⊤ (Σ_f,i + Σ_g,j)^-1 (μ⃗_f,i - μ⃗_g,j) in ξ_i,j explains the difference in the mean vectors of the semantic pair (w_f, i) and (w_g, j). If the semantic uncertainty (covariance) of both components is low, this term has more importance relative to other terms due to the inverse covariance scaling. We observe that the loss function L_θ in Section <ref> attains a low value when E_θ(w,c) is relatively high. High values of E_θ(w,c) can be achieved when the component means across different words, μ⃗_f,i and μ⃗_g,j, are close together (e.g., similar point representations). High energy can also be achieved by large values of Σ_f,i and Σ_g,j, which wash out the importance of the mean vector difference. The term -1/2 log det(Σ_f,i + Σ_g,j) in ξ_i,j serves as a regularizer that prevents the covariances from being pushed too high at the expense of learning a good mean embedding.

At the beginning of training, the ξ_i,j are roughly on the same scale among all pairs (i,j). During this time, all components learn the signals from the word occurrences equally. As training progresses and the semantic representation of each mixture becomes clearer, there can be one term among the ξ_i,j that is predominantly higher than the others, giving rise to the semantic pair that is most related.

§.§.§ Probability Product Kernel

In general, the probability product kernel K_ρ(f,g) = ∫ f(x)^ρ g(x)^ρ dx for ρ > 0 between two Gaussians is:

ξ_i,j^ρ ≡ log K_ρ(f_i,g_j) = (1-2ρ) D/2 log(2π) - D/2 log(ρ) + log[Σ_f,i^ρ-1 Σ_g,j^ρ + Σ_f,i^ρ Σ_g,j^ρ-1] - ρ/2 (μ_f,i - μ_g,j)^⊤ (Σ_f,i + Σ_g,j)^-1 (μ_f,i - μ_g,j).

For a mixture of Gaussians, we have

E_θ^ρ(f,g) = ∑_i=1^K ∑_j=1^K (p_i q_j)^ρ e^ξ^ρ_i,j.

Note that for the case ρ = 1, we recover the expected likelihood kernel in Section <ref>.

§.§.§ Other Energy Functions

The negative KL divergence is another sensible choice of energy function, providing an asymmetric measure between word distributions. However, unlike the expected likelihood kernel, the KL divergence does not have a closed form if the two distributions are Gaussian mixtures.

§ EXPERIMENTS

We have introduced a model for multi-prototype embeddings, which expressively captures word meanings with whole probability distributions. We show that our combination of energy and objective functions, proposed in Section <ref>, enables one to learn interpretable multimodal distributions through unsupervised training, for describing words with multiple distinct meanings. By representing multiple distinct meanings, our model also reduces the unnecessarily large variance of a Gaussian embedding model, and has improved results on word entailment tasks.

To learn the parameters of the proposed mixture model, we train on a concatenation of two datasets: UKWAC (2.5 billion tokens) and Wackypedia (1 billion tokens) <cit.>. We discard words that occur fewer than 100 times in the corpus, which results in a vocabulary size of 314,129 words. Our word sampling scheme, described at the end of Section <ref>, is similar to that of word2vec, with one negative context word for each positive context word. After training, we obtain learned parameters {μ⃗_w,i, Σ_w,i, p_i}_i=1^K for each word w. We treat the mean vector μ⃗_w,i as the embedding of the i^th mixture component, with the covariance matrix Σ_w,i representing its subtlety and uncertainty.
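As a concrete illustration of the energy computation, the fragment below sketches the log expected likelihood kernel of two spherical-covariance Gaussian mixtures, using the log-sum-exp stabilization described in the supplementary material. It is an illustrative NumPy sketch with names of our choosing, not the TensorFlow implementation in the released code.

```python
import numpy as np

def log_energy(mu_f, d_f, p, mu_g, d_g, q):
    """log E(f,g) = log sum_{i,j} p_i q_j exp(xi_ij) for spherical Gaussian mixtures.

    mu_f, mu_g : (K, D) component means;  d_f, d_g : (K,) spherical variances
    p, q       : (K,) mixture weights
    """
    Kf, D = mu_f.shape
    Kg = mu_g.shape[0]
    xi = np.empty((Kf, Kg))
    for i in range(Kf):
        for j in range(Kg):
            var = d_f[i] + d_g[j]            # Sigma_f,i + Sigma_g,j = var * I
            diff = mu_f[i] - mu_g[j]
            # xi_ij = -0.5*log det(var*I) - (D/2) log(2 pi) - 0.5 ||diff||^2 / var
            xi[i, j] = (-0.5 * D * np.log(var)
                        - 0.5 * D * np.log(2.0 * np.pi)
                        - 0.5 * (diff @ diff) / var)
    m = xi.max()                             # log-sum-exp stabilization
    return m + np.log(np.sum(p[:, None] * q[None, :] * np.exp(xi - m)))
```

For spherical covariances, det(var·I) = var^D, so the determinant term reduces to -0.5·D·log(var), which is what makes the computation O(D) per component pair.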
We perform a qualitative evaluation to show that our embeddings learn meaningful multi-prototype representations, and compare to existing models using a quantitative evaluation on word similarity datasets and word entailment. We name our model Word to Gaussian Mixture (w2gm), in contrast to Word to Gaussian (w2g) <cit.>. Unless stated otherwise, w2g refers to our implementation of the w2gm model with one mixture component.

§.§ Hyperparameters

Unless stated otherwise, we experiment with K=2 components for the w2gm model, but we include results and discussion of K=3 at the end of Section 4.3. We primarily consider the spherical case for computational efficiency. We note that for diagonal or spherical covariances, the energy can be computed very efficiently, since the matrix inversion would simply require 𝒪(d) computation instead of 𝒪(d^3) for a full matrix. Empirically, we have found diagonal covariance matrices to become roughly spherical after training. Indeed, for these relatively high dimensional embeddings, there are sufficient degrees of freedom for the mean vectors to be learned such that the covariance matrices need not be asymmetric. Therefore, we perform all evaluations with spherical covariance models. Models used for evaluation have dimension D=50 and use context window ℓ = 10 unless stated otherwise. We provide additional hyperparameters and training details in the supplementary material (<ref>).

§.§ Similarity Measures

Since our word embeddings contain multiple vectors and uncertainty parameters per word, we use the following measures that generalize similarity scores. These measures pick out the component pair with maximum similarity and therefore determine the meanings that are most relevant.

§.§.§ Expected Likelihood Kernel

A natural choice for a similarity score is the expected likelihood kernel, an inner product between distributions, which we discussed in Section <ref>. This metric incorporates the uncertainty from the covariance matrices in addition to the similarity between the mean vectors.

§.§.§ Maximum Cosine Similarity

This metric measures the maximum similarity of mean vectors among all pairs of mixture components between distributions f and g. That is, d(f,g) = max_i,j=1,…,K ⟨μ_f,i, μ_g,j⟩ / (||μ_f,i|| · ||μ_g,j||), which corresponds to matching the meanings of f and g that are the most similar. For a Gaussian embedding, maximum similarity reduces to the usual cosine similarity.

§.§.§ Minimum Euclidean Distance

Cosine similarity is popular for evaluating embeddings. However, our training objective directly involves the Euclidean distance in Eq. (<ref>), as opposed to the dot product of vectors as in word2vec. Therefore, we also consider the Euclidean metric: d(f,g) = min_i,j=1,…,K [||μ_f,i - μ_g,j||].

§.§ Qualitative Evaluation

In Table <ref>, we show examples of polysemous words and their nearest neighbors in the embedding space to demonstrate that our trained embeddings capture multiple word senses. For instance, a word such as `rock' that could mean either `stone' or `rock music' should have each of its meanings represented by a distinct Gaussian component. Our results for a mixture of two Gaussians model confirm this hypothesis: we observe that the 0^th component of `rock' is related to (`basalt', `boulders') and the 1^st component to (`indie', `funk', `hip-hop').
Similarly, the word bank has its 0^th component representing the river bank and its 1^st component representing the financial bank.

By contrast, in Table <ref> (bottom), we see that for Gaussian embeddings with one mixture component, the nearest neighbors of polysemous words are predominantly related to a single meaning. For instance, `rock' mostly has neighbors related to rock music, and `bank' mostly has neighbors related to the financial bank. The alternative meanings of these polysemous words are not well represented in the embeddings. As a numerical example, the cosine similarity between `rock' and `stone' for the Gaussian representation of <cit.> is only 0.029, much lower than the cosine similarity 0.586 between the 0^th component of `rock' and `stone' in our multimodal representation.

In cases where a word has only a single popular meaning, the mixture components can be fairly close; for instance, one component of `stone' is close to (`stones', `stonework', `slab') and the other to (`carving', `relic', `excavated'), which reflects subtle variations in meaning. In general, the mixture can give properties such as heavy tails and more interesting unimodal characterizations of uncertainty than could be described by a single Gaussian.

Embedding Visualization: We provide an interactive visualization as part of our code repository: <https://github.com/benathi/word2gm#visualization> that allows real-time queries of words' nearest neighbors (in the tab) for K=1, 2, 3 components. We use a notation similar to that of Table <ref>, where a token w:i represents component i of a word w. For instance, if in the K=2 link we search for bank:0, we obtain nearest neighbors such as river:1, confluence:0, waterway:1, which indicates that the 0^th component of `bank' has the meaning `river bank'. On the other hand, searching for bank:1 yields nearby words such as banking:1, banker:0, ATM:0, indicating that this component is close to the `financial bank'. We also have a visualization of a unimodal model (w2g) for comparison in the K=1 link.

In addition, the embedding link for our Gaussian mixture model with K=3 mixture components can learn three distinct meanings. For instance, each of the three components of `cell' is close to (`keypad', `digits'), (`incarcerated', `inmate') or (`tissue', `antibody'), indicating that the distribution captures the concept of `cellphone', `jail cell', or `biological cell', respectively. Due to the limited number of words with more than 2 meanings, our model with K=3 does not generally offer substantial performance differences from our model with K=2; hence, we do not display further K=3 results, for compactness.

§.§ Word Similarity

We evaluate our embeddings on several standard word similarity datasets, namely, SimLex <cit.>, WS or WordSim-353, WS-S (similarity), WS-R (relatedness) <cit.>, MEN <cit.>, MC <cit.>, RG <cit.>, YP <cit.>, MTurk(-287,-771) <cit.>, and RW <cit.>. Each dataset contains a list of word pairs with a human score of how related or similar the two words are. We calculate the Spearman correlation <cit.> between the labels and the scores generated by the embeddings. The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels. The correlation results are shown in Table <ref>, using the scores generated from the expected likelihood kernel, maximum cosine similarity, and minimum Euclidean distance. We show the results of our Gaussian mixture model and compare its performance with that of word2vec and the original Gaussian embedding by <cit.>.
We note that our unimodal Gaussian embedding model w2g also outperforms the original model, which differs in model hyperparameters and initialization, for most datasets. Our multi-prototype model w2gm also performs better than the skip-gram or Gaussian embedding methods on many datasets, namely, WS, WS-R, MEN, MC, RG, YP, MT-287, RW. The maximum cosine similarity yields the best performance on most datasets; however, the minimum Euclidean distance is a better metric for the datasets MC and RW. These results are consistent for both the single-prototype and the multi-prototype models.

We also compare our results on WordSim-353 with the multi-prototype embedding methods by <cit.> and <cit.>, shown in Table <ref>. We observe that our single-prototype model w2g is competitive compared to the models by <cit.>, even without using a corpus with stop words removed. This could be due to the auto-calibration of importance via the covariance learning, which decreases the importance of very frequent words such as `the', `to', `a', etc. Moreover, our multi-prototype model substantially outperforms the model of <cit.> and the MSSG model of <cit.> on the WordSim-353 dataset.

§.§ Word Similarity for Polysemous Words

We use the SCWS dataset introduced by <cit.>, where word pairs are chosen to have variations in the meanings of polysemous and homonymous words. We compare our method with the multi-prototype models by Huang <cit.>, Tian <cit.>, Chen <cit.>, and the MSSG model by <cit.>. We note that the Chen model uses an external lexical source, WordNet, which gives it an extra advantage.

We use several metrics to calculate the scores for the Spearman correlation. MaxSim refers to the maximum cosine similarity. AveSim is the average of the cosine similarities weighted by the component probabilities.

In Table <ref>, the model w2g performs best among all single-prototype models for either 50 or 200 vector dimensions. Our model w2gm performs competitively compared to other multi-prototype models. In SCWS, the gain in flexibility from moving to a probability density approach appears to dominate over the effects of using a multi-prototype. In most other examples, we see w2gm surpass w2g, where the multi-prototype structure is just as important for good performance as the probabilistic representation. Note that other models also use the AvgSimC metric, which uses context information and can yield better correlation <cit.>. We report the numbers using AvgSim or MaxSim from the existing models, which are more comparable to our performance with MaxSim.

§.§ Reduction in Variance of Polysemous Words

One motivation for our Gaussian mixture embedding is to model word uncertainty more accurately than Gaussian embeddings, which can have overly large variances for polysemous words (in order to assign some mass to all of the distinct meanings). We see that our Gaussian mixture model does indeed reduce the variances of each component for such words. For instance, we observe that the word rock in w2g has a much higher variance per dimension (e^-1.8 ≈ 0.17) compared to that of the Gaussian components of rock in w2gm (which have variances of roughly e^-2.5 ≈ 0.08). We also see, in the next section, that w2gm has desirable quantitative behavior for word entailment.

§.§ Word Entailment

We evaluate our embeddings on the word entailment dataset from <cit.>. The lexical entailment between words is denoted by w_1 ⊨ w_2, which means that all instances of w_1 are w_2.
The entailment dataset contains positive pairs such as aircraft ⊨ vehicle and negative pairs such as aircraft ⊨ insect. We generate entailment scores for word pairs and find the best threshold, measured by Average Precision (AP) or F1 score, which separates negative from positive entailment. We use the maximum cosine similarity and the minimum KL divergence, d(f,g) = min_i,j=1,…,K KL(f_i || g_j), as entailment scores. The minimum KL divergence is similar to the maximum cosine similarity, but also incorporates the embedding uncertainty. In addition, KL divergence is an asymmetric measure, which is more suitable for certain tasks such as word entailment, where the relationship is unidirectional. For instance, w_1 ⊨ w_2 does not imply w_2 ⊨ w_1. Indeed, aircraft ⊨ vehicle does not imply vehicle ⊨ aircraft, since all aircraft are vehicles but not all vehicles are aircraft. The difference between KL(w_1 || w_2) and KL(w_2 || w_1) distinguishes which word distribution encompasses the other, as demonstrated in Figure <ref>.

Table <ref> shows the results of our w2gm model versus the Gaussian embedding model w2g. We observe a trend for both models with window sizes 5 and 10 that the KL metric yields improvements (in both AP and F1) over cosine similarity. In addition, w2gm generally outperforms w2g. The multi-prototype model estimates the meaning uncertainty better, since it is no longer constrained to be unimodal, leading to better characterizations of entailment. On the other hand, the Gaussian embedding model suffers from overestimating the variances of polysemous words, which results in less informative word distributions and reduced entailment scores.

§ DISCUSSION

We introduced a model that represents words with expressive multimodal distributions formed from Gaussian mixtures. To learn the properties of each mixture, we proposed an analytic energy function for combination with a maximum margin objective. The resulting embeddings capture different semantics of polysemous words, uncertainty, and entailment, and also perform favorably on word similarity benchmarks.

Elsewhere, latent probabilistic representations are proving to be exceptionally valuable, able to capture nuances such as face angles with variational autoencoders <cit.> or subtleties in painting strokes with the InfoGAN <cit.>. Moreover, classically deterministic deep learning architectures are actively being generalized to probabilistic deep models, for full predictive distributions instead of point estimates, and significantly more expressive representations <cit.>. Similarly, probabilistic word embeddings can capture a range of subtle meanings, and advance the state of the art. Multimodal word distributions naturally represent our belief that words do not have single precise meanings: indeed, the shape of a word distribution can express much more semantic information than any point representation.

In the future, multimodal word distributions could open the doors to a new suite of applications in language modelling, where whole word distributions are used as inputs to new probabilistic LSTMs, or in decision functions where uncertainty matters. As part of this effort, we can explore different metrics between distributions, such as KL divergences, which would be a natural choice for order embeddings that model entailment properties. It would also be informative to explore inference over the number of components in mixture models for word distributions. Such an approach could potentially discover an unbounded number of distinct meanings for words, but also distribute
the support of each word distribution to express highly nuanced meanings. Alternatively, we could imagine a dependent mixture model where the distributions over words evolve with time and other covariates. One could also build new types of supervised language models, constructed to more fully leverage the rich information provided by word distributions.

§.§ Acknowledgements

We thank NSF IIS-1563887 for support.

§ SUPPLEMENTARY MATERIAL

§.§ Derivation of Expected Likelihood Kernel

We derive the form of the expected likelihood kernel for Gaussian mixtures. Let f, g be Gaussian mixture distributions representing the words w_f, w_g. That is, f(x) = ∑_i=1^K p_i N(x; μ_f,i, Σ_f,i) and g(x) = ∑_i=1^K q_i N(x; μ_g,i, Σ_g,i), with ∑_i=1^K p_i = 1 and ∑_i=1^K q_i = 1. The expected likelihood kernel is given by

E_θ(f,g) = ∫ ( ∑_i=1^K p_i N(x; μ_f,i, Σ_f,i) ) · ( ∑_j=1^K q_j N(x; μ_g,j, Σ_g,j) ) dx
= ∑_i=1^K ∑_j=1^K p_i q_j ∫ N(x; μ_f,i, Σ_f,i) · N(x; μ_g,j, Σ_g,j) dx
= ∑_i=1^K ∑_j=1^K p_i q_j N(0; μ_f,i - μ_g,j, Σ_f,i + Σ_g,j)
= ∑_i=1^K ∑_j=1^K p_i q_j e^ξ_i,j

where we note that ∫ N(x; μ_i, Σ_i) N(x; μ_j, Σ_j) dx = N(0; μ_i - μ_j, Σ_i + Σ_j) <cit.> and ξ_i,j is the log partial energy, given by equation <ref>.

§.§ Implementation

In this section we discuss practical details for training the proposed model.

§.§.§ Reduction to Diagonal Covariance

We use a diagonal Σ, in which case inverting the covariance matrix is trivial and computations are particularly efficient. Let d^f, d^g denote the diagonal vectors of Σ_f, Σ_g. The expression for ξ_i,j reduces to

ξ_i,j = - 1/2 ∑_r=1^D log(d^f_r + d^g_r) - 1/2 ∑ [ (μ_f,i - μ_g,j) ∘ 1/(d^f + d^g) ∘ (μ_f,i - μ_g,j) ]

where ∘ denotes element-wise multiplication. The spherical case, which we use in all our experiments, is similar: we simply replace the vector d with a single value.

§.§.§ Optimization Constraint and Stability

We optimize log d, since each component of the diagonal vector d is constrained to be positive. Similarly, we constrain each probability p_i to be in [0,1] and the probabilities to sum to 1 by optimizing over unconstrained scores s_i ∈ (-∞, ∞) and using a softmax function to convert the scores to probabilities, p_i = e^s_i / ∑_j=1^K e^s_j.

The loss computation can be numerically unstable if elements of the diagonal covariances are very small, due to the terms log(d^f_r + d^g_r) and 1/(d^f + d^g). Therefore, we add a small constant ϵ = 10^-4, so that d^f_r + d^g_r becomes d^f_r + d^g_r + ϵ. In addition, we observe that ξ_i,j can be very small, which would result in e^ξ_i,j ≈ 0 up to machine precision. In order to stabilize the computation in eq. <ref>, we compute its equivalent form

log E(f,g) = ξ_i',j' + log ∑_j=1^K ∑_i=1^K p_i q_j e^(ξ_i,j - ξ_i',j')

where ξ_i',j' = max_i,j ξ_i,j.

§.§.§ Model Hyperparameters and Training Details

In the loss function L_θ, we use a margin m = 1 and a batch size of 128. We initialize the word embeddings with a uniform distribution over [-√(3/D), √(3/D)] so that the expectation of the variance is 1 and the mean is zero <cit.>. We initialize each dimension of the diagonal matrix (or the single value for the spherical case) with a constant value v = 0.05. We also initialize the mixture scores s_i to 0 so that the initial probabilities are equal among all K components. We use the threshold t = 10^-5 for negative sampling, which is the recommended value for word2vec skip-gram on large datasets.

We also use separate output embeddings in addition to input embeddings, similar to the word2vec implementation <cit.>.
That is, each word has two sets of distributions, q_I and q_O, each of which is a Gaussian mixture. For a given pair of word and context (w, c), we use the input distribution q_I for w (the input word) and the output distribution q_O for c (the output word). We optimize the parameters of both q_I and q_O and use the trained input distributions q_I as our final word representations.

We use mini-batch asynchronous gradient descent with Adagrad <cit.>, which performs adaptive learning-rate updates for each parameter. We also experimented with Adam <cit.>, which corrects the bias in the adaptive gradient updates of Adagrad and has proven very popular for most recent neural network models. However, we found that it is much slower than Adagrad (≈ 10 times). This is because the gradient computation of the model is relatively fast, so a complex gradient update algorithm such as Adam becomes the bottleneck in the optimization. Therefore, we choose Adagrad, which allows us to better scale to large datasets. We use a linearly decreasing learning rate from 0.05 to 0.00001.
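As a concrete illustration of the diagonal-covariance reduction and the log-sum-exp stabilization above, the following NumPy sketch computes ξ_i,j, the stabilized log E(f,g), and the minimum-KL entailment score used in the evaluation. This is a minimal sketch rather than our training code: the array shapes, the helper names, and the omission of the constant -D/2 log(2π) term in ξ_i,j (which cancels in the max-margin objective) are our own illustrative choices.

import numpy as np

def log_partial_energies(mu_f, d_f, mu_g, d_g, eps=1e-4):
    # xi_{i,j} for all component pairs of two diagonal-covariance mixtures.
    # mu_*: (K, D) component means; d_*: (K, D) diagonal covariance vectors.
    # The additive constant -D/2 log(2*pi) is dropped; it cancels in the
    # max-margin objective.
    d_sum = d_f[:, None, :] + d_g[None, :, :] + eps   # (K, K, D), regularized
    diff = mu_f[:, None, :] - mu_g[None, :, :]
    return -0.5 * (np.log(d_sum).sum(-1) + (diff**2 / d_sum).sum(-1))

def log_energy(p, mu_f, d_f, q, mu_g, d_g):
    # Stabilized log expected likelihood kernel, log E(f, g).
    xi = log_partial_energies(mu_f, d_f, mu_g, d_g)
    xi_max = xi.max()  # xi_{i',j'} in the text
    return xi_max + np.log((p[:, None] * q[None, :] * np.exp(xi - xi_max)).sum())

def min_kl(mu_f, d_f, mu_g, d_g):
    # Minimum KL(f_i || g_j) over component pairs, used as an entailment
    # score; closed form for diagonal Gaussians.
    D = mu_f.shape[1]
    d_ratio = d_f[:, None, :] / d_g[None, :, :]
    diff = mu_g[None, :, :] - mu_f[:, None, :]
    kl = 0.5 * (d_ratio.sum(-1) + (diff**2 / d_g[None, :, :]).sum(-1)
                - D - np.log(d_ratio).sum(-1))
    return kl.min()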
{ "authors": [ "Ben Athiwaratkun", "Andrew Gordon Wilson" ], "categories": [ "stat.ML", "cs.AI", "cs.CL", "cs.LG" ], "primary_category": "stat.ML", "published": "20170427035954", "title": "Multimodal Word Distributions" }
Yesuf et al.
1 University of California Observatories and the Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA
2 College of Physical Science and Technology, Shenyang Normal University, Shenyang 110034, China
3 Center for Astrophysics and Space Sciences, Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA

We study winds in 12 X-ray AGN host galaxies at z ∼ 1. We find, using the low-ionization Fe2 λ2586 absorption in the stacked spectra, that the probability distribution function (PDF) of the centroid velocity shift in AGN has a median, 16th, and 84th percentile of (-87, -251, +86) km s^-1, respectively. The PDF of the velocity dispersion in AGN has a median, 84th, and 16th percentile of (139, 253, 52) km s^-1, respectively. The centroid velocity and the velocity dispersions are obtained from a two-component (ISM+wind) absorption line model. The equivalent width PDF of the outflow in AGN has a median, 84th, and 16th percentile of (0.4, 0.8, 0.1) Å. There is a strong ISM component in the Fe2 λ2586 absorption, with (1.2, 1.5, 0.8) Å, implying the presence of a substantial amount of cold gas in the host galaxies. For comparison, star-forming and X-ray undetected galaxies at a similar redshift, matched roughly in stellar mass and galaxy inclination, have a centroid velocity PDF with percentiles of (-74, -258, +90) km s^-1, and a velocity dispersion PDF with percentiles of (150, 259, 57) km s^-1. Thus, winds in the AGN are similar to star-formation-driven winds, and are too weak to escape and expel substantial cool gas from galaxies. Our sample doubles the previous sample of AGN studied at z ∼ 0.5 and extends the analysis to z ∼ 1. A joint reanalysis of the z ∼ 0.5 AGN sample and our sample yields results consistent with the measurements above.

§ INTRODUCTION

Galactic scale winds are one of the most fundamental, yet least understood, facets of galaxy evolution. They are recognized as central in shaping the baryonic growth, dark matter density profile, star formation, and metallicity of galaxies, as well as the enrichment of the intergalactic medium <cit.>. Galaxy formation models that do not include feedback processes form stars too efficiently and fail to reproduce even basic observed galaxy properties. High velocity winds are predicted manifestations of the AGN feedback process invoked to reproduce the observed properties of massive galaxies <cit.>. In the AGN feedback process, a tremendous energy output from accretion onto a black hole, if somehow harnessed, removes or heats gas in the host galaxy and shuts down subsequent star formation. The consequence of the black hole's action in turn limits gas accretion onto the black hole and stunts its growth. AGN feedback is an essential ingredient in current theoretical models of massive galaxy evolution <cit.>. Many semi-analytical models and theoretical simulations require AGN feedback to correctly predict the observed color bi-modality of galaxies and the lack of extremely luminous galaxies <cit.>.

It is predicted that outflows driven by stellar feedback alone are unlikely to reach typical velocities higher than 500 km s^-1 <cit.>. In their simulations of stellar feedback (without AGN feedback) in major mergers, <cit.> found that in all cases the winds have a broad velocity distribution extending up to ∼ 1000 km s^-1, but most of the wind mass is near the circular velocity (∼ 100-200 km s^-1), with relatively little (≪ 1% of the wind mass) at v ≥ 500 km s^-1.
<cit.> also found winds with similar properties in their analysis of galaxy-scale outflows from the Feedback in Realistic Environments (FIRE) cosmological simulations. In contrast, considerably higher bulk outflow velocities, v ∼ 1000-3000 km s^-1, are predicted from AGN feedback <cit.>. In observations, in particular those that indirectly infer the bulk velocity, the high velocity tail of a star-formation-driven wind could be confused for a wind powered by AGN.

However, other models predict that AGN feedback affects a galaxy very little, despite the large outflow velocities <cit.>, or that it could even enhance star formation <cit.>. <cit.> claim that the mass resolution of a simulation significantly affects the inferred AGN feedback. At resolutions typical of cosmological simulations, they found that simulated AGN are artificially more efficient in gas removal. Yet at a higher resolution, the authors found that simulated AGN expel only diffuse gas, while denser gas falls in towards the black hole and forms stars. Thus, it is not clear whether AGN have negative or positive feedback, or both, happening simultaneously <cit.> or on different timescales. A consistent and unified theoretical picture of the role of AGN outflows in galaxy evolution is still lacking, and observations need to inform and aid the theoretical developments.

Recent observational studies at z ∼ 0.5-2.5, using the background light of star-forming galaxies in self-absorption, have observed ubiquitous velocity offsets from the systemic zero velocities of the galaxies, indicative of galactic winds <cit.>. Using the background light of galaxies gives a clearer indication of inflow or outflow, unlike using bright background QSOs to probe gas associated with foreground galaxies <cit.>. Since galaxies are much fainter, the analysis is performed on stacks of hundreds of short-exposure galaxy spectra or on very deep spectra of a modest sample of individual galaxies (3-8 hr integration on Keck, for instance). The outflows studied in both ways show asymmetric absorption profiles with a typical velocity offset of ∼ 200 km s^-1 and a high velocity tail which may reach up to ∼ 1000 km s^-1 <cit.>. Note that this high velocity tail in star-forming non-AGN galaxies is observed more prominently in Mg2 λ2796 and is consistent with the theoretical expectation <cit.>. In the latest works that used deep spectroscopic data of individual galaxies, the wind speed is best correlated with SFR surface density but is not significantly correlated with either galaxy stellar mass or inclination <cit.>. The wind detection rate, on the other hand, is highly dependent on inclination <cit.>. While the recent works have made important advances in characterizing such outflows and in establishing their relationships to host galaxy properties, many basic properties of these winds and their driving physics remain uncertain. One such uncertainty is the wind velocity of AGN host galaxies.

The evidence for AGN hosts having ubiquitous, high-velocity, galaxy-wide outflows with the potential to impact star formation is sparse (more detailed discussion in <ref>). Ionized outflows have been studied in emission using large samples both at low redshift <cit.> and high redshift <cit.>. Even though convincing evidence for ubiquitous ionized outflows exists, details of the interpretation of the observed wind properties are debatable.
Most of the emission-line studies have found high-velocity, extended outflows on several-kpc scales, resulting in very large inferred mass outflow rates and kinetic power, in support of AGN feedback models <cit.>. However, recent studies have questioned these results and have argued that the apparently very extended emission is a consequence of seeing smearing <cit.>. Accounting for the seeing effect, these latter works found much smaller and weaker winds, which may not significantly impact the star formation in their host galaxies.

On the other hand, absorption line wind studies are hard to undertake in distant galaxies but are relatively easy to interpret. Existing absorption line studies of winds in AGN host galaxies have small sample sizes, ∼ 10-30 at z ∼ 0.5-2.5 <cit.>, and the aforementioned deep spectroscopic wind absorption studies did not primarily target AGN. This may be because AGN hosts are rare and are generally fainter than the targeted star-forming galaxies. <cit.> studied a sample of 10 low-luminosity, narrow-line AGN (L_X ∼ 10^41-42 erg s^-1) at 0.2 < z < 0.6. Five of the ten X-ray AGN host galaxies show a wind in Fe2 λ2586 absorption, with typical mean outflow velocity signatures of only ∼ 200 to 300 km s^-1. The velocity widths are generally unresolved but are ∼ 100-300 km s^-1. On the other hand, <cit.> qualitatively studied a stacked spectrum of 33 UV-selected narrow-line AGN at z ∼ 2.5 and reported a detection of a highly blue-shifted (v ∼ -850 km s^-1) and weak Si4 λ1393,1402 absorption line, which is different from the Si4 absorption in the composite spectrum of non-AGN Lyman break galaxies at a similar redshift.

It should be noted that <cit.> have found that atomic transitions that are only coupled to the ground state (e.g., Si4, Mg2, Na1) have line emission preferentially at the systemic velocity, and their observed absorption profiles are significantly reduced in depth as well as significantly offset in velocity from the intrinsic profile. On the other hand, they also found that resonance transitions that are strongly coupled to fine-structure transitions (e.g., Fe2 and Si2) dominantly produce fluorescent emission at longer wavelengths, which does not affect the absorption profiles. These resonance absorption lines offer the best characterization of the opacity of the wind as well as the opacity of the gas near the systemic velocity. Therefore, the discrepancy between the two previous works on AGN winds may be due to this effect. The AGN sample in <cit.> shows stronger Si4 λ1393,1402 emission near the systemic velocity and has much weaker absorption than do their non-AGN star-forming counterparts. In follow-up work, the same authors showed that the stellar population properties of their AGN sample are consistent with those of a mass-matched control sample of star-forming galaxies. They inferred that the presence of an AGN is not connected with the cessation of star formation activity in star-forming galaxies at z ∼ 2-3 <cit.>. In other words, the observed high winds in these AGN have not yet affected star formation, even if the measured mean velocity is reliable.

The work in this paper bridges the gap in redshift between the two major previous studies of AGN winds in absorption and aims to independently confirm the previously reported wind velocities in AGN. We examine winds in a composite spectrum of 12 X-ray selected AGN at z = 0.9-1.5, and in that of a comparison sample of star-forming galaxies, using Fe2 λ2586, a preferred wind diagnostic.
Our AGN sample has a comparable X-ray luminosity to that of <cit.>. Our spectral resolution is two times higher than theirs and we have three times finer wavelength sampling. Our sample also has more extensive multi-wavelength deep HST photometry and other ancillary data to better characterize the host galaxy properties. This has enabled a first attempt at a control sample of star-forming galaxies matched both in stellar mass and galaxy inclination. Furthermore, we use a wind model and methods similar to those adopted in the most recent wind studies <cit.>. These methods were not used in the two previous studies of AGN. Thus, for a fair comparison, we present a reanalysis of the <cit.> data using the new approach, which also has better quantified model uncertainties.

The rest of the paper is organized as follows: Section <ref> presents the data and sample selection. Section <ref> presents the analysis and results on winds in AGN at z ∼ 1, AGN at z ∼ 0.5, and the comparison sample. Section <ref> extensively discusses previous works to put the result of this work in a larger context. A brief summary and conclusion are given in Section <ref>. A (Ω_m, Ω_Λ, h) = (0.3, 0.7, 0.7) cosmology is assumed and AB magnitudes are adopted. Wavelengths measured in air are given throughout the paper. We use the words “wind” and “outflow” interchangeably to mean an outward movement of gas, without making the subtle distinctions drawn in some previous works.

§ DATA

§.§ Observation & data reduction

The spectroscopic data are taken from our ongoing deep (8-24 hr) Keck/DEIMOS <cit.> spectroscopic survey in the CANDELS fields <cit.>, called HALO7D. This multi-semester program will survey faint halo stars with HST-measured proper motions, to measure their line-of-sight velocities and chemical abundances, giving 6D phase-space information and chemical abundances for hundreds of Milky Way halo stars. The deep exposures necessary to reach the faintest stars in the Galaxy halo are an opportunity for a novel synergy of extragalactic and Galactic science. In addition to the primary halo star targets, which only occupy about a quarter of the slitlets on a given DEIMOS mask, we are conducting a survey of galactic winds in star-forming galaxies at z ∼ 1, and of stellar populations of quiescent galaxies at redshifts 0.4 < z < 0.8. A total of ∼ 1500 deep spectra of galaxies is expected at survey completion in a year.

The HALO7D survey uses the 600 line/mm grating on DEIMOS centered around 7200 Å with the GG455 order-blocking filter. This setup gives a nominal wavelength coverage of 4600-9500 Å at a resolution (FWHM) of ∼ 3.5 Å for a 1^'' slit width and 0.65 Å/pixel dispersion. The slit position angles are set to within ± 30^∘ of the parallactic angle to minimize light loss in the blue due to atmospheric dispersion. The exposure times for the AGN studied in this work range between 5-12 hr, and the observations were taken over the course of two years under variable seeing (0.8-1.2^'') and fair to good transparency conditions. Despite the long integration times, poor seeing has significantly lowered the signal-to-noise for almost half of the AGN sample in the current work. Additional make-up observations of these AGN are scheduled.

The observations were reduced using the automated DEEP2/DEIMOS spec2d pipeline developed by the DEEP2 team <cit.>. Calibrations were done using a quartz lamp for flat fielding and both red NeKrArXe lamps and blue CdHgZn lamps for wavelength calibration.
The spectroscopic redshifts were measured from the reduced spectra using the spec1d pipeline. All spectra were visually inspected using the interactive zspec tool to assess the quality of the redshift estimated by spec1d <cit.>. Almost all galaxies studied in this work have previous spectroscopic measurements, and the new spectroscopic redshifts imply minor changes, if any.

Based on the available redshifts, stellar masses and other stellar population properties (age, extinction A_V, etc.) were computed with FAST <cit.> using a combination of the newly obtained CANDELS HST/WFC3 multi-wavelength photometric data and existing ground-based and space-based multi-wavelength data[The following filters are used in the SED fitting. In EGS: CFHT (u, g, r, i, z), ACS (F606W, F814W), WFC3 (F125W, F140W, F160W), WIRCAM (J, H, K), NEWFIRM (J1, J2, J3, H1, H2, K), IRAC (CH1, CH2, CH3, CH4). In GOODS-N: KPNO_U, LBC_U, ACS (F435W, F606W, F775W, F814W, F850LP), WFC3 (F105W, F125W, F140W, F160W, F275W), MOIRCS_K, CFHT_K, IRAC (CH1, CH2, CH3, CH4). In GOODS-S: CTIO_U, VIMOS_U, ACS (F435W, F606W, F775W, F814W, F850LP), WFC3 (F098M, F105W, F125W, F160W), ISAAC_KS, HAWKI_KS, IRAC (CH1, CH2, CH3, CH4).] as inputs <cit.>. The modeling is based on a <cit.> stellar population synthesis model and assumes a <cit.> IMF, exponentially declining star formation histories, solar metallicity, and a <cit.> dust extinction law. The typical uncertainty in the stellar masses is ∼ 0.1 dex. The star-formation rates (SFR) are the sum of SFR_UV, derived from the rest-frame near-UV luminosity at 2800 Å, and SFR_IR, derived from the total infrared luminosity. If a galaxy is not detected in the infrared, its dust-corrected UV-based star formation rate is used <cit.>. The SFR estimates are uncertain by a factor of ≲ 2. The axis-ratio measurements were done on HST/WFC3 F160W (H) band imaging using GALFIT <cit.>.

For comparison, we also used data for 6 previously studied low-luminosity AGN at 0.3 < z < 0.6 with Fe2 coverage <cit.>. These AGN were observed using Keck LRIS <cit.> for roughly one half hour to one hour each. The average apparent brightness for this sample is B ∼ 20.9; our AGN sample has an average apparent brightness about 9 times fainter. <cit.> have stellar mass and star-formation rate measurements for four of the six AGN. We adopted their measurements. In comparison plots that use the stellar mass and star-formation rate measurements, we only show their four AGN with such measurements, but we reanalyzed the spectra of all six AGN.

§.§ Sample selection

For the wind study, we primarily targeted sources that are brighter in the V band (V < 23.5) and at z > 0.7, such that we would detect the near-UV continuum in the individual spectra in eight hours at a signal-to-noise ratio per Angstrom (SNR/Å) of 5 in good observing conditions. We gave higher priority to sources that are brighter and at z > 0.9, which likely have both Fe2 and Mg2 coverage. Galaxies with V > 23.5 were targeted at lower priority as fillers. AGN are a small fraction (< 10%) of all galaxies in our survey. In some cases, we targeted bright X-ray sources <cit.> and gave them the highest weights in the mask design process. In other cases, the AGN were selected by chance (i.e., were selected for other reasons). So far, there are about one hundred X-ray sources in the observed sample. Only about a third of them are at z > 0.9 and, therefore, have coverage of Fe2 λ2586.
Out of the ∼ 100 targeted/serendipitous X-ray sources, we selected all 12 AGN candidates at z > 0.9 that have X-ray luminosity L_X ≳ 5 × 10^41 erg s^-1, SNR/Å > 2.7 around Fe2 λ2586, and Fe2 λ2586 uncontaminated by skylines. We also require that they have HST WFC3 imaging for axis-ratio measurements and have highly reliable redshift measurements (showing clear O2 emission and/or Ca2 H & K absorption lines). Figure <ref>a plots star formation rate against X-ray luminosity. Nine of the selected AGN candidates are 2σ outliers from the relationship between X-ray luminosity and star formation rate for normal star-forming galaxies <cit.>. Of these, 3 show broad Mg2 emission. When we had previous spectroscopic data, we targeted objects with broad Mg2 emission at lower priority. Quasars have hard X-ray luminosities ≳ 10^44 erg s^-1 <cit.>. Except for the 3 AGN with broad Mg2 emission, the AGN studied in this work have significantly lower X-ray luminosities than quasars. We thus refer to them as low-luminosity AGN to differentiate them from quasars.

Furthermore, 3 of the 12 X-ray sources lie within 2σ of the mean relation between X-ray luminosity and star formation rate. One of them has strong Ne5 λ3426 emission, a strong signature of AGN. Therefore, 10 of the AGN sample have robust AGN identifications. Excluding the two X-ray sources which may not be AGN does not change the main results of this work. We have excluded from our sample some AGN candidates which satisfy the SNR cut but whose spectra around Fe2 are contaminated by a skyline or by possible absorption from a foreground galaxy.

The three broad-line AGN in our sample show no, or very weak and narrow, Fe2 and/or Mg2 absorption but have higher X-ray luminosity, L_X > 10^43 erg s^-1. <cit.> did not target objects with broad Mg2 emission in their spectra, as strong broad emission may affect one's ability to detect or interpret blue-shifted absorption features because of emission infill or ionization to a higher state. The current estimates of stellar masses and star-formation rates of broad-line AGN do not properly account for the presence of the luminous AGN, and thus they may be biased compared to the measurements for the star-forming comparison sample. However, for low-luminosity narrow-line AGN, the stellar mass and SFR measurements are not significantly affected by the presence of an AGN <cit.>. So, we present analyses both with and without the inclusion of the three broad-line AGN.

To test for the effects of the presence of AGN on winds in their host galaxies, we define a comparison galaxy sample which lacks AGN but has a similar distribution of stellar masses and axis ratios. For each AGN host, we select the X-ray undetected object which has the most similar stellar mass and axis ratio (an illustrative sketch of this matching is given below). The masses and axis ratios agree within a factor of two for all AGN and their comparison galaxies. All galaxies except one have matches with similar SFRs, within a factor of two. Figures <ref>b-d show the stellar mass, star formation rate, and axis ratio of the AGN and the comparison sample of star-forming, X-ray undetected galaxies at z ∼ 1. Due to the limited size of the parent sample from which the comparison galaxies were drawn (< 100 galaxies with z > 0.9 and V < 23.5), our matching is crude. It may be sufficient, since winds are expected to depend only weakly on galaxy properties <cit.>.
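The matching itself is simple nearest-neighbour selection. The sketch below illustrates the idea under our own simplifying assumptions (a Euclidean metric in log stellar mass and axis ratio, and each control galaxy used at most once); it is not the exact selection code used here.

import numpy as np

def match_control(logm_agn, ba_agn, logm_pool, ba_pool):
    # For each AGN, pick the closest X-ray-undetected galaxy in the
    # (log M*, b/a) plane; each control is used at most once.
    logm_pool = np.asarray(logm_pool)
    ba_pool = np.asarray(ba_pool)
    available = np.ones(len(logm_pool), dtype=bool)
    matches = []
    for lm, ba in zip(logm_agn, ba_agn):
        dist = np.hypot(logm_pool - lm, ba_pool - ba)
        dist[~available] = np.inf      # skip controls already taken
        j = int(np.argmin(dist))
        available[j] = False
        matches.append(j)
    return np.array(matches)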
We think the comparison galaxies do not host AGN because they are located in regions of the sky with X-ray coverage by Chandra observations, yet they are not detected in X-rays. However, it has been argued that a substantial fraction of AGN are missed by X-ray selection <cit.>. Future near-IR spectroscopic data for rest-frame optical line-ratio AGN diagnostics are needed to completely rule out the presence of AGN in the comparison sample.

The galaxy properties of the AGN and the star-forming X-ray undetected galaxies are summarized in Tables <ref> & <ref>. The rest-frame pseudo-RGB images of the two samples are shown in Figure <ref> and Figure <ref>. The pseudo-color images are created by simply combining three high-resolution HST ACS/WFC3 band cut-out images <cit.> that have central wavelengths closest to R (700 nm), G (546.1 nm), and B (435.8 nm) after correcting for redshift (i.e., λ/(1+z), z ∼ 1). The images are normalized and combined with a ratio of 1(R):6(G):3(B).

Table: z ∼ 1 AGN sample
Field  ID     RA          Dec.        z        log L_X     log M   SFR          b/a   B     V     SNR      t_obs
              (degree)    (degree)             (erg s^-1)  (M_⊙)   (M_⊙ yr^-1)        mag   mag   (per Å)  (hr)
EGS    10518  214.877075  52.819477   1.19500  42.4        10.8    66           0.4   24.6  24.1  3        6
EGS    30572  214.671600  52.773415   1.48599  44.2        10.5    163          0.6a  23.9  23.1  8        9
GDN    17041  189.282730  62.268250   0.93573  42.2        11.1    54           0.8   23.8  22.6  3        9
GDN    17389  189.282242  62.271099   0.93971  42.2        11.1    35           0.7   23.1  23.0  3        9
GDN    1878b  189.194458  62.142590   0.97115  41.9        11.0    115          0.9   22.9  22.1  12       9
GDN    12523  189.193115  62.234634   0.96023  43.9        10.9    35           0.9   23.8  22.6  10       11
GDN    6274   189.077515  62.187534   1.01933  44.1        10.7    146          0.8   22.1  21.5  16       0
GDS    21627  53.045460   -27.728624  0.99829  42.8        11.0    45           0.6a  23.1  22.6  3        9
GDS    7837   53.107758   -27.838812  1.09525  41.7        10.4    48           0.7   23.2  22.7  9        12
GDS    15870  53.065838   -27.775131  1.02060  41.9        10.2    31           0.4   23.7  23.4  10       9
GDS    23803  53.150719   -27.716194  0.96708  42.2        10.1    68           0.6   22.3  22.1  7        5
GDS    20168  53.150126   -27.739926  1.03670  41.8        10.0    48           1.0   23.2  22.9  3        5
a: has a bad or suspicious GALFIT flag. b: has strong Ne5 emission.

Table: z ∼ 1 inactive star-forming comparison sample
Field  ID     RA          Dec.        z        log M   SFR          b/a
              (degree)    (degree)             (M_⊙)   (M_⊙ yr^-1)
EGS    9240   215.054153  52.937103   1.03030  10.6    88           1.0
EGS    13622  214.953293  52.890411   1.39770  10.3    100          0.7
EGS    31460  214.708572  52.791351   0.95830  10.6    115          0.8
EGS    25589  214.655823  52.738495   1.39700  10.9    128          0.8
EGS    15131  215.095734  52.999435   1.11690  10.0    45           0.8
EGS    27292  214.931488  52.945190   0.89330  10.3    40           0.9
EGS    12671  215.110123  52.994293   1.24070  11.0    214          1.0
GDN    17481  189.379868  62.272289   0.97500  11.2    102          0.7
GDN    11799  189.219743  62.231889   1.35896  10.2    49           0.5
GDS    25246  53.046933   -27.690841  1.05765  10.5    24           0.8
GDS    26255  53.144206   -27.700337  1.04180  9.8     24           0.6
GDS    19443  53.086326   -27.748261  0.96900  10.8    68           0.7

§ ANALYSIS & RESULTS

§.§ Coadding spectra

First, multiple exposures of the same object were averaged with inverse-variance weighting per pixel, which minimizes the variance of the weighted average spectrum. Then, to coadd the spectra of the AGN or their comparison sample, each spectrum was shifted to its rest-frame wavelength. The rest-frame spectra were then interpolated onto a linear wavelength grid with Δλ = 0.3 Å bins, which is close to the pixel size of the DEIMOS spectrograph at z ∼ 1 for our setup. We co-added the observed photon counts at a given wavelength bin for each rest-frame spectrum with inverse-variance weighting.

The individual near-UV spectra of the redshift z ∼ 1 AGN sample are plotted in Figure <ref>. The Fe2 and Mg2 absorption lines are observed in most of the spectra.
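The co-addition procedure just described can be condensed into a few lines; the following is a minimal sketch under our own simplifying assumptions (skyline masking, edge handling, and the per-object exposure averaging are omitted, and the function and argument names are illustrative).

import numpy as np

def coadd_rest_frame(spectra, dlam=0.3, lam_lo=2400.0, lam_hi=3000.0):
    # Inverse-variance weighted stack of rest-frame spectra.
    # spectra: iterable of (lam_obs, counts, var, z) per galaxy.
    grid = np.arange(lam_lo, lam_hi, dlam)   # linear rest-frame grid, 0.3 A bins
    num = np.zeros_like(grid)
    den = np.zeros_like(grid)
    for lam_obs, counts, var, z in spectra:
        lam_rest = lam_obs / (1.0 + z)            # shift to the rest frame
        f = np.interp(grid, lam_rest, counts)     # interpolate onto common grid
        v = np.interp(grid, lam_rest, var)
        w = 1.0 / v                               # inverse-variance weights
        num += w * f
        den += w
    return grid, num / den, 1.0 / den   # stack and its formal variance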
Figure <ref> shows the normalized near-UV composite spectra of all AGN and of subsets of the AGN sample separated into narrow- and broad-line AGN. The normalization was determined by a linear fit to the continuum on both sides of the Fe2 absorption doublet. We used the wavelength ranges 2520-2578 Å, 2640-2770 Å, and 2900-2970 Å to fit the continuum level. Figure <ref>a shows the normalized composite spectrum near the Fe2 absorption line of the 7 narrow-line AGN at z ∼ 1 with robust AGN identifications. The composite spectrum of the comparison sample of X-ray undetected, star-forming galaxies is overplotted in the same figure. It is clear from this figure that both AGN and normal star-forming galaxies have asymmetrically blue-shifted Fe2 absorption lines. The absorption profiles of the AGN and the comparison sample have very similar width and depth. Therefore, even without detailed modeling, one can infer that they have similar wind velocity and strength. The wind velocities in both samples are on the order of 100-200 km s^-1 and extend to ∼ 500 km s^-1. A variant of this figure for all 12 AGN candidates can be found in the Appendix, Figure <ref>. Figure <ref>b plots the O2 emission lines of the AGN and the comparison sample. O2 is vital in determining the redshift and the systemic zero velocity of the galaxies.

Figures <ref>c & d are similar to Figures <ref>a & b. In these latter figures, the comparison is made between the composite spectrum of our AGN at z ∼ 1 and that of the 6 AGN studied by <cit.> at z < 0.6. We note that these authors studied winds in their AGN using individual spectra. Since we coadd the spectra of our AGN for an improved signal-to-noise average spectrum, we also coadd the AGN at z < 0.6. We interpolate the individual AGN spectra onto a linear wavelength grid with Δλ = 1.3 Å, which is about the pixel size of the LRIS spectrograph at z ∼ 0.6 for their setup. We convolve our composite spectrum to the instrumental resolution of LRIS and rebin it to match the resolution and bin size of the composite spectrum of their data. There is very good agreement between the Fe2 absorption profiles of the AGN at z ∼ 1 and those of the AGN at z ∼ 0.5. The strengths of the O2 emission lines of the two AGN samples also agree with each other.

We reanalyze the <cit.> data both separately and jointly with our data. For the joint analysis, we linearly interpolate the composite spectrum of their data onto a wavelength grid with Δλ = 1.2 Å bins and convolve our AGN composite spectrum to the LRIS resolution, but here our spectrum is not rebinned (it has Δλ = 0.3 Å). The two-component model decomposition may be sensitive to binning, and therefore we do not rebin our composite spectrum. To combine the two datasets, we average the normalized counts in the bins where the two datasets coincide (i.e., every fourth bin in our spectrum). The averaging uses inverse-variance weighting per pixel. In the bins where the two datasets do not overlap, we use the values of our composite spectrum. The joint analysis is done using the composite spectrum of our 7 narrow-line AGN with robust AGN identifications, which are more similar to the <cit.> sample, as well as with the composite spectrum of all AGN in our sample.

To give a quantitative gauge of both the dispersion intrinsic to a sample (to account for the effects caused by outlier galaxies within a sample) and the measurement errors, we use a bootstrapping scheme to estimate the standard errors of the average composite spectrum at different wavelengths.
In the bootstrapping scheme, at each iteration we randomly resample, with replacement, the galaxy IDs to a sample size equal to that of the original sample. This resampling was done 1000 times. At each iteration, we averaged the spectra of the selected galaxies using inverse-variance weighting at each wavelength pixel. We used the standard deviation of the average counts in a given wavelength bin over all 1000 composite spectra as the error of the original averaged spectrum in that bin. We add the bootstrap standard deviations of the composite spectra in quadrature when we combine the two AGN samples at z ∼ 1 and z ∼ 0.5. An alternative analysis, which may better minimize the effects of poor measurements provided that the dispersion intrinsic to the sample is small, is to simply use the errors of the inverse-variance weighted average. The results from this alternative analysis are presented in the Appendix section <ref>. Because the errors from this method are significantly smaller than the errors from the bootstrapping scheme, the wind parameters are better constrained in the results presented in the Appendix. In the next section, we quantitatively show that the winds in the two samples are similar using a standard wind model.

§.§ The simple wind model

We adopt the partial-covering wind model of <cit.> to model the observed Fe2 λ2585.876[More information and references for this and other atomic transitions used in the paper can be found in the NIST Atomic Spectra Database at <https://www.nist.gov/pml/atomic-spectra-database> or in <cit.>.] absorption profile, similar to recent works that studied galactic winds in star-forming galaxies <cit.>. Due to the signal-to-noise and spectral resolution limitations of observational data in fully constraining the wind model, the following simplifying assumptions were customarily made in previous works and are also adopted here: 1) the covering factor of the wind is independent of velocity. Some studies suggest that this may not be a good assumption <cit.>; 2) the stellar continuum emission is fully covered by a uniform screen of ISM absorption. If this assumption is not valid, the inferred ISM column density is lower than the actual column density, since the covering fraction anti-correlates with column density. In this work, we do not aim to constrain these two values independently. We show in the Appendix section <ref> that the wind velocities are not significantly affected by this assumption when using a covering fraction of 50% for the ISM; 3) the line profile shape is due entirely to absorption of the stellar continuum. However, scattered emission infill may also affect the absorption profile. This effect is expected to be less significant for Fe2 λ2586 compared to resonant lines without fluorescent channels, such as Mg2 <cit.>. <cit.> estimated that the equivalent width of the Fe2 λ2586 absorption profile is affected by only 10% due to emission infill; 4) two absorption components, an ISM component centered at zero systemic velocity and a wind component, are sufficient to characterize the observed absorption line profiles. This assumption may result in inaccurate column densities and line widths if the profiles are composed of multiple components from multiple clouds. Higher-resolution galaxy spectra are required to test the effects of this assumption.
Studies using high-resolution spectra of background quasars find that the absorption lines are composed of multiple clouds with more complex kinematics <cit.>; 5) the velocity distribution of absorbing atoms within a component is Maxwellian, such that each absorption optical depth is modeled as a Gaussian,

τ(λ) = τ_c exp[-(λ - λ_c)^2 / (λ_c b/c)^2],

where τ_c is the central optical depth at the line center λ_c, c is the speed of light, and b = √2 σ is the Doppler parameter. This assumption is likely an over-simplification, but it is reasonable given that the observed shape of the absorption trough is strongly influenced by the instrumental resolution.

According to this simple model, the normalized continuum can be described as the product of the line intensities of the galaxy component and of the wind component. Each component has the form 1 - C (1 - exp[-τ(λ)]), where C is the covering fraction; it is set to 1 for the galaxy ISM component. We can express λ_c in terms of the centroid velocity, v = c(λ_c - λ_0)/λ_0, where λ_0 is the rest wavelength of the transition. For the galaxy component, λ_c = λ_0 (i.e., no velocity shift). The central optical depth is expressed in terms of the column density N, oscillator strength f_0, and b using the relation τ_c = 1.45 × 10^-15 λ_0 f_0 N / b, for N in units of cm^-2, λ in Å, and b in km s^-1.

To summarize, the 6 free parameters of this two-component model are: the covering fraction of the wind, C_w; the velocity centroid shift of the wind, v_w; the Doppler broadening parameter of the wind, b_w; the Doppler parameter of the ISM in the galaxy, b_g; the column density of the wind, N_w; and the column density of the gas in the galaxy, N_g. The model is convolved with the instrumental resolution and rebinned to match the observed data before comparing the two. The model is fit to the data using a Bayesian method with custom Python code; a condensed sketch of the setup is given below. Only the Fe2 λ2585.876 absorption line is fitted. The entire wavelength ranges λ = (2572, 2578) and λ = (2591, 2632) are masked out prior to fitting because they are contaminated by the Mn2 λ2576.877, 2594.499, 2606.462 absorptions, the Fe2 λ2599.395 absorption, or the Fe2 λ2611.873, 2625.489, 2631.047 emissions. The posterior probability densities (PDFs) of the model parameters were computed using the affine-invariant ensemble Metropolis-Hastings sampling algorithm <cit.>, assuming uniform priors: v_w = (-450, 450), b_w = (20, 450), b_g = (20, 450), log N_w = (14, 17.5), and log N_g = (14, 17.5). A centroid velocity shift greater than 500 km s^-1 is ruled out by the data even without the model, so we have restricted the velocity prior to estimate the velocity shift which is supported by the data. To compute the likelihood of the data given the model parameters, we assumed that each data point is drawn from an independent Gaussian centered on the model profile with a dispersion given by the measurement errors. This is equivalent to assuming a χ^2 distribution for the sum of squares of the flux differences between the model and the data, with degrees of freedom equal to the number of observed data points.

§.§ The wind velocities of X-ray AGN at z ∼ 1 are similar to those of star-forming X-ray undetected galaxies at similar redshifts

Table <ref> summarizes the marginalized PDFs of the six model parameters fitted to the Fe2 λ2586 absorption lines for both the AGN and the comparison samples. We used the median, 16th, and 84th percentiles as summary statistics for the marginalized PDFs. For convenience, we express the percentiles as ± deviations from the median throughout the text.
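The model and posterior described in the previous subsection can be condensed as follows. This is a minimal sketch, not our analysis code: the Fe2 f-value, the Gaussian line-spread-function width, the (0, 1) prior range on C_w, and the use of the emcee package for the affine-invariant sampler are our own assumptions, and the masking of the contaminated windows and the rebinning to the data grid are omitted.

import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 2.998e5      # speed of light [km/s]
LAM0 = 2585.876      # Fe II rest wavelength in air [A]
F0 = 0.0691          # assumed Fe II 2586 oscillator strength (atomic tables)

def tau(lam, lam_c, b, logN):
    # Gaussian optical-depth profile; tau_c from the relation in the text.
    tau_c = 1.45e-15 * LAM0 * F0 * 10**logN / b
    return tau_c * np.exp(-((lam - lam_c) / (lam_c * b / C_KMS))**2)

def model_profile(theta, lam, lsf_sigma_pix):
    # Two-component (ISM + wind) normalized profile, then LSF convolution.
    v_w, b_w, C_w, logN_w, b_g, logN_g = theta
    lam_w = LAM0 * (1.0 + v_w / C_KMS)           # wind centroid wavelength
    ism = np.exp(-tau(lam, LAM0, b_g, logN_g))   # full covering, zero shift
    wind = 1.0 - C_w * (1.0 - np.exp(-tau(lam, lam_w, b_w, logN_w)))
    return gaussian_filter1d(ism * wind, lsf_sigma_pix)

# Uniform prior bounds on (v_w, b_w, C_w, logN_w, b_g, logN_g).
LO = np.array([-450.0, 20.0, 0.0, 14.0, 20.0, 14.0])
HI = np.array([450.0, 450.0, 1.0, 17.5, 450.0, 17.5])

def log_posterior(theta, lam, flux, err, lsf_sigma_pix):
    if np.any(theta < LO) or np.any(theta > HI):
        return -np.inf
    resid = (flux - model_profile(theta, lam, lsf_sigma_pix)) / err
    return -0.5 * np.sum(resid**2)   # independent Gaussian errors

# Sampled with an affine-invariant ensemble sampler, e.g.:
#   import emcee
#   sampler = emcee.EnsembleSampler(64, 6, log_posterior,
#                                   args=(lam, flux, err, 2.0))
#   sampler.run_mcmc(p0, 5000)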
In this notation, X^+Y_-Z denotes that X is the median, X+Y is the 84th percentile, and X-Z is the 16th percentile. For a Gaussian PDF, Y and Z equal its standard deviation, but note that the PDFs of the wind centroid velocity shift and of the Doppler dispersion parameter are non-Gaussian in almost all cases.

Figure <ref> shows, for both the AGN and the comparison sample at z ∼ 1, the observed Fe2 λ2586 absorption profiles and the fitted model profiles. In the top row, we show the fit to the composite spectrum of all 12 AGN candidates (Figure <ref>a), of only the 9 AGN without the broad-line AGN (Figure <ref>b), and of only the 7 narrow-line AGN with robust AGN identifications (Figure <ref>c). In the second row, Figures <ref>d-f show the corresponding velocity centroid and Doppler parameter PDFs for the three AGN subsamples. Figures <ref>g & h show a similar figure for the <cit.> z ∼ 0.5 AGN sample, analyzed separately or jointly with our data. Figure <ref>i shows the observed Fe2 λ2586 absorption profile and the model profiles for the comparison sample of X-ray undetected star-forming galaxies at z ∼ 1. In the last row, Figures <ref>j-l show the velocity centroid and Doppler parameter PDFs of the two reanalyses of the <cit.> data and of the comparison sample. In each plot of the Fe2 profile, the black points with error bars are the observed data points. The blue curve is the wind component, while the orange curve is the galaxy absorption component. Both the blue and orange curves are shown before convolution with the instrumental line-spread function and thus represent the true components. The red curve is the product of the two components after convolution. All three curves are constructed from the medians of the six marginalized model parameters. The marginalized median model fits the data well. The 500 model profiles randomly drawn from the PDFs of the model parameters are shown in pink. They also characterize the flux uncertainties very well. Cutting through the histograms depicting the PDFs of the centroid velocities and Doppler parameters, the dashed vertical lines mark the 16th, 50th (median), and 84th percentiles of the PDFs.

To summarize the results presented in Table <ref> and Figure <ref>, the wind centroid velocity for all 12 AGN in Fe2 λ2586 is v_w = -87^+173_-164 km s^-1 and its Doppler dispersion parameter is b_w = 197^+161_-124 km s^-1. For the 9 narrow-line AGN, the centroid velocity is v_w = -109^+111_-129 km s^-1 and the Doppler parameter is b_w = 160^+142_-98 km s^-1. These two parameters anti-correlate and their joint PDF is asymmetric. v_w may also be degenerate with the covering factor of the wind (see Figure <ref>). The value of v_w quoted above is after integrating over b_w. The maximum centroid velocity shift (v_w - 2σ_v_w, where σ_v_w is the dispersion of v_w estimated from its PDF) in the narrow-line AGN is likely less than ∼ 370 km s^-1. Similarly, the star-forming comparison sample has a wind centroid velocity of -74^+167_-184 km s^-1 and a Doppler parameter of 212^+154_-131 km s^-1. Therefore, the velocity profiles of winds in AGN at z ∼ 1 overlap those of the comparison sample at similar redshifts. Even though the comparison sample has higher SFRs than typical galaxies at a similar redshift, its wind properties are similar to what has been found in typical galaxies at z ∼ 1. For instance, <cit.> found an Fe2 centroid velocity of -119 ± 6 km s^-1 in several tens of star-forming galaxies at z ∼ 1.
The escape velocity from a galaxy is approximately 5-6 × its O2 emission-line velocity dispersion <cit.>, which is 122 ± 16 km s^-1 for the AGN sample and 131 ± 18 km s^-1 for the comparison sample (see Appendix section <ref> and Figure <ref> therein for detailed information on the measurement of the velocity dispersion). Therefore, most of the outflowing gas does not escape from the host galaxies. Incidentally, the galaxy (ISM) velocity dispersion inferred from fitting the Fe2 λ2586 absorption profile is consistent with the O2 emission-line velocity dispersion in both the AGN and the comparison samples.

The total equivalent width (the combined contribution of the ISM and wind components) is 1.6^+0.2_-0.3 Å for all 12 AGN, 1.8^+0.2_-0.3 Å for the 9 narrow-line AGN, and 1.9^+0.4_-0.3 Å for the star-forming sample. About a third of the total equivalent width is due to the wind component in both samples (i.e., 0.4^+0.4_-0.3 Å for all AGN and 0.5^+0.7_-0.3 Å for the star-forming sample). The presence of a strong ISM component (1.2^+0.3_-0.4 Å) in the AGN implies that a substantial amount of cold gas is present in the host galaxies and has not been affected by AGN feedback. The maximum wind velocity (v_w - 2 b_w/√2) is -234^+153_-194 km s^-1 for the 12 AGN, while it is -228^+177_-227 km s^-1 for the comparison sample. The equivalent width and maximum velocity measurements for the other samples of AGN are found in Table <ref>.

We also find weak winds in the AGN at z ∼ 0.5 upon reanalysis of the <cit.> data. The wind centroid velocity for this sample is -93^+248_-209 km s^-1 and its Doppler dispersion parameter is 190^+166_-130 km s^-1. <cit.> measured the Fe2 λ2586 centroid velocities and velocity widths for four of their AGN. Their measurements of both quantities range roughly between 130 and 330 km s^-1. Averaging their four measurements with inverse-variance weighting gives a centroid velocity shift of -180 ± 9 km s^-1 and a Doppler parameter of 274 ± 13 km s^-1. The total equivalent width of Fe2 in this sample is 2.0^+0.5_-0.5 Å and the equivalent width due to the wind component is 0.6^+0.7_-0.4 Å. The average wind equivalent width measured by <cit.> is 1.2 Å.

Table: Wind model parameter fits to Fe2 λ2586 profiles
Model Param.     AGN z∼1 (N=12)   NLAGN z∼1 (N=9)  NLAGN z∼1 (N=7)  SF z∼1 (N=12)    AGN z∼0.5a (N=6)  AGN z∼0.5-1.5b (N=13)  AGN z∼0.5-1.5c (N=18)d
v_w (km s^-1)    -87^+173_-164    -109^+111_-129   -143^+109_-98    -74^+167_-184    -93^+248_-209     -119^+144_-135         -98^+153_-180
b_w (km s^-1)    197^+161_-124    160^+142_-98     134^+145_-84     212^+154_-131    190^+166_-130     131^+175_-84           171^+178_-116
C_w              0.2^+0.4_-0.2    0.3^+0.4_-0.2    0.3^+0.4_-0.2    0.3^+0.4_-0.2    0.4^+0.2_-0.1     0.4^+0.4_-0.3          0.3^+0.4_-0.2
log N_w (cm^-2)  14.7^+1.3_-0.5   14.9^+1.4_-0.6   14.9^+1.5_-0.6   14.9^+1.3_-0.6   15.2^+1.3_-0.6    15.0^+1.4_-0.7         14.9^+1.4_-0.6
b_g (km s^-1)    240^+78_-66      228^+74_-61      238^+98_-83      187^+81_-89      137^+189_-97      162^+135_-115          158^+113_-112
log N_g (cm^-2)  14.5^+0.1_-0.2   14.6^+0.1_-0.2   14.5^+0.1_-0.2   14.7^+0.1_-0.2   14.7^+0.9_-0.3    14.6^+0.2_-0.3         14.6^+0.2_-0.2
a: Reanalysis of <cit.> data. b: A joint reanalysis of <cit.> data with our 7 narrow-line AGN with highly reliable AGN identifications. c: A joint reanalysis of <cit.> data with all AGN, including the 3 broad-line AGN and the 2 with less reliable AGN identifications. d: N denotes the number of galaxies in the stacked spectra.
The parameters of the two-component wind model are: the velocity centroid shift of the wind, v_w; the Doppler broadening parameter of the wind, b_w; the covering fraction of the wind, C_w; the column density of the wind, N_w; the Doppler parameter of the ISM in the galaxy, b_g; and the column density of the gas in the galaxy, N_g. The median and the 84th and 16th percentile deviations of the PDFs of the model parameters are given in the table. In our notation, X^+Y_-Z denotes that X is the median, X+Y is the 84th percentile, and X-Z is the 16th percentile.

Table: Wind equivalent width (EW) and maximum wind velocity derived from Fe2 λ2586 model profiles
Derived Quantity         AGN z∼1 (N=12)   NLAGN z∼1 (N=9)  NLAGN z∼1 (N=7)  SF z∼1 (N=12)    AGN z∼0.5 (N=6)   AGN z∼0.5-1.5 (N=13)  AGN z∼0.5-1.5 (N=18)
Wind EW (Å)              0.4^+0.4_-0.3    0.5^+0.6_-0.3    0.5^+0.5_-0.3    0.5^+0.7_-0.3    0.6^+0.7_-0.4     0.6^+0.5_-0.4         0.5^+0.5_-0.3
ISM EW (Å)               1.2^+0.3_-0.4    1.3^+0.4_-0.5    1.3^+0.4_-0.5    1.4^+0.5_-0.5    1.4^+0.6_-0.5     1.1^+0.3_-0.5         1.2^+0.3_-0.4
Total EW (Å)             1.6^+0.2_-0.3    1.8^+0.2_-0.3    1.8^+0.2_-0.3    1.9^+0.4_-0.3    2.0^+0.5_-0.5     1.7^+0.3_-0.3         1.7^+0.2_-0.3
Max. Velocity (km s^-1)  -234^+153_-194   -236^+114_-107   -242^+108_-124   -228^+177_-227   -204^+214_-266    -202^+112_-166        -203^+140_-243
The median and the 84th and 16th percentile deviations of the PDFs of the derived quantities from the wind model are given in the table. In our notation, X^+Y_-Z denotes that X is the median, X+Y is the 84th percentile, and X-Z is the 16th percentile.

§ DISCUSSION

A key physical manifestation of active galactic nuclei (AGN) feedback is predicted to be powerful galactic winds. However, the relative roles of AGN activity and star formation in driving such winds remain largely unexplored at redshift z ∼ 1, near the peak of cosmic activity for both. We study winds in 12 X-ray AGN host galaxies at z ∼ 1 in the CANDELS fields using deep Keck rest-frame UV spectroscopy. We find that winds in the AGN are similar to star-formation-driven winds, and are too weak to escape and expel substantial cool gas from their host galaxies.

Despite its theoretical appeal, confirming evidence of star-formation quenching by powerful winds in AGN remains elusive. Here, we discuss some of the evidence reported in the literature, with emphasis on the studies with larger samples when multiple similar studies exist. Our aim is to show that most wind studies using cold, warm, or molecular gas are in agreement with, or at least consistent with, our finding that low-luminosity AGN (L_X ∼ 10^42 erg s^-1) have winds similar to those of star-forming galaxies with similar galaxy properties, especially if the comparison is made at AGN luminosities similar to those of our sample.

§.§ Most previous cold gas absorption studies also found low wind velocities

Neutral gas outflows in low-redshift (z ∼ 0.1) AGN have been extensively studied using Na1 <cit.>. The clearest result from these studies is that the wind velocities of narrow-line (type 2) AGN are similar to the wind velocities of starburst and star-forming galaxies. <cit.> studied outflows in 35 infrared-faint (i.e., weakly star-forming) Seyferts in an effort to disentangle the starburst effects on the winds from the AGN effects. The authors compared the outflow properties of these Seyferts with those of infrared-bright composite Seyferts, in which both starbursts and AGN co-exist.
The wind detection rates for the infrared-faint Seyfert 1s (6%) and Seyfert 2s (18%) are lower than previously reported for infrared-luminous Seyfert 1s (50%) and Seyfert 2s (45%). In addition, the outflow velocities of both high- and low-SFR Seyfert 2s are similar to those of starburst galaxies, while the outflow velocity in only one out of eighteen Seyfert 1s is significantly higher. The measured average wind velocity for infrared-faint Seyfert 2 galaxies (v = -137 ± 8 km s^-1, b = 250 ± 214 km s^-1) and the authors' conclusion that AGN do not play a significant role in driving the outflows in most local infrared-faint and infrared-bright Seyfert 2s are consistent with our result. The particular Seyfert 1 with the strong wind has an average wind velocity of -600 km s^-1 and a very small velocity dispersion (b = 21 ± 6 km s^-1). It is likely that this object's high velocity measurement is affected by emission infill at the systemic velocity. Likewise, <cit.> studied a sample of 26 Seyfert ULIRGs using Na1. They found no significant differences between the velocities of Seyfert 2 ultraluminous infrared galaxies (ULIRGs) (v = -456^+330_-191 km s^-1, b = 232^+244_-119 km s^-1) and starbursts of comparable infrared luminosity (v = -408^+224_-191 km s^-1, b = 232^+244_-119 km s^-1). They also found very high velocities (∼ 5000 km s^-1) in two Seyfert 1 AGN and argued that these are likely small-scale (∼ 10 pc) disk winds. They stressed that large-scale, lower velocity outflows certainly exist in Seyfert 1 ULIRGs, since such winds are common in infrared-bright galaxies in general, but the wind signatures are likely rendered unobservable by the intense nuclear radiation in Seyfert 1s, due to infilling of the absorption profile by scattered emission or due to ionization of the absorbing atoms to higher states.

Recently, <cit.> studied a sample of 456 nearby galaxies, of which 103 exhibit compact radio emission indicating radio AGN activity. They found that only 23 objects (5%) out of their entire sample exhibited outflow signatures in Na1. Not a single object showed evidence of radio AGN activity and of a cold-gas outflow simultaneously. Radio-AGN activity was found predominantly in early-type galaxies, while cold-gas outflows were mainly observed in late-type galaxies with central star formation or in composite galaxies with both star formation and AGN activity. The authors emphasized that their work supports a picture in which the onset of AGN activity appears to lag behind the peak of starburst activity <cit.>, and in which the gas reservoir has been significantly depleted by star formation or stellar feedback before the AGN had a chance to couple to it. Similarly, <cit.> found Na1 outflow velocities of ∼ 100 km s^-1 in fading post-starburst galaxies with low-level nuclear activity at 0.1 < z < 0.5. Within a similar redshift range, <cit.> also found low velocity winds (∼ 200 km s^-1) in 13 post-starburst galaxies using the Mg2 and Fe2 absorption lines. This result is in addition to the low-velocity outflows they found in low-luminosity AGN. In contrast, <cit.> observed high-velocity winds (with median v ∼ 1100 km s^-1) in massive transitional post-starburst galaxies and concluded that AGN likely played a major role in the abrupt truncation of star formation in these systems. But, in subsequent works, they argued that these fast outflows are most likely driven by feedback from extremely compact, obscured star formation rather than by AGN <cit.>.
However, it remains possible that the outflows were driven by AGN activity that has recently switched off, or by extremely obscured AGN. <cit.> found low-luminosity AGN in half of their post-starburst sample. Nevertheless, it should be noted that the authors concluded that the fast outflows are most likely driven by feedback from star formation rather than AGN. Generally, AGN are known to be common among post-starburst galaxies but are not directly linked to quenching starbursts <cit.>.

To summarize the discussion so far: to our knowledge, all AGN-host galactic-wind studies using (near-)UV or optical absorption lines, with the exception of <cit.> mentioned in the introduction, found moderate wind velocities in AGN, similar to those of star-forming galaxies. Attributing the difference with <cit.>'s absorption profiles to emission infill, <cit.> concluded that their finding (namely, that AGN host galaxies at z ∼ 0.2-0.5 do not have significantly faster winds than star-forming galaxies at similar redshifts) was not strongly at odds with results from lower and higher redshifts. Our work and the studies discussed above affirm this conclusion.

§.§ Some previous ionized gas emission-line studies found high wind velocities and some did not

Next, we discuss AGN winds detected in emission lines of ionized gas. Emission lines as wind diagnostics are much more difficult to interpret than absorption lines. For instance, to infer the mean velocity of the wind from emission lines, a detailed understanding of the geometry of the wind, the velocity distribution of the gas, and the dust extinction in the host galaxy is needed. In a spherically symmetric, optically thin outflow, the emission-line profile is symmetric and peaks at the systemic velocity of the galaxy (i.e., zero velocity). In contrast, in absorption lines that are minimally affected by resonant emission, the wind velocity profile is significantly offset from the systemic velocity. Regardless of how the emission-line observations are interpreted, the result that low-luminosity AGN at redshifts z ∼ 0.2-1 do not have significantly faster winds (in absorption) than star-forming galaxies is consistent with the wind speeds inferred from emission lines in AGN of comparable luminosities.

<cit.> have explored the multiphase structure of galactic winds in six local ULIRGs using deep integral-field spectroscopy. Three of the ULIRGs host obscured quasars. Despite its small sample size, this work is unique in studying winds in the same objects in both emission and absorption, and it serves as a benchmark for interpreting the myriad emission-line-only wind studies. Both the neutral and ionized gas of the six ULIRGs were studied by fitting the Na1 absorption line and multiple Gaussian components to strong nebular emission lines ([O1], Hα, and [N2]). In all systems, high-velocity, collimated, multiphase, kiloparsec-scale outflows were reported, and the neutral phase dominates the mass outflow rate. The spatially averaged mean wind velocities were found to be similar (v ∼ 200-400 km s^-1) in AGN and non-AGN, for both cold neutral and warm ionized gas. While the maximum wind velocities reach ∼ 1000 km s^-1 in neutral gas for both AGN and non-AGN, the highest gas velocities (2000-3000 km s^-1) were only observed in ionized gas in the obscured quasars.

Several spatially-resolved spectroscopic studies at both low and high redshifts have shown that broad, ionized emission is common in luminous AGN <cit.>.
For example, <cit.> studied ionized gas kinematics in a representative sample of 89 X-ray AGN using [O3] at z = 1.1-1.7 or Hα emission at z = 0.6-1.1. The authors found high-velocity emission-line features in about half of the targets studied using [O3]. The velocity width containing 80% of the [O3] line flux (W_80) was found to correlate mildly with X-ray luminosity. For a Gaussian velocity distribution, W_80 = 2.56 σ, where σ is the velocity dispersion. <cit.> found W_80 ∼ 1.3-1.6 v_0 for their wind models, where v_0 is the initial velocity of the wind. <cit.> modelled the observed extended emission lines of their quasars as ensembles of narrow-line-emitting clouds embedded in the wind. Their model relates the observed projected surface brightness of the [O3] emission line to the unobserved three-dimensional outflow velocity profile, assuming power-law luminosity density and velocity distributions which depend on the three-dimensional radius vector of the outflow. They consider both a spherically symmetric outflow and a biconical outflow, with or without the effect of dust extinction from the host galaxy. <cit.> found that 70% of higher-luminosity AGN (L_X > 6 × 10^43 erg s^-1) have line widths of W_80 > 600 km s^-1, while only 30% of the lower-luminosity AGN have velocity widths above 600 km s^-1. If we use the trend in <cit.>, the 9 low-luminosity AGN studied in this work would have W_80 ∼ 300 km s^-1 based on their X-ray luminosity. This value is roughly consistent with the velocity estimated using the Fe2 absorption line.

AGN are known to exhibit jet-ISM interactions that accelerate gas to high velocities <cit.>. Based on their current sample, <cit.> could not conclusively determine whether the radio luminosity or the X-ray luminosity is more fundamental in driving the highest-velocity outflows. They found marginal evidence that a higher fraction of the radio-luminous AGN have W_80 > 600 km s^-1 compared to the non-radio-luminous AGN. Thus, high-velocity outflows may be due to small-scale, compact radio jets instead of radiation from the quasar <cit.>. Spatially-resolved studies have observed very broad [O3] emission lines that are co-spatial with kiloparsec-scale jets <cit.>. At z ∼ 0.4, using a large sample from SDSS, <cit.> have shown that the highest velocity outflows are better linked to the mechanical radio luminosity of the AGN than to its radiative luminosity. Alternatively, <cit.> have proposed that radio emission in radio-quiet quasars could be due to relativistic particles accelerated in the shocks initiated by the quasar-driven outflow. The authors also found that the velocity width of [O3] is positively correlated with mid-infrared luminosity, suggesting that outflows are linked to the radiative output of the quasar (i.e., are ultimately radiation-driven).

Furthermore, <cit.> performed a detailed analysis of the extended ionized gas around 31 low-redshift QSOs and found that only 3 QSOs have outflows with velocities greater than 400 km s^-1. In all three cases, they found a radio jet that is most likely driving the outflow, and they argued that jet-cloud interactions are the most likely cause of the disturbed kinematics of these quasars. <cit.> have argued that the disagreements between their work and the aforementioned previous works that claimed high velocity outflows in luminous AGN are likely due to the effects of beam smearing of unresolved emission lines caused by seeing <cit.>.
They reanalyzed the unobscured QSO sample of <cit.> and found that the widths of [O3] lines on kiloparsec scales are significantly narrower after PSF deblending. The estimated kinetic power of the outflow is reduced by two orders of magnitude (< 0.1% of the quasar bolometric luminosity) after the correction. Thus, the feedback efficiency is smaller than required by some numerical simulations of AGN feedback. As the authors pointed out, the majority of previous works have not carefully taken into account the effects of beam smearing. The incidence and energetics of large-scale AGN-driven outflows still remain an unsolved issue, especially in spatially unresolved observations of ionized gas outflows beyond the local universe. §.§ Some previous molecular gas studies found high wind velocities Molecular outflows have been reported in several AGN, both in absorption and in emission <cit.>. <cit.> studied molecular OH 119μm outflows in a sample of 43 z < 0.3 galaxy mergers, which are mostly ULIRGs and QSOs. The OH 119μm feature is observed in emission, absorption, or both, depending on the AGN strength. The OH emission is stronger relative to OH absorption in quasar-dominated systems, and the feature is seen in pure emission in the most luminous quasars. The authors found that the median outflow velocities are typically ∼ 200 km s^-1, but the maximum velocities may reach ∼ 1000 km s^-1 in some objects. Even for the most AGN-dominated systems with pure OH emission, the emission-line widths and shifts are ∼ 200 km s^-1. The authors also reported that the absorption-line centroids are distinctly more blue-shifted among systems with large AGN fractions and luminosities. It is not clear how much of this trend is due to emission infill of the absorption profile. A recent X-ray observation of a mildly relativistic accretion-disk wind in a local Seyfert 1 ULIRG, which also shows a high-velocity molecular OH 119μm outflow, has been hailed as providing a direct connection between large-scale molecular outflows and the small-scale AGN accretion wind in ULIRGs <cit.>. A review of the powerful and highly ionized accretion winds observed in X-ray spectra of luminous AGN can be found in <cit.>. <cit.> have studied CO emission in 19 local ULIRGs and quasar hosts. They found that starburst-dominated galaxies can have outflow rates that are ∼ 2-4 times their star formation rates, and that the presence of an AGN may enhance the outflow rate by a large factor depending on its luminosity. The maximum velocities reach up to ∼ 750 km s^-1. The authors estimated that the outflow kinetic power for galaxies with the most powerful AGN is about 5% of the AGN luminosity, as expected from some numerical models of AGN feedback. In contrast, recent local studies of molecular gas in quenched or quenching post-starbursts surprisingly found that these galaxies have large molecular gas reservoirs comparable to those of star-forming galaxies <cit.>. Therefore, they did not find evidence that the global gas reservoir is expelled by stellar winds or active galactic nuclei feedback. Similarly, at z ∼ 2, <cit.> observed that quasar halos have abundant cool gas, sufficient to fuel the observed SFR for at least 1 Gyr.
These authors note that the current AGN feedback models remove too much gas from galactic halos and, therefore, under-predict the gas observed within quasar halos at z ∼ 2. To summarize, most absorption-line studies found that the wind velocities in AGN are moderate (∼ 200-400 km s^-1) and are similar to velocities in star-formation-driven winds. Most high-velocity AGN winds reported to date are controversial: they can be attributed to observational complications such as emission infill and PSF smearing, they may not be due to radiation from luminous AGN, or they may just be spatially-unresolved, small-scale winds confined to the vicinity of black holes. §.§ The feedback efficiency in the low-luminosity AGN at z ∼ 1 and in other AGN samples The wind kinetic power can be parametrized in terms of the radiative luminosity of an AGN as Ė = ϵ_f L_AGN, where ϵ_f is the fraction of the radiative luminosity transferred to the wind. Popular AGN feedback models invoke a feedback efficiency ϵ_f ∼ 5% to reproduce the properties of massive galaxies <cit.>. Following the thin, partially-filled shell wind model of <cit.>, we estimate Ė from their equation given below, which includes contributions from both the bulk flow and the turbulent energy:

dE/dt = 1.4 × 10^41 erg s^-1 (C_Ω/0.4C_f) (r/10 kpc) (N(H)/10^21 cm^-2) (v_w/200 km s^-1) × [(v_w/200 km s^-1)^2 + 1.5 (b/200 km s^-1)^2].

In the above estimate, we use the wind measurements in Table <ref> and assume a wind radius of r = 3 kpc and a global covering factor C_Ω = 0.3 (which is reasonable for the average opening angle of winds in star-forming galaxies at z ∼ 1, as estimated by <cit.>). To convert the iron column density to a hydrogen column density (N(H), which is a lower limit), we adopt a solar abundance ratio, a dust depletion factor of 0.1, and no ionization correction <cit.>. The sample bolometric luminosity of our AGN is estimated by multiplying the mean X-ray luminosity of our sample by a bolometric correction of 20 <cit.>. With these assumptions, we find a lower-limit feedback efficiency of ϵ_f ≳ 0.02% for our sample of nine narrow-line AGN. Note that the metal abundance ratio, dust depletion factor, and ionization correction are completely unconstrained by the current data. From the mass-metallicity relation at z ∼ 1 <cit.>, the mean oxygen gas abundance, 12 + log(O/H), of our sample ranges between 8.9 – 9.1, and the typical 1σ scatter of the relation is ∼ 0.15 dex. Therefore, the gas-phase metallicity of our sample is not strongly inconsistent with the assumed solar value of 8.7. In the local ISM of the Milky Way, the dust depletion factor for iron is 0.005 – 0.1 <cit.>. The ionization correction for Fe2 is uncertain and there are not many measured values of it. In the Orion Nebula, the ionization correction is ∼ 0.08, and the column densities are N(Fe4) ∼ 0.87 × N(Fe3) and N(Fe2) ∼ 0.16 × N(Fe3) <cit.>. The N(Fe2)/N(Fe3) ratio ranges between 0.04 – 0.35 in 8 Galactic H II regions, including the Orion Nebula <cit.>. Since N(Fe4) is not measured in most of these H II regions, 0.04 – 0.35 is a crude range for the ionization correction for Fe2. Since we do not apply the ionization correction in our lower-limit estimate of the column density, the true column density may be a factor of 3 – 10 higher than our limit, other things being equal. To put the present measurements in a larger context, Figure <ref> shows the AGN bolometric luminosity against the wind kinetic power for our sample and other AGN wind samples in the literature.
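For concreteness, a minimal numerical transcription of the shell-wind scaling relation above is sketched below (our own illustration, not the authors' code). The default r and C_Ω match the fiducial values adopted in the text, while C_f and the example inputs are placeholders.

```python
def wind_power(v_w, b, N_H, r_kpc=3.0, C_omega=0.3, C_f=1.0):
    """Kinetic power (erg/s) of a thin, partially filled shell wind,
    including the bulk-flow and turbulent terms of the relation above.

    v_w, b : wind centroid velocity and Doppler parameter [km/s]
    N_H    : hydrogen column density [cm^-2]
    """
    return (1.4e41 * (C_omega / (0.4 * C_f)) * (r_kpc / 10.0)
            * (N_H / 1e21) * (v_w / 200.0)
            * ((v_w / 200.0) ** 2 + 1.5 * (b / 200.0) ** 2))

# Illustrative call with round numbers (not the measured sample values)
print(wind_power(v_w=150.0, b=140.0, N_H=1e21))
```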
Very heterogeneous data, covering a wide range in redshift, z ∼ 0.1-2, are used in the figure. Our compilation includes AGN that are known to show wind signatures in absorption <cit.>, in ionized gas emission <cit.>, and in molecular gas <cit.>. All the absorption-line and molecular-gas based kinetic wind power measurements are taken directly from values published in the literature, while all the emission-line based wind power measurements, except those taken from <cit.>, are based on our calculations. Where the wind power measurements for the ionized gas are not available, we estimate them following the standard methods and assumptions <cit.>. Using the equation below, the wind powers for all ionized gas measurements are estimated from nebular emission lines assuming a wind radius of 3 kpc, an electron density of 100 cm^-3, and a velocity width of 1.3 times the initial wind velocity, W_80 = 1.3 v_0 (for a spherically symmetric, constant-velocity wind):

dE/dt ∼ (1/2) M_gas v_0^2/τ ∼ (1/2) M_gas v_0^3/r ∼ 6 × 10^44 erg s^-1 (M_gas/2.8 × 10^9 M_⊙) (W_80/1300 km s^-1)^3 (3 kpc/r),

where M_gas/(2.8 × 10^9 M_⊙) = (L_Hβ/10^43 erg s^-1)(n_e/100 cm^-3)^-1, and L_Hβ = 0.1 × L_[O III] × 10/([O III]/Hβ) or L_Hβ = 0.35 × L_Hα × 2.86/(Hα/Hβ). As discussed in <cit.>, a standard method of estimating the wind power for the ionized gas is to use hydrogen recombination lines to estimate the mass of the emitting hydrogen, but the [O3] emission line may be a better probe of the extended emission. When they are available, the [O3] velocity width and luminosity are used to estimate the wind kinetic power, assuming an [O3]/Hβ ratio of 10. Otherwise, hydrogen lines are used. Because dust extinction of the emission lines and the turbulent kinetic energy are not accounted for, the estimated wind kinetic energy for the ionized gas is a lower limit. The outflow rate in molecular gas is estimated assuming a continuously filled spherical wind (Ṁ = 3vM/R, where M is the mass, v the velocity, and R the radius of the wind). This estimate is three times lower if a shell-like geometry is assumed instead <cit.>. The molecular wind kinetic energy is estimated simply as (1/2)Ṁv^2. For 90% of the galaxies, the molecular wind mass is estimated from the CO luminosity, assuming a conversion factor from CO to molecular hydrogen, X_CO, which is about one-fifth of the Milky Way value <cit.>. If the true X_CO is higher than assumed, the current wind kinetic energy values underestimate the true values. For consistency, we adjust the literature wind power measurements for the cool gas to our assumed wind radius of 3 kpc when the wind radius was not previously measured. When the bolometric luminosities are not provided in the previous works, they are estimated from literature X-ray luminosities using a bolometric correction of 20. In summary, the existing data hint that the wind kinetic energy is correlated with the AGN bolometric luminosity. However, better future data, which can characterize well the geometry of the wind, are needed to constrain AGN feedback models. We caution that the systematic uncertainties in ϵ_f are substantial and the values of ϵ_f inferred from Figure <ref> should be considered as lower limits. Because of the uncertain assumptions made and the heterogeneous data used to estimate the wind kinetic power, our aim is rather to qualitatively show that the wind kinetic power increases with AGN bolometric luminosity. This trend is mainly driven by the correlation between the wind velocity and the AGN luminosity.
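A short sketch of the ionized-gas wind power estimate above follows (our own illustration, assuming the fiducial radius, density, and line-ratio conversions stated in the text; the example inputs are purely illustrative).

```python
def ionized_gas_mass_units(L_Hbeta, n_e=100.0):
    """Warm ionized gas mass in units of 2.8e9 M_sun, from the H-beta
    luminosity [erg/s] and electron density [cm^-3]."""
    return (L_Hbeta / 1e43) * (100.0 / n_e)

def ionized_wind_power(L_Hbeta, W80, r_kpc=3.0, n_e=100.0):
    """Wind kinetic power [erg/s] for a spherically symmetric,
    constant-velocity wind with v_0 = W80 / 1.3."""
    m = ionized_gas_mass_units(L_Hbeta, n_e)
    return 6e44 * m * (W80 / 1300.0) ** 3 * (3.0 / r_kpc)

# Example: an [O III] luminosity of 1e43 erg/s converted to H-beta
# assuming [O III]/H-beta = 10, and a velocity width W80 = 600 km/s
L_Hb = 0.1 * 1e43
print(ionized_wind_power(L_Hb, W80=600.0))
```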
We hope Figure <ref> shows how our study broadly fits into the wind characteristics reported in previous studies of AGN, and perhaps explains our finding of low-velocity winds in low-luminosity AGN. § SUMMARY AND CONCLUSION We study winds using the Fe2 λ 2586 absorption line in 12 AGN host galaxies at z ∼ 1. Nine of these galaxies significantly deviate from the relationship between star formation and X-ray luminosity, and one of them has strong Ne5 emission. We find that the probability distribution function (PDF) of the centroid velocity shift in AGN has a median, 16th, and 84th percentiles of -87 km s^-1, -251 km s^-1, and +86 km s^-1, respectively. The PDF of the velocity dispersion in AGN has a median, 84th, and 16th percentiles of 139 km s^-1, 253 km s^-1, and 52 km s^-1, respectively. The centroid velocity and the velocity dispersion are obtained from a two-component (ISM+wind) absorption-line model. The wind velocities in these AGN are significantly lower than their escape velocities. Thus, the bulk of their gas likely remains bound. The equivalent width PDF of the outflow in AGN has a median, 84th, and 16th percentiles of 0.4Å, 0.8Å, and 0.1Å, respectively. There is a strong ISM component in Fe2 λ 2586 absorption (with a PDF whose median, 84th, and 16th percentiles are 1.2Å, 1.5Å, and 0.8Å), implying that a substantial amount of cold gas is present in the AGN host galaxies. For comparison, star-forming and X-ray undetected galaxies at a similar redshift, matched roughly in stellar mass and galaxy inclination, have a centroid velocity PDF with a median, 16th, and 84th percentiles of -74 km s^-1, -258 km s^-1, and +90 km s^-1, and a velocity dispersion PDF with a median, 84th, and 16th percentiles of 150 km s^-1, 259 km s^-1, and 57 km s^-1, respectively. The equivalent width PDF of the outflow in the comparison sample has a median, 84th, and 16th percentiles of 0.5Å, 1.2Å, and 0.2Å. We have reanalyzed the sample of 6 low-luminosity AGN at z ∼ 0.5 from <cit.>. Our result is consistent with the wind velocities previously reported in these and other lower-redshift, low-luminosity AGN. We conclude that wind-mode AGN feedback is insignificant in low-luminosity AGN hosts. Future large-sample-size and high signal-to-noise studies of winds in AGN and in a well-matched control sample of non-AGN are needed to significantly advance our knowledge beyond existing small-sample absorption-line studies, and to enable detailed modeling of winds that will potentially uncover subtle differences between the winds in AGN and those in their control sample. We are very thankful to the anonymous referee for his/her valuable comments and suggestions that have significantly improved the clarity and content of the paper. We are also grateful to Connie Rockosi and Evan Kirby for doing some of the observations to gather the data used in this work. We thank Dale Kocevski for providing us his X-ray catalogs. This work has made use of the Rainbow Cosmological Surveys Database, which is operated by the Universidad Complutense de Madrid (UCM), partnered with the University of California Observatories at Santa Cruz (UCO/Lick, UCSC). We thank Guillermo Barro for his invaluable help with the Rainbow database. H. Yesuf would like to acknowledge support from NSF grants AST-0808133 and AST-1615730 and STScI grant HST-AR 12822.03-A. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration.
The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. § APPENDIX In this section we give ancillary information to support the results presented in the main sections of the paper. §.§ NUV composite spectra of AGN and the comparison sample Similar to Figure <ref>, Figure <ref> shows the near-UV composite spectra around Fe2 λ 2586 for the AGN subsamples (N=12 or N=9) and the comparison sample of X-ray undetected galaxies at z ∼ 1. §.§ Re-analysis with simple errors of inverse-variance weighting (no bootstrapping) The errors of each individual spectrum are output by the DEEP2 spec1d pipeline. If σ_ij is the error of the photon count, C_i(λ_j), at wavelength pixel λ_j of a galaxy i, then, averaging over all N galaxies, the inverse-variance-weighted mean count at pixel λ_j is Ĉ(λ_j) = [∑_i=1^N C_i(λ_j)/σ_ij^2] / [∑_i=1^N 1/σ_ij^2]. The standard error of the mean count at pixel λ_j is σ̂_j = √(1/∑_i=1^N 1/σ_ij^2). (A minimal numerical transcription of these two formulae is sketched after the table below.) The inverse-variance weighting gives lower weights to galaxies with low signal-to-noise spectra in the sample and may not capture well the dispersion intrinsic to the sample. On the other hand, it has the advantage of down-weighting poorly measured data. In the case where the intrinsic sample dispersion is negligible, it may be preferred over the bootstrapping scheme. In the latter case, poor data may be resampled instead of good data, thereby increasing the inferred dispersion. This section presents the results of the reanalysis using the σ̂_j errors of the inverse-variance weighting, as shown in Table <ref>, Table <ref>, and Figures <ref> & <ref>. In the latter figure, we plot the posterior PDFs of all six wind model parameters for the AGN and the comparison samples. Note that some of the parameters strongly correlate with each other and their PDFs are very asymmetric (non-Gaussian). For example, the covering fraction and the column density of the wind are degenerate with each other. Thus, the column density estimates are uncertain by more than a factor of ten.

Table: Model parameter fits to Fe2 λ2586 profiles using errors of inverse-variance weighting.

Model param. | AGN z∼1 (N=12) | NLAGN z∼1 (N=9) | NLAGN z∼1 (N=7) | SF z∼1 (N=12) | AGN z∼0.5 (N=6)^a | AGN z∼0.5-1.5 (N=13)^b | AGN z∼0.5-1.5 (N=18)^c,d
v_w (km s^-1) | -137^+76_-87 | -152^+76_-76 | -194^+68_-41 | -135^+75_-133 | -169^+78_-92 | -139^+48_-87 | -132^+51_-96
b_w (km s^-1) | 150^+91_-79 | 115^+71_-60 | 78^+60_-40 | 198^+90_-102 | 112^+67_-54 | 116^+67_-53 | 135^+72_-61
C_w | 0.3^+0.4_-0.2 | 0.4^+0.3_-0.1 | 0.4^+0.3_-0.1 | 0.4^+0.4_-0.2 | 0.4^+0.2_-0.1 | 0.4^+0.3_-0.1 | 0.4^+0.3_-0.1
log N_w (cm^-2) | 14.8^+1.3_-0.5 | 15.1^+1.3_-0.7 | 15.2^+1.2_-0.7 | 14.9^+1.1_-0.4 | 15.2^+1.3_-0.6 | 15.4^+1.2_-0.7 | 15.2^+1.3_-0.6
b_g (km s^-1) | 243^+45_-34 | 224^+48_-38 | 214^+56_-48 | 154^+52_-48 | 98^+64_-55 | 181^+67_-46 | 179^+59_-45
log N_g (cm^-2) | 14.5^+0.1_-0.2 | 14.5^+0.1_-0.3 | 14.5^+0.1_-0.2 | 14.6^+0.1_-0.2 | 14.6^+0.4_-0.3 | 14.5^+0.1_-0.3 | 14.5^+0.1_-0.3

^a Reanalysis of <cit.> data. ^b A joint reanalysis of <cit.> data with our 7 narrow-line AGN with highly reliable AGN identification. ^c A joint reanalysis of <cit.> data with all AGN, including 3 broad-line AGN and 2 with less reliable AGN identification. ^d N denotes the number of galaxies in the stacked spectra. The median and the 84th and 16th percentiles of the parameter PDFs are given in the table. In our notation X^+Y_-Z, X is the median, X+Y is the 84th percentile, and X-Z is the 16th percentile.
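The promised sketch of the inverse-variance stacking equations above follows (our own minimal transcription, not the authors' pipeline; the input arrays are synthetic placeholders).

```python
import numpy as np

def ivw_stack(counts, errors):
    """Inverse-variance-weighted mean spectrum and its standard error.

    counts, errors : arrays of shape (N_galaxies, N_pixels) holding the
    photon counts C_i(lambda_j) and their errors sigma_ij.
    """
    w = 1.0 / errors**2                                  # weights 1/sigma_ij^2
    mean = np.sum(w * counts, axis=0) / np.sum(w, axis=0)
    sigma_hat = np.sqrt(1.0 / np.sum(w, axis=0))         # standard error
    return mean, sigma_hat

# Toy usage with synthetic data for 12 galaxies and 100 pixels
rng = np.random.default_rng(0)
errs = rng.uniform(0.5, 2.0, size=(12, 100))
data = 1.0 + errs * rng.standard_normal((12, 100))
stack, err = ivw_stack(data, errs)
```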
Table: Wind equivalent width (EW) and maximum wind velocity derived from Fe2 λ2586 model profiles.

Derived quantity | AGN z∼1 (N=12) | NLAGN z∼1 (N=9) | NLAGN z∼1 (N=7) | SF z∼1 (N=12) | AGN z∼0.5 (N=6) | AGN z∼0.5-1.5 (N=13) | AGN z∼0.5-1.5 (N=18)
Wind EW (Å) | 0.5^+0.5_-0.2 | 0.6^+0.7_-0.1 | 0.7^+0.4_-0.2 | 0.8^+0.6_-0.3 | 0.8^+0.5_-0.3 | 1.1^+0.5_-0.4 | 0.9^+0.5_-0.3
ISM EW (Å) | 1.2^+0.2_-0.4 | 1.3^+0.3_-0.6 | 1.0^+0.2_-0.4 | 1.3^+0.3_-0.5 | 1.2^+0.2_-0.4 | 1.1^+0.3_-0.5 | 1.1^+0.3_-0.5
Total EW (Å) | 1.6^+0.1_-0.1 | 1.8^+0.1_-0.1 | 1.7^+0.1_-0.1 | 1.9^+0.1_-0.1 | 1.9^+0.1_-0.1 | 1.9^+0.1_-0.1 | 1.9^+0.1_-0.1
Max. velocity (km s^-1) | -341^+87_-104 | -318^+66_-77 | -303^+48_-57 | -432^+122_-116 | -263^+75_-68 | -237^+54_-51 | -250^+70_-64

The median and the 84th and 16th percentiles of the PDFs of the derived quantities from the wind model are given in the table. §.§ Fitting the O2 profile to estimate the escape velocity Figure <ref>a shows the result of fitting a model of two Gaussians plus a linear continuum to the O2 λ3726.03, λ3728.82 doublet in the mean AGN spectrum, using Levenberg-Marquardt least-squares minimization. In the model, the two Gaussians have the same width, and the centroid shifts and the amplitudes of the doublet are also free parameters of the fit. (An illustrative sketch of such a fit is given at the end of this appendix.) The two-Gaussian model is convolved to match the DEIMOS instrumental resolution and rebinned to match the observed data. Figure <ref>b shows the corresponding fit for the star-forming comparison sample. The velocity dispersions of the fits are 131 ± 18 km s^-1 for SF galaxies and 122 ± 4 km s^-1 for AGN. The errors of the velocity dispersions are estimated by repeating the least-squares fitting procedure for all 1000 bootstrap spectra around O2 (similar to what is shown in Figure <ref>) and then taking the standard deviation of the 1000 velocity dispersions. The centroid shifts of the doublet are consistent with no shift from the rest wavelengths of the doublet. §.§ The effect of the ISM covering fraction In the fiducial wind model in the main text of the paper, we assumed that the ISM fully covers the stellar continuum emission. Tables <ref> & <ref> show the results of the analyses using a covering fraction of 50%. Our main conclusion does not depend on the assumption of the covering fraction.

Table: Model parameter fits to Fe2 λ2586 profiles after adopting a covering fraction of 50% for the ISM component of the wind model. The errors of the composite spectra are estimated by the bootstrap scheme.

Wind model param. | All AGN z∼1 (N=12) | NL AGN z∼1 (N=9) | NL AGN z∼1 (N=7) | SF z∼1 (N=12) | AGN z∼0.5 (N=6)
v_w (km s^-1) | -77^+139_-168 | -99^+116_-142 | -127^+114_-112 | -123^+230_-203 | -93^+240_-204
b_w (km s^-1) | 206^+144_-120 | 174^+142_-96 | 152^+143_-90 | 227^+148_-137 | 190^+167_-128
C_w | 0.2^+0.4_-0.2 | 0.3^+0.3_-0.2 | 0.4^+0.4_-0.2 | 0.2^+0.4_-0.2 | 0.3^+0.4_-0.2
log N_w (cm^-2) | 14.9^+1.6_-0.6 | 14.8^+1.5_-0.6 | 15.0^+1.4_-0.6 | 14.9^+1.6_-0.6 | 15.0^+1.5_-0.7
b_g (km s^-1) | 218^+88_-76 | 200^+81_-69 | 210^+106_-106 | 101^+73_-36 | 128^+157_-63
log N_g (cm^-2) | 14.9^+0.2_-0.3 | 15.0^+0.2_-0.4 | 15.0^+0.2_-0.4 | 15.5^+1.1_-0.4 | 15.3^+1.2_-0.6

The median and the 84th and 16th percentile deviations of the parameter PDFs from the median are given in the table.
Table: Model parameter fits to Fe2 λ2586 profiles after adopting a covering fraction of 50% for the ISM component of the wind model (no bootstrapping).

Wind model param. | All AGN z∼1 (N=12) | NL AGN z∼1 (N=9) | NL AGN z∼1 (N=7) | SF z∼1 (N=12) | AGN z∼0.5 (N=6)
v_w (km s^-1) | -121^+74_-102 | -148^+86_-86 | -190^+80_-41 | -110^+93_-185 | -169^+78_-74
b_w (km s^-1) | 153^+84_-84 | 114^+74_-63 | 79^+62_-44 | 197^+157_-131 | 120^+72_-56
C_w | 0.3^+0.4_-0.2 | 0.3^+0.3_-0.1 | 0.4^+0.3_-0.1 | 0.3^+0.4_-0.2 | 0.4^+0.3_-0.2
log N_w (cm^-2) | 14.8^+1.2_-0.5 | 15.2^+1.3_-0.8 | 15.2^+1.4_-0.8 | 15.0^+1.4_-0.6 | 15.2^+1.4_-0.7
b_g (km s^-1) | 231^+49_-66 | 204^+51_-42 | 204^+60_-47 | 165^+113_-71 | 78^+48_-23
log N_g (cm^-2) | 14.9^+0.1_-0.3 | 15.0^+0.1_-0.4 | 14.9^+0.1_-0.5 | 14.6^+0.1_-0.2 | 15.7^+1.1_-0.8

The median and the 84th and 16th percentile deviations of the parameter PDFs from the median are given in the table.
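The illustrative sketch of the O2 doublet fit described earlier in this appendix follows (our own toy version, in Python with scipy assumed available; it omits the convolution with the DEIMOS instrumental resolution and the rebinning applied to the real data, and the synthetic spectrum is a placeholder for the stacked spectrum).

```python
import numpy as np
from scipy.optimize import curve_fit

L1, L2 = 3726.03, 3728.82   # [O II] doublet rest wavelengths (Angstrom)
C = 299792.458              # speed of light (km/s)

def doublet(lam, amp1, amp2, dv, sigma_v, c0, c1):
    """Two Gaussians with a common velocity width plus a linear continuum.
    dv is a common centroid shift and sigma_v the dispersion, both in km/s."""
    model = c0 + c1 * lam
    for amp, l0 in ((amp1, L1), (amp2, L2)):
        mu = l0 * (1.0 + dv / C)
        sig = l0 * sigma_v / C
        model += amp * np.exp(-0.5 * ((lam - mu) / sig) ** 2)
    return model

# Toy usage on a synthetic spectrum with known dispersion
lam = np.linspace(3700.0, 3755.0, 400)
truth = doublet(lam, 1.0, 1.4, 0.0, 125.0, 0.1, 0.0)
rng = np.random.default_rng(3)
flux = truth + 0.02 * rng.standard_normal(lam.size)
popt, pcov = curve_fit(doublet, lam, flux,
                       p0=(1.0, 1.0, 0.0, 100.0, 0.0, 0.0))
print(popt[3])  # recovered velocity dispersion, ~125 km/s
```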
http://arxiv.org/abs/1704.08348v1
{ "authors": [ "Hassen M. Yesuf", "David C. Koo", "S. M. Faber", "J. Xavier Prochaska", "Yicheng Guo", "F. S. Liu", "Emily C. Cunningham", "Alison L. Coil", "Puragra Guhathakurta" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170426205635", "title": "No evidence for feedback: Unexceptional Low-ionization winds in Host galaxies of Low Luminosity Active Galactic Nuclei at Redshift z ~1" }
Study of N = 16 shell closure within RMF+BCS approach Presented at the Zakopane Conference on Nuclear Physics "Extremes of the Nuclear Landscape", Zakopane, Poland, August 28 – September 4, 2016 G. Saxena, Department of Physics, Govt. Women Engineering College, Ajmer-305002, India; M. Kaushik, Department of Physics, Shankara Institute of Technology, Jaipur-302028, India. December 30, 2023 ==================================================================================================================================================================================================== We have employed the RMF+BCS (relativistic mean-field plus BCS) approach to study the behaviour of the N = 16 shell closure with the help of the ground-state properties of even-even nuclei. Our present investigations include single-particle energies, deformations, separation energies, as well as pairing energies, etc. In accord with recent experiments showing neutron magicity at N = 16 for O isotopes, our results indicate a strong shell closure at N = 16 in ^22C and ^24O. A large gap is found between the neutron 2s_1/2 and 1d_3/2 states for ^22C and ^24O. These results are also supported by a sharp increase in the two-neutron shell gap, a vanishing pairing energy contribution, and excellent agreement with the available experimental data. Our calculations for the N = 16 isotones are, however, found at variance with experiment for higher-Z isotones like ^36Ca, where experiments show a high-lying first excited 2^+ state indicating a shell closure at N = 16. 21.10.-k, 21.10.Ft, 21.10.Dr, 21.10.Gv, 21.10.-n, 21.60.Jz § INTRODUCTION The emergence of new shell closures and the disappearance of conventional shell closures throughout the periodic chart have motivated various theoretical and experimental efforts to understand the behaviour of nuclei as a function of the neutron-to-proton ratio. It has also been established that the shell structure influences the locations of the neutron and proton drip lines and the stability of matter. The appearance of the new magic number N = 16 in ^24O <cit.> and the emergence of an N = 32 sub-shell closure in ^52Ca <cit.> are examples of such changes in the shell structure. In this paper we investigate the N = 16 shell closure using the relativistic mean-field plus BCS approach <cit.>. § RELATIVISTIC MEAN-FIELD THEORY Our RMF calculations have been carried out using the model Lagrangian density with nonlinear terms both for the σ and ω mesons <cit.>:

L = ψ̅[iγ^μ∂_μ − M]ψ + (1/2)∂_μσ∂^μσ − (1/2)m_σ^2σ^2 − (1/3)g_2σ^3 − (1/4)g_3σ^4 − g_σψ̅σψ − (1/4)H_μνH^μν + (1/2)m_ω^2ω_μω^μ + (1/4)c_3(ω_μω^μ)^2 − g_ωψ̅γ^μψω_μ − (1/4)G_μν^aG^aμν + (1/2)m_ρ^2ρ_μ^aρ^aμ − g_ρψ̅γ_μτ^aψρ^μa − (1/4)F_μνF^μν − eψ̅γ_μ[(1 − τ_3)/2]A^μψ,

where the field tensors H, G and F for the vector fields are defined by equation (<ref>):

H_μν = ∂_μω_ν − ∂_νω_μ, G_μν^a = ∂_μρ_ν^a − ∂_νρ_μ^a − 2g_ρ ϵ^abcρ_μ^bρ_ν^c, F_μν = ∂_μA_ν − ∂_νA_μ,

and the other symbols have their usual meaning. Based on the single-particle spectrum calculated by the RMF described above, we perform a state-dependent BCS calculation, and the continuum is replaced by a set of positive-energy states generated by enclosing the nucleus in a spherical box. For further details of these formulations we refer the reader to Ref. <cit.>. § RESULTS AND DISCUSSION The single-particle energies of the N = 16 isotonic chain, calculated using RMF with the TMA force parameter <cit.>, are shown in Fig. 1. A large variation in the energies of the 2s_1/2, 1d_5/2 and 1d_3/2 states is clearly seen moving from the proton-rich to the neutron-rich side (right to left).
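To make the diagnostics used in this section concrete, the sketch below (our own illustration) computes the single-particle gap between the 2s_1/2 and 1d_3/2 states, as well as the two-neutron shell gap S_2n(N, Z) − S_2n(N+2, Z) discussed in the following paragraphs. All numerical values are placeholders: the single-particle energies stand in for the RMF spectrum, and the binding energies are only approximate values for the oxygen chain.

```python
def s2n(B, N, Z):
    """Two-neutron separation energy S_2n(N, Z) = B(N, Z) - B(N-2, Z),
    from a table B of total binding energies (MeV), keyed by (N, Z)."""
    return B[(N, Z)] - B[(N - 2, Z)]

def shell_gap_2n(B, N, Z):
    """Two-neutron shell gap S_2n(N, Z) - S_2n(N+2, Z); it rises
    sharply at a magic neutron number."""
    return s2n(B, N, Z) - s2n(B, N + 2, Z)

# Single-particle gap between the neutron 2s1/2 and 1d3/2 states (MeV);
# placeholder values chosen to mimic the ~3.3 MeV gap found for 24O
sp = {"2s1/2": -4.5, "1d3/2": -1.2}
print(sp["1d3/2"] - sp["2s1/2"])  # ~3.3

# Approximate experimental binding energies for the O chain (MeV)
B = {(14, 8): 162.0, (16, 8): 169.0, (18, 8): 168.9}
print(shell_gap_2n(B, 16, 8))     # large (~7 MeV): the N = 16 closure
```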
It is evident from Fig. 1 that, moving towards the proton-deficient side, the 2s_1/2 state develops a substantial gap with the 1d_3/2 state, especially for Z = 6 and Z = 8, resulting in the development of the new shell closure N = 16 in ^22C and ^24O. This gap is around 3.5 MeV and 3.3 MeV for ^22C and ^24O, respectively, as can be seen in the figure. This kind of reorganization is also observed in the calculations with the other parameter sets NL3 and PK1 (not shown here). It is gratifying to note here that our results show the doubly magic character of ^24O observed in recent experiments <cit.>, and in addition the same shell closure N = 16 is also observed in ^22C. On the other side, for larger Z, the 2s_1/2 and 1d_3/2 states are found with a very small gap, giving no sign of an N = 16 shell closure. This result is not in accord with experimental investigations showing a shell closure at N = 16, due to a high-lying 2^+ state, for ^36Ca <cit.> as well as for ^30Si and ^32S <cit.>. Further investigations are required for a consistent description of the isotonic chain in terms of parameters, pairing, and isospin. To gain more insight, we have plotted the two-neutron shell gap (S_2n(N, Z) - S_2n(N+2, Z)) in the lower panel of Fig. 2 for the C and O isotopes, calculated by the RMF+BCS approach using the TMA force parameter <cit.>, along with the experimental shell gaps for the O isotopes <cit.>. One can observe an abrupt increase in the shell gap at the conventional shell closure N = 8. In the same way, another rise in the two-neutron shell gap can be seen moving from N = 14 to N = 16 for both the C and O isotopes. This rise, which is in accord with experiments <cit.>, supports the occurrence of a new spherical shell closure at N = 16 for both ^22C and ^24O. Further, in the upper panel of Fig. 2, we show the pairing energy contribution for both the C and O isotopes. For doubly magic nuclei the pairing energy vanishes, and indeed it vanishes for ^12C, ^14C, ^22C and ^14O, ^16O, ^24O, i.e., for N = 6, 8 and 16, respectively. The results in the upper panel of Fig. 2 again reinforce the shell closure at N = 16 for ^22C and ^24O and the general validity of the RMF approach. § ACKNOWLEDGEMENTS The authors are grateful to Prof. H. L. Yadav, BHU, India, for his kind guidance and support. One of the authors (G. Saxena) gratefully acknowledges the support provided by SERB (DST), Govt. of India under the young scientist project YSS/2015/000952 and an International Travel Grant.

[robert] R. V. F. Janssens, Nature 459, 1069 (2009).
[kanungo] R. Kanungo et al., Phys. Rev. Lett. 102, 152501 (2009).
[wien] F. Wienholtz et al., Nature 498, 346 (2013).
[saxena] G. Saxena et al., Can. J. Phys. 92, 253 (2014).
[singh] D. Singh et al., Int. J. Mod. Phys. E 21, 1250076 (2012).
[suga] Y. Sugahara et al., Nucl. Phys. A 579, 557 (1994).
[door] P. Doornenbal et al., Phys. Lett. B 647, 237 (2007).
[wang] M. Wang et al., Chin. Phys. C 36, 1603 (2012).
[nndc] http://www.nndc.bnl.gov/
http://arxiv.org/abs/1704.08422v1
{ "authors": [ "G. Saxena", "M. Kaushik" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170427033832", "title": "Study of N = 16 shell closure within RMF+BCS approach" }
Department of Mathematics, Brunel University London, Uxbridge UB8 3PH, United Kingdom Institut de Physique de Nice, Université Côte d'Azur, CNRS, 06100 Nice, France In various situations where wave transport is preeminent, like in wireless communication, a strong established transmission is present in a complex scattering environment. We develop a novel approach to describe the emerging fluctuations, which combines a transmitting channel and a chaotic background in a unified effective Hamiltonian. Modeling such a background by random matrix theory, we derive exact non-perturbative results for both transmission and reflection distributions at arbitrary absorption that is typically present in real systems. Remarkably, in such a complex scattering situation, the transport is governed by only two parameters: an absorption rate and the ratio of the so-called spreading width to the natural width of the transmission line. In particular, we find that the established transmission disappears sharply when this ratio exceeds unity. The approach exemplifies the role of the chaotic background in dephasing the deterministic scattering. 05.45.Mt, 03.65.Nk, 05.60.Gg, 24.60.-k Fluctuations in an established transmission in the presence of a complex environment Fabrice Mortessagne December 30, 2023 ==================================================================================== § INTRODUCTION In many applications, ranging from electronic mesoscopic or quantum devices <cit.> to telecommunication or wireless communication <cit.>, transmission and transport are the main focus of studies. Here, either the system or its excitation is designed to give a high transmission at the working energy (or frequency) E_0. To guarantee the functionality of such devices, fluctuations in transmission induced by a complex environment are of crucial interest. Such variations might be introduced by uncertainties of the production process of electronic devices or by a changing real-life environment, like in wireless communication. In the latter case, e.g., one is interested in a stable communication that has strong transmission, guaranteeing large signal-to-noise ratios as well as high data transfer rates <cit.>. This is often implemented via multiple-input multiple-output (MIMO) systems <cit.>, which control the excitations, thus giving rise to a special basis where an established transmission is induced at E_0. In electronic devices like quantum dots or wells, a similar established transmission is built in through the design of the system and the placement of the leads <cit.>. The analysis of complex quantum or wave systems often relies on predictions obtained for fully chaotic dynamics <cit.>. Random matrix theory (RMT) has proved to be extremely successful in describing universal wave phenomena in such systems <cit.>. The canonical examples are spectral and wave function statistics, including their many experimental verifications <cit.>. When combined with the resonance scattering formalism <cit.>, RMT offers a powerful approach <cit.> to describe universal statistical fluctuations in scattering; see <cit.> for recent reviews. The approach is also flexible in incorporating real-world effects, like finite absorption, providing the non-perturbative theory <cit.> to account for the statistical properties of complex impedances <cit.> and of transmission and reflection coefficients <cit.> observed in various microwave cavity experiments <cit.>.
From the theoretical side, non-universal aspects related to a deterministic part in scattering are usually removed from the very beginning by means of a certain procedure <cit.>, yielding a new scattering matrix which becomes diagonal on average <cit.>. This, however, cannot be applied in the present case, where fluctuations in a transmitting channel, characterised by an essentially non-diagonal deterministic S matrix, are the main point of interest. In this work, we propose a non-perturbative approach to quantify fluctuations in an established transmission mediated by a single energy level that is coupled to a complex environment modeled by RMT. The model in its original formulation goes back to nuclear physics <cit.>, giving rise to the well-known formalism of the strength function that has a rich history of various applications <cit.>. Nevertheless, a complete characterisation of fluctuations in the transmission for such a model, in terms of its distribution function, has not been reported in the literature so far and will be presented below. In the next section <ref>, we introduce the model in detail; see Fig. <ref> for an illustration. Section <ref> provides the derivation of the exact RMT expressions for the transmission distribution, which is the main goal of this paper, first for the ideal case of zero absorption and then in the general case of arbitrary absorption. To complete the description of scattering, we address the distribution of reflections in Sec. <ref>, and then summarize with the conclusion and outlook. Further details on numerical implementations, as well as an extension to incorporate losses on the single level responsible for the established transport, are given in two appendices. In all cases, a good agreement between the exact analytical results and numerical simulations with random matrices is found. § SCATTERING SETUP The scattering problem in question is schematically represented by Fig. <ref>. We consider a two-port setup, which is typical for many experimental realisations, and assume that an established transmission between two channels occurs through a single resonance characterised by an energy E_0 and a width Γ. This resonance state, which is often referred to as a doorway state <cit.>, is coupled to a background Hamiltonian that represents the influence of a complex environment. Due to the coupling to the surrounding complicated states, the transmission channel spreads over the background with a rate given by the so-called spreading width Γ^↓. The competition between the two decay mechanisms is naturally controlled by the ratio η = Γ^↓/Γ of the spreading width to the escape width. We restrict our consideration to systems invariant under time-reversal, which is the case most relevant experimentally. Following <cit.>, we can then model the chaotic background by a random matrix drawn from the Gaussian orthogonal ensemble (GOE). Additionally, we assume that the background states may have a uniform broadening Γ_abs to account for possible homogeneous absorption in the environment. The exact RMT results for the distribution functions derived below are non-perturbative and valid at any η and arbitrary absorption.
§.§ Effective Hamiltonian approach The scattering approach based on the effective non-Hermitian Hamiltonian <cit.> is well adapted to treat both transport and spectral characteristics on equal footing. Neglecting global phases due to potential scattering, the scattering matrix S can be written as follows:

S(E) = 1 − i A^T (E − H_eff)^-1 A,

where A is the energy-independent matrix of the coupling amplitudes between the channel and internal states, and H_eff = H − (i/2)AA^T defines the effective Hamiltonian of the open system. The Hermitian part H corresponds to the Hamiltonian of the closed system, whereas the anti-Hermitian part accounts for the finite lifetimes of resonances (eigenvalues of H_eff). The factorised structure of the latter ensures the unitarity of S (at real scattering energy E). For systems invariant under time-reversal, H and A can be chosen as real; thus S is a symmetric matrix. We begin with quantifying a stable transmission between two channels in a "clean" system and consider a single level without any chaotic background. This amounts to setting H = E_0 and A = (a b) above, with two real parameters a and b determining the strength of coupling to the channels. The S matrix elements are then given by a multichannel Breit-Wigner formula <cit.>:

S(E) = 1 − i/[E − E_0 + (i/2)Γ] ([ a^2 ab; ab b^2 ]),

where the width Γ is given by the sum Γ = a^2 + b^2 of the partial decay widths. The peak transmission is achieved at the scattering energy E = E_0. The S matrix evaluated at this point can be parameterised as follows:

S_0 = ([ -r_0 t_0; t_0 r_0 ]),

with the reflection and transmission amplitudes being r_0 = (a^2 − b^2)/Γ and t_0 = −2ab/Γ. In view of Eq. (<ref>), they satisfy the flux conservation r_0^2 + t_0^2 = 1. Thus, we can express the scattering observables in terms of the experimentally measurable quantities: E_0, Γ, and the transmission coefficient of the simple mode, T_0 ≡ t_0^2 = 1 − r_0^2. In order to incorporate a complex environment acting on the transmission state, we follow the spreading width model <cit.> and represent the Hamiltonian as follows:

H = ([ E_0 V⃗^T; V⃗ H_b ]).

Here, H_b stands for the background Hamiltonian modelled by a random GOE matrix of size N, whereas the vector V⃗^T = (V_1, …, V_N) is responsible for the coupling to the background. The elements of V⃗ can be chosen as real fixed or random mutually independent Gaussian variables with zero mean and a given second moment ⟨V^2⟩. By virtue of the invariance of the GOE with respect to basis rotations, the two choices become equivalent in the RMT limit of N≫1, with the obvious correspondence V^2 = (1/N)V⃗^2 = ⟨V^2⟩. For the sake of simplicity, we will assume V⃗ fixed in the following. Neglecting a direct coupling of the GOE states to the channels, the coupling matrix A takes the special form A^T = ([ a 0 ⋯ 0; b 0 ⋯ 0 ]). As a result, one can easily find using Schur's complement that the S matrix retains the same structure as in Eq. (<ref>), where the following substitution is to be made:

1/[E − E_0 + (i/2)Γ] → 1/[E − E_0 + (i/2)Γ − g(E)],

and the scalar function g(E) is defined by g(E) = V⃗^T (E − H_b)^-1 V⃗. This is the so-called strength function <cit.>. By construction, it has the meaning of the local Green's function of the complex background <cit.>, characterising its spectral properties. When averaged over this fine energy structure, the scattering amplitudes acquire an extra damping Γ^↓ = 2π V^2/Δ in addition to Γ. The spreading width (<ref>) is simply Fermi's golden rule expressing the rate of decay into the "sea" of background states, the density of which is determined by the mean level spacing Δ <cit.>.
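To illustrate the single-level formulae concretely, here is a minimal Python sketch (our own illustration, not the simulation code behind the figures) that builds the two-channel Breit-Wigner S matrix and verifies the parameterisation of S_0 and its unitarity.

```python
import numpy as np

def s_matrix(E, E0, a, b):
    """Two-channel Breit-Wigner S matrix of the single-level model,
    S(E) = 1 - i A^T (E - H_eff)^{-1} A with H = E0 and A = (a, b)."""
    gamma = a**2 + b**2                      # total width Gamma = a^2 + b^2
    AAt = np.array([[a * a, a * b],
                    [a * b, b * b]])
    return np.eye(2) - 1j * AAt / (E - E0 + 0.5j * gamma)

# At resonance, S reduces to S_0 with r_0 = (a^2 - b^2)/Gamma, t_0 = -2ab/Gamma
a, b, E0 = 1.0, 2.0, 0.0
S0 = s_matrix(E0, E0, a, b)
print(S0.real)                                    # [[-r0, t0], [t0, r0]]
print(np.allclose(S0.conj().T @ S0, np.eye(2)))   # unitarity: True
```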
§.§ S matrix fluctuations We are interested in fluctuations in scattering at the resonance energy E_0. The S matrix at this point can be represented in the following convenient form:

S ≡ S(E_0) = 1 − (1 + iηK)^-1 (1 − S_0),

where η is given by Eq. (<ref>) and we have introduced

K ≡ (2/Γ^↓) g(E_0) = [Δ/(π V^2)] V⃗^T (E_0 − H_b)^-1 V⃗.

This quantity plays the role of the Wigner reaction matrix associated with scattering on the background [cf. Eq. (<ref>) below]. Without loss of generality, we may let E_0 be the centre of the semicircle law determining the mean density of the background states. With the definition Δ^-1 = −(1/π) Im⟨tr(E_0 + i0 − H_b)^-1⟩, one readily finds the average value ⟨K⟩ = −i, resulting in

⟨S⟩ = [η/(1+η)] 1 + [1/(1+η)] S_0

for the average S matrix. This expression shows that η controls the weight between the equilibrated and deterministic parts in scattering, which are given by the first and second terms in Eq. (<ref>), respectively. The average S matrix is clearly non-diagonal because of the channel mixing induced by S_0. Usually, the starting point of RMT applications to scattering <cit.> consists in eliminating such direct processes by means of a special similarity transform <cit.>. In the present case, however, this would remove the effect we are after. For similar reasons, one cannot use recent exact results <cit.> for the distribution of off-diagonal S matrix elements (derived assuming a diagonal ⟨S⟩). In contrast, the obtained representation (<ref>) enables us to solve the problem in its full generality by applying the non-perturbative theory for the local Green's function, K, developed by Fyodorov, Sommers and one of the present authors in <cit.>. §.§ Background as a source of dephasing It is instructive to give a physical interpretation to the above results in terms of the interference between the two scattering phases, the constant one due to the direct transmission and the random one induced by the chaotic background. The deterministic part of the scattering matrix, S_0, can be brought to the diagonal form O_φ^T S_0 O_φ = diag(−1, 1) by an orthogonal matrix O_φ that corresponds to a rotation by the angle φ = arctan[t_0/(1 + r_0)]. This angle expresses the degree of channel non-orthogonality due to the non-diagonal A^TA. By construction, the same transformation diagonalizes the full S matrix (<ref>), yielding

S = O_φ diag(−S_bg, 1) O_φ^T,

where S_bg stands for the background contribution

S_bg = (1 − iηK)/(1 + iηK)

to the full scattering process. Expression (<ref>) is a usual form for the elastic (single-channel) scattering in open chaotic systems <cit.>, with η now playing the role of a degree of system openness. The resulting scattering pattern is therefore due to the interference between the deterministic phase φ and the random phase θ = arg(S_bg), the distribution of which is well known <cit.>. The model exemplifies the chaotic background as a natural source of dephasing in scattering processes (see also the relevant discussion in <cit.>). In contrast to two other dephasing models <cit.>, our formulation is very flexible in accommodating physically relevant properties of complex environments. In particular, homogeneous losses can be easily taken into account by a uniform broadening Γ_abs of the background states. Operationally, such a damping is equivalent to the purely imaginary shift E → E + (i/2)Γ_abs in the Green's function (<ref>) <cit.>. As a result, the latter becomes complex, K = u − iv, with a negative imaginary part, i.e., v > 0 (the local density of states) <cit.>.
The universal statistical properties of the mutually correlated random variables u and v are solely determined by the (dimensionless) absorption rate γ = 2πΓ_abs/Δ. § TRANSMISSION DISTRIBUTION It is convenient to define the re-scaled transmission coefficient T = |S_12|^2/T_0, expressed in units of the peak transmission in the "clean" system. We now derive the exact results for the transmission distribution function 𝒫(T) = ⟨δ(T − |S_12|^2/T_0)⟩, first for the ideal case of the stable background (γ = 0) and then for the general case of finite absorption. §.§ Stable chaotic background In the case of zero absorption, K = u is real, so the transmission coefficient is found from Eq. (<ref>) as follows:

T = 1/(1 + η^2u^2).

The random variable u is known <cit.> to have the standard Cauchy distribution. This stems from the fact that the scattering phase θ, see Eq. (<ref>), is distributed uniformly at the special coupling η = 1 <cit.>. The transmission distribution (<ref>) follows then by a straightforward integration:

𝒫_0(T) = ∫_-∞^∞ (du/π) [1/(1 + u^2)] δ(T − 1/(1 + η^2u^2)) = η/[π√(T(1 − T)) (1 + (η^2 − 1)T)],

for 0 ≤ T ≤ 1. The corresponding cumulative distribution function is given by

𝒩_0(T) = 1 − (2/π) arctan[(1/η)√((1 − T)/T)].

Both functions are represented in Fig. <ref>. The distribution (<ref>) has a bi-modal shape with a square-root singularity at both edges, which is typical for transmission problems <cit.>. The mean value is readily found to be ⟨T⟩ = (1+η)^-1, in agreement with the general result (<ref>), and the transmission variance reads ⟨T^2⟩ − ⟨T⟩^2 = η/[2(1+η)^2]. The background coupling η controls the weight of the distribution that is concentrated near T∼1 or T∼0 at small or large η, respectively. The distribution becomes symmetric at η = 1, with the variance attaining its maximum value. A further increase of η leads to a sharp redistribution towards low transmission. For applications, this sets the limit η = 1 on the coupling for reliable signal transmission. Figure <ref> illustrates the above discussion and results. In order to check the validity of the predictions, we present them together with numerical data from RMT simulations based on many realizations of GOE blocks of size 200 × 200. The parameters a, b, V, see Eqs. (<ref>)–(<ref>), are chosen to give T_0 = 0.8 and the various η values. (For simplicity, we took the elements of the constant vector V⃗ to be all equal.) The overall agreement is flawless. §.§ Background with absorption When the absorption rate γ > 0, the complex K is given by Eq. (<ref>), yielding the transmission coefficient

T = 1/[(1 + ηv)^2 + η^2u^2].

The random variables u and v > 0 are mutually correlated and have the following joint distribution <cit.>:

P(u, v) = [1/(2πv^2)] P_0((u^2 + v^2 + 1)/(2v)).

The function P_0(x), with x = (u^2 + v^2 + 1)/(2v) > 1, is known exactly at any γ <cit.> and has the meaning of the distribution of the reflection originating from the chaotic background. The parameter x represents the background reflection coefficient |S_bg|^2 = (x − 1)/(x + 1) < 1. Note that S_bg is subunitary at finite absorption, resulting in a subunitary S as well. The derivation of the transmission distribution in this case proceeds as follows. In order to perform the integration over (<ref>), it is convenient to first choose the new integration variable y = η^2u^2. With the definitions (<ref>) and (<ref>), this results in

𝒫(T) = [1/(2πηT^2)] ∫_0^∞ (dv/v^2) ∫_0^∞ (dy/√y) P_0((u^2 + v^2 + 1)/(2v)) δ(y + (1 + ηv)^2 − T^-1),

where u^2 = y/η^2 inside P_0.
The y integration is removed by the δ function, which restricts the remaining integration over v to the domain T^-1 − (1 + ηv)^2 = η^2(v_- − v)(v_+ + v) > 0, with

v_± = (1 ± √T)/(η√T).

As a result, we arrive at the following expression:

𝒫(T) = [1/(2πη^2T^2)] ∫_0^{v_-} (dv/v^2) P_0((1 + ξ^2)/(2v) − 1/η) / √((v_- − v)(v_+ + v)),

where the shorthand ξ^2 ≡ v_+v_- = (1 − T)/(η^2T) has been introduced. It is now useful to choose p = v_-/v − 1 as a new integration variable, yielding

𝒫(T) = [1/(2πη^2T^2 v_-ξ)] ∫_0^∞ dp (1 + p)/√(p[p + 2/(1 + √T)]) × P_0((1 + v_-^2 + p(1 + ξ^2))/(2v_-)).

With an explicit formula for P_0 found in Ref. <cit.>, representation (<ref>) solves the problem exactly at arbitrary γ and constitutes one of the main results of the paper. Further analytical progress is possible in the physically interesting cases of weak and strong absorption, since the function P_0 simplifies to the following limiting forms <cit.>:

P_0(x) ≈ (2/√π)(γ/4)^{3/2}√(x + 1) e^{-(γ/4)(x+1)} for γ ≪ 1, and P_0(x) ≈ (γ/4) e^{-(γ/4)(x−1)} for γ ≫ 1.

For weak absorption, γ ≪ 1, a close inspection of Eq. (<ref>) shows that the dominant contribution to the integral comes from large p ∼ 1/γ ≫ 1. In the leading order, one can neglect the p-independent terms in the integration measure, which thereby becomes "flat". The integration can then be performed making use of the limiting expression for P_0 at small γ stated above. This results in the following leading-order correction to the zero-absorption distribution 𝒫_0(T) of Eq. (<ref>):

𝒫_{γ≪1}(T) ≈ 𝒫_0(T) exp[−γ(1 + (η − 1)√T)^2 / (8η√T(1 − √T))].

Therefore, finite absorption modifies the typical bimodal shape of the transmission distribution by inducing exponential cutoffs at both edges, T→0 and T→1. In the opposite case of strong absorption, γ ≫ 1, the integral (<ref>) is dominated by small p ∼ 1/γ ≪ 1. Performing a similar analysis as above, but with the large-γ form of P_0, leads to the following approximation:

𝒫_{γ≫1}(T) ≈ √(γη)(1 + √T) / [4√π (1 − T) T^{3/4}√(1 + (η^2 − 1)T)] × exp[−γ(1 − (η + 1)√T)^2 / (8η√T(1 − √T))].

This expression features the same exponential cutoffs at the edges, but the bulk of the distribution gets more distorted as compared to the weak-absorption limit <cit.>. Particularly interesting is the case of the "critical" coupling η = 1, when the transmission distribution (<ref>) at zero absorption is symmetric with respect to T → (1 − T). By comparing expressions (<ref>) and (<ref>), we see that such a symmetry is largely retained at weak absorption and severely violated at strong absorption, when high transmission becomes heavily suppressed. At arbitrary values of γ, the function P_0 is given by a fairly complicated expression and the transmission distribution (<ref>) needs to be studied numerically; see Appendix <ref> for further discussion. The corresponding results are represented in Fig. <ref> for three different values of the absorption rate, γ = 0.1, 1, and 5. The aforementioned bi-modal form of the distribution vanishes with increasing absorption. For a weakly coupled background, η ≪ 1, one observes first the diminishing of the high-transmission peak at T∼1, and then the eventual depletion of the peak at T∼0. The situation changes at moderately strong coupling η ≳ 1, when large transmissions become fully suppressed. Figure <ref> illustrates such a behaviour, showing also the results of numerical simulations, which match the analytical prediction (<ref>) perfectly. Note that the absorption in the numerical RMT calculations is realized by adding a constant imaginary part to the energy levels of the background. We have also checked the agreement by modelling absorption using fictitious channels, as discussed in <cit.>.
In this case, one has to rescale the parameters a and b in order to avoid introducing losses on the established channel. The corresponding rescaling is presented in Appendix <ref>. § REFLECTION DISTRIBUTIONS The fluctuations in reflection can be studied in a similar way as for the transmission. In the case of vanishing absorption, γ = 0, the reflection distribution can actually be related to Eq. (<ref>). Due to the unitarity of S in this case, we have |S_11|^2 = 1 − |S_12|^2 = |S_22|^2. Therefore, the distribution of the reflection coefficient R_c = |S_cc|^2 (c = 1, 2) is determined by Eq. (<ref>) according to

𝒫^{(refl)}(R_c) = (1/T_0) 𝒫_0((1 − R_c)/T_0)

in the region 1 − T_0 ≤ R_c ≤ 1, being zero otherwise. In the case of finite absorption, the transmission and reflection coefficients are no longer related by flux conservation. The reflection coefficients are readily found from Eqs. (<ref>) and (<ref>) in the explicit form

R_{1,2} = [(ηv ∓ r_0)^2 + η^2u^2] / [(ηv + 1)^2 + η^2u^2],

where the upper (lower) sign stands for R_1 (R_2). One sees that the reflection coefficients are generally different at nonzero r_0. This is a manifestation of the interference between the equilibrated and direct reflection induced by S_0 (see the discussion in Sec. <ref>). Following similar steps as those leading to Eq. (<ref>), we arrive at the following representation for the distribution of the reflection coefficient R_1 at γ > 0:

𝒫(R_1) = (1 + r_0)/[2πη^2(1 − R_1)^2] ∫_0^∞ dp (1 − r_0 + 2ηw(p)) / [√p (ŵ_+ + p w_-)^2] × [(w_+ + w_-)/√(y(p))] P_0((1 + w(p)^2 + p y(p))/(2w(p))).

Here, we have introduced the following shorthand notations: ŵ_± = (1/η)(r_0 ± √(R_1))/(1 ∓ √(R_1)), w_± = max{0, ŵ_±}, w(p) = (ŵ_+ + p w_-)/(p + 1), and y(p) = (ŵ_+ − w_-)[(ŵ_+ − ŵ_-) − p(ŵ_- − w_-)]/(p + 1)^2. The distribution of R_2 is given by the same formula (<ref>) under the replacement r_0 → −r_0 everywhere there. Due to the r_0-dependence of expression (<ref>), it follows that there is a particular difference in the distributions of the two reflection coefficients. Without loss of generality, one can choose r_0 < 0, corresponding to the unequal channel couplings |b| > |a|, see Eq. (<ref>). Then we have

w_+ + w_- = 0 for R_1 < r_0^2, and w_+ + w_- = ŵ_+ for R_1 ≥ r_0^2

for the reflection coefficient R_1, and

w_+ + w_- = ŵ_+ for R_2 < r_0^2, and w_+ + w_- = ŵ_+ + ŵ_- for R_2 ≥ r_0^2

for R_2. This implies that 𝒫(R_1) = 0 identically at R_1 ≤ r_0^2 = 1 − T_0, leading to the same gap for small reflections as seen in the case without absorption. The distribution of the other reflection coefficient, R_2 (i.e., in the channel with stronger coupling), does not have such a gap, since expression (<ref>) is nonzero for all R_2. In Fig. <ref> we present a comparison of the analytical result (<ref>) with numerical data, using the same choice of the parameters as for the transmission distribution before. In the general case of finite backscattering, r_0 ≠ 0, the reflection distributions in the two channels show a distinctly different behavior, as discussed above. We have chosen the value r_0 = −√(0.2), corresponding to the established transmission T_0 = 0.8. Therefore, the behavior of the reflection coefficient R_2 changes below 1 − T_0 = 0.2, where the gap in the distribution vanishes. The overall agreement of the theory with numerics is flawless.
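The zero-absorption results above lend themselves to a quick Monte-Carlo check, since K = u is then a standard Cauchy variable. The following sketch (our own illustration, not the simulation code used for the figures) samples the transmission and reflection coefficients and verifies the mean transmission ⟨T⟩ = 1/(1+η), its variance, and the reflection gap at 1 − T_0.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, T0 = 1.0, 0.8
r0 = -np.sqrt(1.0 - T0)          # unequal couplings, |b| > |a|

# Zero absorption: K = u is standard Cauchy and v = 0
u = rng.standard_cauchy(500_000)
T = 1.0 / (1.0 + (eta * u) ** 2)                       # rescaled transmission
R = (r0**2 + (eta * u) ** 2) / (1.0 + (eta * u) ** 2)  # reflection (R_1 = R_2)

print(T.mean(), 1.0 / (1.0 + eta))               # <T> = 1/(1+eta)
print(T.var(), eta / (2.0 * (1.0 + eta) ** 2))   # variance of T
print(R.min() >= 1.0 - T0)                       # gap: no reflection below 1-T0
```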
Our method is based on the strength function formalism, adopted from and developed in nuclear physics, providing new insights for the applications of the latter in the broader context of wave-chaotic systems. The strength of the coupling to the background is controlled by the single parameter (<ref>), the ratio of the spreading width to the escape width. Using RMT to model the chaotic background, we have derived the transmission distribution in an exact form valid at arbitrary uniform absorption in the background. The analytical results are supported by extensive numerics performed by Monte-Carlo simulations with random matrices. The distribution has a bimodal shape, with two peaks at low and high transmission that are exponentially suppressed at finite absorption. It takes simple limiting forms in the physically interesting cases of weak and strong absorption, which we have discussed in detail as well. Fluctuations in high transmission are found to be affected more strongly by finite absorption when the background coupling exceeds a certain limiting value. These results may be relevant in the reliability context of wireless communication devices <cit.>. The method developed is very flexible in incorporating physical properties of the system. In particular, we have neglected absorption of the transmission line itself; however, the latter can be important for experimental realisations, e.g., in ongoing research with chaotic reverberation chambers. Such an extra damping can be naturally accommodated into the theory by a simple rescaling procedure, as outlined in Appendix <ref>. Following <cit.>, one can also include effects due to nonuniform absorption in the environment. The method can be generalised to other types of the chaotic background (e.g., without time-reversal symmetry) as well as to multichannel transmission, where the complete characterisation of both reflection and total transmission in terms of their joint distribution is actually possible and will be reported elsewhere <cit.>. Therefore, we expect our results to find further applications in studying wave propagation in complex environments. § ACKNOWLEDGMENTS Three of us (D.V.S., M.R. and U.K.) would like to acknowledge a stimulating environment during the XII Brunel-Bielefeld Workshop on Random Matrix Theory and Applications, held on 9–10 December 2016 at Brunel, UK, where the work along the lines presented above was initiated. Partial financial support by Horizon 2020, the EU Research and Innovation Programme, under grant no. 664828 (NEMF21 <cit.>) is acknowledged with thanks. § ACCURACY OF THE INTERPOLATION FORMULA In order to ease both the implementation and the analytical treatment of the exact expressions, one can use a much simpler interpolating formula for the function P_0, which was suggested in <cit.>:

P_0(x) = C_γ^-1 (A_γ√(γ(x + 1)) + B_γ) e^{-(γ/4)(x+1)}.

Here, A_γ = (e^{γ/2} − 1)/2, B_γ = 1 + γ/2 − e^{γ/2}, and the normalisation constant is C_γ = (4/γ)[2Γ(3/2, γ/2)A_γ + e^{-γ/2}B_γ], with Γ(ν, α) being the upper incomplete gamma function. This formula was earlier found to work surprisingly well when compared to the exact result <cit.>. Here we show that the level of agreement with the full implementation of the exact P_0 yields equally good results. Figure <ref> shows two versions of the three analytical curves derived from Eqs. (<ref>) and (<ref>): once calculated using the exact P_0 and once using the interpolation formula (<ref>). The biggest deviations can be found towards 0 and 1, or around r_0^2 in the reflection distributions. However, the overall accuracy of the interpolation formula is very good.
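As an illustration of how the interpolation formula can be used in practice, the following Python sketch (our own illustration; scipy is assumed to be available) implements P_0(x) as given above, verifies its normalisation on x ≥ 1, and feeds it into the p-integral representation of 𝒫(T) from the main text, whose integral over T should also return unity.

```python
import numpy as np
from scipy import integrate, special

def p0_interp(x, g):
    """Interpolating formula for P_0(x) at (dimensionless) absorption rate g."""
    A = 0.5 * (np.exp(0.5 * g) - 1.0)
    B = 1.0 + 0.5 * g - np.exp(0.5 * g)
    # upper incomplete gamma Gamma(3/2, g/2) via the regularized gammaincc
    Ginc = special.gamma(1.5) * special.gammaincc(1.5, 0.5 * g)
    C = (4.0 / g) * (2.0 * Ginc * A + np.exp(-0.5 * g) * B)
    return (A * np.sqrt(g * (x + 1.0)) + B) * np.exp(-0.25 * g * (x + 1.0)) / C

def pdf_T(T, eta, g, p0=p0_interp):
    """Transmission distribution P(T) from the exact integral representation."""
    sq = np.sqrt(T)
    vm = (1.0 - sq) / (eta * sq)          # v_-
    xi2 = (1.0 - T) / (eta**2 * T)        # xi^2 = v_+ v_-
    c = 2.0 / (1.0 + sq)
    f = lambda p: ((1.0 + p) / np.sqrt(p * (p + c))
                   * p0((1.0 + vm**2 + p * (1.0 + xi2)) / (2.0 * vm), g))
    val, _ = integrate.quad(f, 0.0, np.inf, limit=300)
    return val / (2.0 * np.pi * eta**2 * T**2 * vm * np.sqrt(xi2))

g, eta = 1.0, 1.0
print(integrate.quad(lambda x: p0_interp(x, g), 1.0, np.inf)[0])      # ~1
print(integrate.quad(lambda t: pdf_T(t, eta, g), 1e-6, 1 - 1e-6)[0])  # ~1
```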
§ ABSORPTION OF THE TRANSMISSION LINE As mentioned in the main text, the derived distributions of transmission and reflection do not include a possible absorption of the single level E_0. The latter can be easily incorporated by the shift E_0 → E_0 − (i/2)Γ^{(0)} in Eqs. (<ref>) and (<ref>), where the absorption width Γ^{(0)} of the transmission line is generally different from that of the background. This amounts to replacing Γ → Γ + Γ^{(0)} and thereby the channel coupling constants a and b by

a' = a√(1 + Γ^{(0)}/Γ), b' = b√(1 + Γ^{(0)}/Γ).

This results in the following rescaling of the parameters in the expressions (<ref>) and (<ref>):

T_0 → T'_0 = T_0(1 + Γ^{(0)}/Γ)^-2, η → η' = η(1 + Γ^{(0)}/Γ)^-1.

Using these rescaled variables allows us to predict the corresponding distributions also in the case of finite absorption of the transmission line.

[Beenakker(1997), bee97] C. W. J. Beenakker, Random-matrix theory of quantum transport, Rev. Mod. Phys. 69, 731 (1997).
[Alhassid(2000), alh00] Y. Alhassid, The statistical theory of quantum dots, Rev. Mod. Phys. 72, 895 (2000).
[Mello and Kumar(2004), mel04b] P. A. Mello and N. Kumar, Quantum Transport in Mesoscopic Systems: Complexity and Statistical Fluctuations (Oxford University Press, Oxford, 2004).
[Tulino and Verdú(2004), tul04] A. M. Tulino and S. Verdú, Random matrix theory and wireless communications, Foundations and Trends in Communications and Information Theory 1, 1 (2004).
[Couillet and Debbah(2011), cou11] R. Couillet and M. Debbah, Random Matrix Methods for Wireless Communications (Cambridge University Press, Cambridge, 2011).
[Gesbert et al.(2000), ges00] D. Gesbert, H. Bolcskei, D. Gore, and A. Paulraj, MIMO wireless channels: capacity and performance prediction, in Global Telecommunications Conference, 2000. GLOBECOM '00. IEEE, Vol. 2 (2000), p. 1083.
[Matthaiou et al.(2010), mat10] M. Matthaiou, N. D. Chatzidiamantis, G. K. Karagiannidis, and J. A. Nossek, On the capacity of generalized-K fading MIMO channels, IEEE Transactions on Signal Processing 58, 5939 (2010).
[Biglieri et al.(2007), big07] E. Biglieri, R. Calderbank, A. Constantinides, A. Goldsmith, A. Paulraj, and H. V. Poor, MIMO Wireless Communications (Cambridge University Press, Cambridge, 2007).
[Bliss and Govindasamy(2013), bli13] D. W. Bliss and S. Govindasamy, Adaptive Wireless Communications: MIMO Channels and Networks (Cambridge University Press, Cambridge, 2013).
References

[1] C. W. J. Beenakker, Random-matrix theory of quantum transport, Rev. Mod. Phys. 69, 731 (1997).
[2] Y. Alhassid, The statistical theory of quantum dots, Rev. Mod. Phys. 72, 895 (2000).
[3] P. A. Mello and N. Kumar, Quantum Transport in Mesoscopic Systems: Complexity and Statistical Fluctuations (Oxford University Press, Oxford, 2004).
[4] A. M. Tulino and S. Verdú, Random matrix theory and wireless communications, Found. Trends Commun. Inf. Theory 1, 1 (2004).
[5] R. Couillet and M. Debbah, Random Matrix Methods for Wireless Communications (Cambridge University Press, Cambridge, 2011).
[6] D. Gesbert, H. Bolcskei, D. Gore, and A. Paulraj, MIMO wireless channels: Capacity and performance prediction, in Global Telecommunications Conference (GLOBECOM '00), Vol. 2 (IEEE, 2000), p. 1083.
[7] M. Matthaiou, N. D. Chatzidiamantis, G. K. Karagiannidis, and J. A. Nossek, On the capacity of generalized-K fading MIMO channels, IEEE Trans. Signal Process. 58, 5939 (2010).
[8] E. Biglieri, R. Calderbank, A. Constantinides, A. Goldsmith, A. Paulraj, and H. V. Poor, MIMO Wireless Communications (Cambridge University Press, Cambridge, 2007).
[9] D. W. Bliss and S. Govindasamy, Adaptive Wireless Communications: MIMO Channels and Networks (Cambridge University Press, Cambridge, 2013).
[10] A. Karadimitrakis, A. L. Moustakas, H. Hafermann, and A. Mueller, Optical fiber MIMO channel model and its analysis, in 2016 IEEE International Symposium on Information Theory (ISIT) (2016), pp. 2164-2168.
[11] D. Maryenko, F. Ospald, K. v. Klitzing, J. H. Smet, J. J. Metzger, R. Fleischmann, T. Geisel, and V. Umansky, How branching can change the conductance of ballistic semiconductor devices, Phys. Rev. B 85, 195329 (2012).
[12] F. Haake, Quantum Signatures of Chaos, 2nd ed. (Springer, Berlin, 2001).
[13] M. L. Mehta, Random Matrices, 3rd ed. (Academic Press, San Diego, 2004).
[14] T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Random matrix theories in quantum physics: Common concepts, Phys. Rep. 299, 189 (1998).
[15] H.-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, Cambridge, 2007).
[16] C. Mahaux and H. A. Weidenmüller, Shell-Model Approach to Nuclear Reactions (North-Holland, Amsterdam, 1969).
[17] J. J. M. Verbaarschot, H. A. Weidenmüller, and M. R. Zirnbauer, Grassmann integration in stochastic quantum physics: The case of compound-nucleus scattering, Phys. Rep. 129, 367 (1985).
[18] V. V. Sokolov and V. G. Zelevinsky, Dynamics and statistics of unstable quantum states, Nucl. Phys. A 504, 562 (1989).
[19] Y. V. Fyodorov and H.-J. Sommers, Statistics of resonance poles, phase shifts and time delays in quantum chaotic scattering: Random matrix approach for systems with broken time-reversal invariance, J. Math. Phys. 38, 1918 (1997).
[20] G. E. Mitchell, A. Richter, and H. A. Weidenmüller, Random matrices and chaos in nuclear physics: Nuclear reactions, Rev. Mod. Phys. 82, 2845 (2010).
[21] Y. V. Fyodorov and D. V. Savin, Resonance scattering of waves in chaotic systems, in The Oxford Handbook of Random Matrix Theory, edited by G. Akemann, J. Baik, and P. Di Francesco (Oxford University Press, UK, 2011), Chap. 34, pp. 703-722 [arXiv:1003.0702].
[22] Y. V. Fyodorov, D. V. Savin, and H.-J. Sommers, Scattering, reflection and impedance of waves in chaotic and disordered systems with absorption, J. Phys. A: Math. Gen. 38, 10731 (2005).
[23] S. Kumar, A. Nock, H.-J. Sommers, T. Guhr, B. Dietz, M. Miski-Oglu, A. Richter, and F. Schäfer, Distribution of scattering matrix elements in quantum chaotic scattering, Phys. Rev. Lett. 111, 030403 (2013).
[24] S. Hemmady, X. Zheng, E. Ott, T. M. Antonsen, and S. M. Anlage, Universal impedance fluctuations in wave chaotic systems, Phys. Rev. Lett. 94, 014102 (2005).
[25] S. Hemmady, X. Zheng, J. Hart, T. M. Antonsen, E. Ott, and S. M. Anlage, Universal properties of two-port scattering, impedance, and admittance matrices of wave-chaotic systems, Phys. Rev. E 74, 036213 (2006).
[26] U. Kuhl, M. Martínez-Mares, R. A. Méndez-Sánchez, and H.-J. Stöckmann, Direct processes in chaotic microwave cavities in the presence of absorption, Phys. Rev. Lett. 94, 144101 (2005).
[27] U. Kuhl, H.-J. Stöckmann, and R. Weaver, Classical wave experiments on chaotic scattering, J. Phys. A: Math. Gen. 38, 10433 (2005).
[28] B. Dietz, T. Friedrich, H. L. Harney, M. Miski-Oglu, A. Richter, F. Schäfer, and H. A. Weidenmüller, Quantum chaotic scattering in microwave resonators, Phys. Rev. E 81, 036205 (2010).
[29] U. Kuhl, O. Legrand, and F. Mortessagne, Microwave experiments using open chaotic cavities in the realm of the effective Hamiltonian formalism, Fortschr. Phys. 61, 404 (2013).
[30] G. Gradoni, J.-H. Yeh, B. Xiao, T. M. Antonsen, S. M. Anlage, and E. Ott, Predicting the statistics of wave transport through chaotic cavities by the random coupling model: A review and recent progress, Wave Motion 51, 606 (2014).
[31] C. A. Engelbrecht and H. A. Weidenmüller, Hauser-Feshbach theory and Ericson fluctuations in the presence of direct reactions, Phys. Rev. C 8, 859 (1973).
[32] A. Bohr and B. R. Mottelson, Nuclear Structure (Benjamin, New York, 1969).
[33] H. L. Harney, A. Richter, and H. A. Weidenmüller, Breaking of isospin symmetry in compound-nucleus reactions, Rev. Mod. Phys. 58, 607 (1986).
[34] V. V. Sokolov and V. Zelevinsky, Simple mode on a highly excited background: Collective strength and damping in the continuum, Phys. Rev. C 56, 311 (1997).
[35] J.-Z. Gu and H. A. Weidenmüller, Decay out of a superdeformed band, Nucl. Phys. A 660, 197 (1999).
[36] D. V. Savin and H.-J. Sommers, Fluctuations of delay times in chaotic cavities with absorption, Phys. Rev. E 68, 036211 (2003).
[37] V. V. Sokolov, Ballistic electron quantum transport in the presence of a disordered background, J. Phys. A: Math. Theor. 43, 265102 (2010).
[38] V. Zelevinsky and A. Volya, Chaotic features of nuclear structure and dynamics: Selected topics, Phys. Scr. 91, 033006 (2016).
[39] V. V. Sokolov and V. G. Zelevinsky, Collective dynamics of unstable quantum states, Ann. Phys. (N.Y.) 216, 323 (1992).
[40] Y. V. Fyodorov and D. V. Savin, Statistics of impedance, local density of states, and reflection in quantum chaotic systems with absorption, JETP Lett. 80, 725 (2004).
[41] H. Nishioka and H. A. Weidenmüller, Compound-nucleus scattering in the presence of direct reactions, Phys. Lett. B 157, 101 (1985).
[42] A. Nock, S. Kumar, H.-J. Sommers, and T. Guhr, Distributions of off-diagonal scattering matrix elements: Exact results, Ann. Phys. 342, 103 (2014).
[43] D. V. Savin, H.-J. Sommers, and Y. V. Fyodorov, Universal statistics of the local Green function in quantum chaotic systems with absorption, JETP Lett. 82, 544 (2005).
[44] W. A. Friedman and P. A. Mello, Information theory and statistical nuclear reactions: II. Many-channel case and Hauser-Feshbach formula, Ann. Phys. (N.Y.) 161, 276 (1985).
[45] D. V. Savin, Y. V. Fyodorov, and H.-J. Sommers, Reducing nonideal to ideal coupling in random matrix description of chaotic scattering: Application to the time-delay problem, Phys. Rev. E 63, 035202(R) (2001).
[46] M. Büttiker, Role of quantum coherence in series resistors, Phys. Rev. B 33, 3020 (1986).
[47] P. W. Brouwer and C. W. J. Beenakker, Voltage-probe and imaginary-potential models for dephasing in a chaotic quantum dot, Phys. Rev. B 55, 4695 (1997); Erratum: ibid. 66, 209901(E) (2002).
[48] P. A. Mello, Theory of random matrices: Spectral statistics and scattering problems, in Mesoscopic Quantum Physics, Proceedings of the Les Houches Summer School, Session LXI, edited by E. Akkermans, G. Montambaux, J.-L. Pichard, and J. Zinn-Justin (Elsevier, 1995), p. 435.
[49] Note that the average ⟨S_bg⟩ = (1-η)/(1+η). At η=1, ⟨S_bg⟩ = 0 implies the uniform distribution for θ and thus the Cauchy distribution for u = -arctan θ/2.
[50] We note that although both expressions (<ref>) and (<ref>) are not normalised, they reproduce the exact asymptotic behaviours near the edges. The accuracy of the approximations can be systematically improved by keeping further order terms in 1/p (or in p) for weak (or strong) absorption when performing the integration in Eq. (<ref>).
[51] I. Rozhkov, Y. V. Fyodorov, and R. L. Weaver, Variance of transmitted power in multichannel dissipative ergodic structures invariant under time reversal, Phys. Rev. E 69, 036206 (2004).
[52] D. V. Savin, O. Legrand, and F. Mortessagne, Inhomogeneous losses and complexness of wave functions in chaotic cavities, Europhys. Lett. 76, 774 (2006).
[53] D. V. Savin, to be published (2017).
[54] NEMF21: Noisy Electromagnetic Fields – A Technological Platform for Chip-to-Chip Communication in the 21st Century; see http://www.nemf21.org for details on the research programme and activities.
http://arxiv.org/abs/1704.08677v2
{ "authors": [ "Dmitry V. Savin", "Martin Richter", "Ulrich Kuhl", "Olivier Legrand", "Fabrice Mortessagne" ], "categories": [ "cond-mat.mes-hall", "math-ph", "math.MP", "nucl-th", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20170427174157", "title": "Fluctuations in an established transmission in the presence of a complex environment" }
1Department of Physics, Technion – Israel Institute of Technology, Haifa 32000, Israel; [email protected], [email protected], [email protected]

We conduct three-dimensional hydrodynamical simulations of energy deposition into the envelope of a red giant star as a result of the merger of two close main sequence stars or brown dwarfs, and show that the outcome is a highly non-spherical outflow. Such a violent interaction of a triple stellar system can explain the formation of `messy', i.e., lacking any kind of symmetry, planetary nebulae (PNe) and similar nebulae around evolved stars. We do not simulate the merging process, but simply assume that after the tight binary system enters the envelope of the giant star, the interaction with the envelope causes the two components, stars or brown dwarfs, to merge and liberate gravitational energy. We deposit the energy over a time period of about nine hours, which is about one per cent of the orbital period of the merger product around the centre of the giant star. The ejection of the fast hot gas and its collision with previously ejected mass are very likely to lead to a transient event, i.e., an intermediate luminosity optical transient (ILOT).

§ INTRODUCTION
The mass loss rate and outflow geometry of a giant star can be substantially influenced by the presence of close objects, i.e., planets, brown dwarfs and/or stars, that interact with the giant star. Planetary nebulae (PNe) are the most studied group of objects that are shaped by the interaction of their progenitor, an asymptotic giant branch (AGB) star, with stellar companions (e.g., ) or with planets (e.g., ). Following early theoretical and observational studies <cit.>, observations of tens of PNe with central binary systems, in particular in recent years (e.g., , for a partial sample just from 2015 on), have solidified the binary interaction model for shaping PNe. Single stars cannot account for the variety of shapes of PNe (e.g., ). But what about triple stellar (or sub-stellar) systems? Triple systems attract attention in other types of processes and objects (e.g., ).

A small number of papers have discussed the shaping of PNe by triple stellar systems (e.g., ). Note, though, that the claim made by <cit.> for a triple stellar system inside a PN was recently refuted by <cit.>. <cit.> proposed that the progenitor of the PN SuWt 2 engulfed a tight binary system of two A-type stars, and that the A-star binary system survived the common envelope evolution. They further suggested that a triple stellar interaction might eject a high-density equatorial ring (see also ). However, <cit.> recently concluded that the binary system of A-type stars is a field star system that happens to lie along the line of sight to SuWt 2.

We consider triple systems for which two orbital planes can be defined. One is that of the tighter binary system, and the second is that of the tight binary system's motion around the centre of mass with the third object. When the two orbital planes are inclined to each other, the mass loss geometry is likely to depart from any kind of symmetry, and so is the descendant PN. The PN will lack any kind of symmetry: neither point-symmetry, nor axial-symmetry, nor mirror symmetry. We term this a `messy PN' <cit.>.

Most nebulae, however, do not require a triple stellar interaction. <cit.> suggest that the Great Eruption of Eta Carinae was caused by a merger in a triple stellar system.
But there are claims that a binary system alone can explain the Great Eruption of Eta Carinae and the formation of the bipolar structure of the Homunculus that was formed during the Great Eruption <cit.>. The binary model can also explain earlier <cit.> and later (the Lesser Eruption; ) outbursts. In the present study we consider a different type of triple-stellar evolution than that considered by <cit.>.

When the two orbital planes of the triple system coincide, the interaction leads to a mass loss process that has a symmetry plane, although it might depart from axisymmetry. Binary systems can also lead to a departure from axisymmetry (e.g., ). However, binary systems are unlikely to form messy PNe that lack any symmetry. In a recent paper, <cit.> conduct a 3D hydrodynamical simulation of mass transfer from an AGB star to a close companion. They demonstrate that when the AGB star loses mass in a short (5 years) burst, a very complicated and highly asymmetric mass loss takes place. This binary process still retains a symmetry about the equatorial plane. They do not study the launching of jets. Precessing jets could lead to a departure from any symmetry in that case of a short burst, and lead to the formation of a somewhat messy PN.

In many cases the more compact companion accretes mass from the AGB star. The accreted mass has high specific angular momentum and it forms an accretion disk around the companion, which in turn launches jets that shape the descendant PN (e.g., , out of many more papers). Triple stellar systems can also launch jets. <cit.> simulate the flow structure described by <cit.>, where a tight binary system orbits an AGB star and accretes mass from the AGB wind. Because the orbital plane of the tight binary system is not parallel to the orbital plane around the AGB star in that setting, the jets' axis is not perpendicular to the orbital plane. This triple stellar interaction forms a messy nebula <cit.>.

We here study another process that is likely to form a messy nebula. This is the evolutionary channel of a merger of a tight binary system inside the envelope of a giant star <cit.>. The components of the tight binary system can be stars, brown dwarfs, or massive planets. Although the tight binary system influences the mass loss process before it enters the envelope, e.g., by tidal interaction, by spinning up the envelope, and by launching jets, such a full study requires a huge amount of computer resources. Because of these resource constraints, we simply take the energy that is expected to be released by the merger process and deposit it inside the envelope of the giant star. In a future paper we will also consider some of the other effects mentioned above. We describe the initial setting and the three-dimensional numerical code in section <ref>. We then describe the numerical results in section <ref>, and summarize our results in section <ref>.

§ NUMERICAL SET-UP
We run the stellar evolution code <cit.> to obtain a spherical AGB model with a zero-age-main-sequence mass of M_ZAMS = 4 M_⊙. We let the star evolve until it reaches the AGB stage after 3×10^8 years. At that time the stellar radius is R_AGB = 100 R_⊙, and its effective temperature is T_eff = 3400 K. We then import the spherical AGB model, namely the profiles of density and pressure, into the three-dimensional hydrodynamical code pluto <cit.>. The full grid is taken as a cube with a side length of 400 R_⊙. We employ an adaptive-mesh-refinement (AMR) grid with five refinement levels. The base grid resolution is 1/64 of the grid length (i.e.
4.35 × 10^11 cm), and the highest resolution is 2^4 times smaller (i.e., 2.72 × 10^10 cm). The centre of the AGB star and the centre of the grid coincide at (x,y,z) = (0,0,0). We use an equation of state of an ideal gas with adiabatic index γ = 5/3.

The code runs with time steps determined by the Courant-Friedrichs-Lewy (CFL) condition, and thus the high densities in the centre of the primary require very short time steps. As we are not interested in the inner parts of the AGB star, its inner 5% in radius (5 R_⊙) was replaced with a sphere of constant density, pressure, and temperature. This allows the code to run with larger time steps than those required for simulating this inner region. The field of the local gravitational acceleration was inserted as if the inner parts of the AGB star were not truncated. The initial stellar structure is in hydrostatic equilibrium. We keep the gravitational field at its initial value (at t=0) throughout the entire calculation. Namely, we include neither the change in gravity that results from the deformed envelope nor the gravity of the merger product.

We consider a scenario where a tight binary system composed of two main sequence stars or two massive brown dwarfs, of masses M_2 and M_3, enters the envelope of the AGB star, i.e., enters a common envelope evolution (CEE). Following <cit.>, we assume that during the CEE gravitational drag and mass accretion cause the tight binary system both to spiral in toward the AGB centre, and might cause the two main sequence stars (or brown dwarfs) to merge with each other. The simulation starts with the assumed merger of the two stars of the tight binary system at r = 70 R_⊙. We set the coordinate system such that at t=0 the merger takes place at (x,y,z)_mer-0 = (70 R_⊙, 0, 0). At that orbital separation the Keplerian orbital period of the merger product around the AGB core is about 36 days.

A few words are in order here on the initial conditions. We start the simulations with the merger process taking place inside the envelope, and ignore the interaction before that stage, e.g., the spin-up of the envelope by the tight binary system and jets that might have been launched by an accretion disk around one (or both) stars of the tight binary system. In addition, the entrance into the common envelope can substantially distort the envelope and the mass loss process (e.g., for a scenario explaining the rings of SN 1987A). The reason for this initial set-up is numerical limitations. However, the inclusion of envelope rotation and envelope expansion due to the spiraling-in process would most likely make the envelope more vulnerable to perturbations by the merger process. Namely, even a merger of two brown dwarfs or very low mass main sequence stars (see also below) would lead to a very messy PN.

The merger process liberates an energy of E_mer ≈ 10^48 (M_2 M_3/M^2_⊙)(R/R_⊙)^-1 erg, where R is the radius of the larger of the two merging objects. A large fraction of the energy released in the merger process will be channeled into inflating the merger product, and the rest will go into the envelope of the giant star. We do not know what this fraction is, but as a conservative approach we assume that only a small fraction of the merger energy is deposited in the envelope of the giant star. If this fraction is larger than what we assume, then the same outcome can result from the merger of an even lower mass tight binary system.
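As a quick numerical check of this energy budget, the short Python sketch below (our own illustration, not part of the simulation code) evaluates the formula above and the fraction of E_mer that the fiducial run of the next paragraph deposits in the envelope; the stellar radius of 0.2 R_⊙ for a 0.2 M_⊙ dwarf is our assumption.

    # E_mer ~ 1e48 erg * (M2*M3/Msun^2) * (R/Rsun)^-1; masses in Msun, R in Rsun
    def merger_energy(M2, M3, R):
        return 1.0e48 * M2 * M3 / R   # erg

    E_two_MS = merger_energy(0.2, 0.2, 0.2)   # two 0.2-Msun stars: ~2e47 erg
    f_deposited = 5.0e45 / E_two_MS           # ~0.025, i.e. a few per cent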
In the first case, designated as the fiducial run, we inject a mass of M_mer = 0.1 M_⊙ into the AGB envelope, and that mass carries an energy of E_mer = 5 × 10^45 erg. This energy corresponds to several per cent of the energy that two main sequence stars of masses 0.2 M_⊙ liberate when they merge. It also corresponds to the energy that is liberated by the merger of two massive brown dwarfs, although brown dwarfs would inject less mass into the envelope. We simulated a second case with the same injected mass of M_mer = 0.1 M_⊙, but with a larger energy of E_mer = 2.5 × 10^46 erg.

The merger remnant continues its Keplerian motion and exerts a gravitational force on the envelope. We do not consider this in the present paper. The gravitational influence of the tight binary system before the merger, and of the merger product after the merger, will be studied in a future paper.

The energy and mass are injected over a time period of T_mer = 8.8 hours, within a sphere of radius R_mer = 0.2 R_AGB, i.e., with a diameter of 6 basic cells. The energy is inserted as thermal energy of the injected gas, as sketched after this section. The injection time is taken to be several times the orbital period of the tight binary system. During the injection time period, the merger product moves a distance of about 0.01 times the circumference (1 per cent of an orbit).

To verify numerical stability, we ran the simulation without the binary for ten dynamical times of the giant star, and found no noticeable change in the stellar variables, i.e., T(r), P(r), and ρ(r). We end the simulations when a relatively significant amount of mass has left the grid, as we cannot follow the fall back of the bound gas.
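The constant-rate injection can be expressed as a simple source term applied each hydrodynamical time step to the cells inside the injection sphere. The following Python fragment is our own schematic illustration (not the actual pluto implementation), assuming uniform deposition over the masked region and an ideal-gas pressure update:

    GAMMA = 5.0 / 3.0

    def inject(rho, prs, mask, cell_vol, dt, M_mer, E_mer, T_mer):
        # deposit mass and thermal energy at constant rates M_mer/T_mer and
        # E_mer/T_mer, spread uniformly over the masked (spherical) region
        V = (cell_vol * mask).sum()
        rho[mask] += (M_mer / T_mer) * dt / V
        prs[mask] += (GAMMA - 1.0) * (E_mer / T_mer) * dt / V  # e = P/(gamma-1)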
§ RESULTS
We first present the outflow that results from the injection of the mass and energy of the fiducial (low energy) run into the AGB envelope. The mass and energy injection starts at t=0 at the location of the merger product at that time, (x,y,z)_mer-0 = (70 R_⊙, 0, 0), which we mark with a black dot in the figures. The injection process lasts for a time of T_mer = 8.8 hours, which is about 1 per cent of the orbital period. In the first three figures (Figs. <ref>-<ref>) we present some flow properties in the orbital plane z=0. The merger product moves counterclockwise around the centre of the giant star, and its location at each time is marked with a cyan dot.

From Figs. <ref>-<ref> we notice the following flow properties. The injection of the energy leads to a shock wave that propagates outward, i.e., we basically set off an explosion in the envelope. The initial phase, which lasts for about two days, consists of pure expansion of the exploding sphere of mass and energy. The injected mass suffers adiabatic cooling. This is seen as a low temperature region (blue) in the two upper panels of Fig. <ref>. The high temperature regions in the outer envelope are post-shock regions. As seen in Fig. <ref>, a shock propagates in the outer parts of the envelope around the centre. When it closes on itself on the other side of the star it forms a gas prominence and a high pressure region, as seen in the lower left corner of the two lower panels of Fig. <ref>. The high temperature region can lead to a transient event, as we discuss later.

The blast wave that is formed by the injection of energy expands in all directions. About half of the injected material, and the envelope mass that is directly pushed by the energy injection, move first toward the dense parts of the AGB envelope. The kinetic energy of this gas is dissipated in the envelope and does not eject mass to infinity. It rather drives shock waves that travel into and around the star and converge on its opposite side, as mentioned above. The low density gas from near the merger site pushes onto the denser gas toward the centre and accelerates it inward, leading to the development of Rayleigh-Taylor (RT) instabilities. These are clearly seen in panels (b) and (c) of Fig. <ref>.

In Fig. <ref> we present the density in the meridional plane y=0, i.e., a plane perpendicular to the equatorial plane that goes through the centre of the AGB star. As expected, the flow is symmetric on the two sides of the equatorial plane. We can see the disturbance circling the star on both sides, i.e., above and below the equatorial plane. In this first study, where we ignore the gravity of the merger remnant, we see only the influence of the injected energy and mass. We can see that the expanding gas lags behind the Keplerian motion of the merger product (the cyan dot in the figures).

By the end of this run, about 0.03 M_⊙ of gas has flowed out of the numerical grid with a positive energy. More material can escape the gravitational potential barrier with the aid of radiation pressure on dust that is expected to form in the cooling gas. As we do not include radiation pressure on the lifted gas, we somewhat underestimate the unbound mass.

The injected energy of E_mer = 5 × 10^45 erg can be supplied even by the merger of two brown dwarfs. The merger of two low mass main sequence stars, of masses M_2 ≈ M_3 ≃ 0.1-0.2 M_⊙, will release a much larger amount of energy. We simulated one such case, with E_mer = 2.5 × 10^46 erg. The density in the equatorial plane at two times is presented in Fig. <ref>. Comparing to panels b-d in Fig. <ref>, we can immediately see that the AGB star suffers a much more significant distortion. Due to numerical limitations we could not continue this run. It will be repeated in a new set of simulations that will include the self-gravity of the envelope and the gravity of the merger product.

The full distortion of the AGB envelope is best seen in 3D images. In Fig. <ref> we present equi-density surfaces at six times. The formation of a messy circumstellar matter is clearly seen, mainly by following the red colour that depicts a density surface of ρ = 6 × 10^-9 g cm^-3.

We end by presenting the 3D structure of the gas that was injected in the merger process. To follow this gas we use a `tracer', which is a non-physical variable moving with specifically designated gas, the injected mass in the present case. The tracer's value at each point indicates the mass fraction of the injected gas at that point. Thus, regions occupied by the gas ejected in the merger process are marked by a tracer value of ξ=1, and regions where the ejected gas has not reached are marked by ξ=0. Figure <ref> depicts the surface corresponding to a tracer value of ξ=0.5 at t = 37 days. This surface delineates the boundary between the gas ejected in the merger process and its surroundings. The colour of each point on the surface indicates the local density. The expanding gas ejected in the merger process cannot penetrate the dense layers of the AGB envelope, and it expands radially outward in a direction opposite the centre. At late times the gas can be seen to be concentrated, very crudely, on the surface of a pyramid.
There is a small azimuthal velocity component due to the orbital motion during the merger process.

§ DISCUSSION AND SUMMARY
Many rare transient events, some involving the explosion of stars and some involving the complete destruction of a star by another object, have been discovered in recent years, and many more are expected with coming telescopes and surveys. The goal of this study is to present one such very rare event that, nonetheless, might one day be detected as an ILOT. As well, this process can lead to the formation of a very asymmetrical (messy) circumstellar matter, e.g., a messy PN.

The process is that of a tight binary system that enters the envelope of a giant star. Because of the friction inside the envelope, the two stars of the tight binary system merge and release energy and mass inside the envelope. The tight binary system can be composed of any two non-giant stars, e.g., neutron stars, white dwarfs, main sequence stars, horizontal branch stars, and/or brown dwarfs, and any combination of these. The secondary star might even be a massive planet.

In the present study, which is still preliminary, we simply injected energy inside the envelope during a very short time (about 9 hours) relative to the orbital period (about 36 days). We included the gravitational field of the giant star, but kept it constant at its initial value. To isolate the influence of the inserted energy, and due to limited computer resources, we neglect in this first study the following processes. We did not consider the influence of the tight binary system on the structure of the envelope prior to the merger; the envelope is expected to rotate and to have an oblate structure, but we started at t=0 with a spherical giant envelope. We did not include the gravity of the merger product or the changing gravity of the deformed envelope (self-gravity). The merger process might lead to the formation of an accretion disk around the surviving star, and this disk might launch jets; we did not study this process here. We plan future simulations that will include these effects. Another effect that we did not include, and which is very complicated to include, is radiative transfer.

We present the evolution of the flow for the low-energy (fiducial) simulation in Figs. <ref>-<ref>, and in the 3D images in Figs. <ref> and <ref>. We note that even the merger of two brown dwarfs can release the injected energy of E_mer = 5 × 10^45 erg in this case. We summarize the main results of this simulation as follows.

(1) An ILOT. A shock propagates through the outskirts of the giant envelope. This is best seen by the deep red colour in Fig. <ref>. In addition to adiabatic cooling, the hot gas is expected to cool by radiation. This will lead to an outburst in the visible that will evolve to the red as dust forms in the ejecta. This ILOT will last for several weeks. The exact light curve depends on the amount of energy injected and on how deep inside the envelope it is injected; this is not studied in the present paper.

(2) A very asymmetrical mass ejection. The flow that results from the merger process does not have a symmetry around an axis that goes through the core and the location of the merger, because the energy is injected with the velocity of the tight binary system around the giant star. As can be seen in the different figures, a messy outflow is formed. This will lead to the formation of a messy nebula, and hence this is one of the triple stellar evolutionary routes that lead to the formation of messy PNe <cit.>.
In the present simulations the flow possesses a symmetry about the orbital plane. In reality, however, the orbital plane of the tight binary system need not lie in the orbital plane of the triple stellar system, in which case there will be a departure from plane symmetry as well.

(3) Ejected mass. The shock that propagates through the giant envelope causes prominences and ejection of mass for several weeks. Part of the mass leaves the grid with a positive energy, and hence escapes the system. In the simulation that we conducted, a relatively small amount of energy was deposited within the giant envelope, and hence only about 0.03 M_⊙ has a positive energy and leaves the system. Although this is not much mass, it is enough to leave an imprint on the descendant nebula.

We ran one case with a five times larger injected energy. In Fig. <ref> we present the first 13 days of this simulation, with E_mer = 2.5 × 10^46 erg. As expected, the envelope is much more distorted, and the outflow from the star is messy already at this early time. Due to numerical limitations we could not follow this highly distorted envelope. We will study such violent outbursts in a future paper, where we will modify the numerical code to include the self-gravity of the highly distorted envelope.

A wider short summary of our study can be phrased as follows. Rare triple stellar evolution routes, such as the one studied here, extend the domain of binary interaction to account for different types of astrophysical objects, like ILOTs and shaped circumstellar nebulae, and strengthen the binary interaction model.

§ ACKNOWLEDGMENTS
We thank an anonymous referee for useful comments. We thank Efrat Sabach for her help with the AGB model, and Michael Refaelovich for his assistance. This research was supported by the Prof. A. Pazy Research Foundation. N.S. is supported by the Charles Wolfson Academic Chair.

References
Akashi, M., & Soker, N. 2008, , 391, 1063
Akashi, M., & Soker, N. 2017, arXiv:1701.05460
Ali, A., Dopita, M. A., Basurah, H. M., Amer, M. A., Alsulami, R., & Alruhaili, A. 2016, , 462, 1393
Akras, S., Boumis, P., Meaburn, J., Alikakos, J., Lopez, J. A., & Goncalves, D. R. 2015, , 452, 2911
Akras, S., Clyne, N., Boumis, P., Monteiro, H., Goncalves, D. R., Redman, M. P., & Williams, S. 2016, , 457, 3409
Aller, A., Miranda, L. F., Olguín, L., Vazquez, R., Guillen, P. F., Oreiro, R., Ulla, A., & Solano, E. 2015a, , 446, 317
Aller, A., Montesinos, B., Miranda, L. F., Solano, E., & Ulla, A. 2015b, , 448, 2822
Balick, B., Huarte-Espinosa, M., Frank, A., Gomez, T., Alcolea, J., Corradi, R. L. M., & Vinkovic, D. 2013, , 772, 20
Bear, E., & Soker, N. 2017, , 837, L10
Boffin, H. 2015, 19th European Workshop on White Dwarfs, 493, 527
Boffin, H. M. J., Miszalski, B., Rauch, T., Jones, D., Corradi, R. L. M., Napiwotzki, R., Day-Jones, A. C., & Köppen, J. 2012, Science, 338, 773
Bond, H. E. 2000, Asymmetrical Planetary Nebulae II: From Origins to Microstructures, 199, 115
Bond, H. E., Ciardullo, R., Esplin, T. L., Hawley, S. A., Liebert, J., & Munari, U. 2016, , 826, 139
Bond, H. E., Liller, W., & Mannery, E. J. 1978, , 223, 252
Bond, H. E., & Livio, M. 1990, , 355, 568
Bond, H. E., O'Brien, M. S., Sion, E. M., Mullan, D. J., Exter, K., Pollacco, D. L., & Webbink, R. F. 2002, Exotic Stars as Challenges to Evolution, 279, 239
Chen, Z., Frank, A., Blackman, E. G., Nordhaus, J., & Carroll-Nellenback, J. 2017, arXiv:1702.06160
Chen, Z., Nordhaus, J., Frank, A., Blackman, E. G., & Balick, B. 2016, , 460, 4182
Chiotellis, A., Boumis, P., Nanouris, N., Meaburn, J., & Dimitriadis, G. 2016, , 457, 9
Corradi, R. L. M., García-Rojas, J., Jones, D., & Rodríguez-Gil, P. 2015, , 803, 99
Danehkar, A., Parker, Q. A., & Ercolano, B. 2013, , 434, 1513
Decin, L., Richards, A. M. S., Neufeld, D., Steffen, W., Melnick, G., & Lombaert, R. 2015, , 574, A5
De Marco, O. 2015, in Physics of Evolved Stars - A conference dedicated to the memory of Olivier Chesneau, Eds. E. Lagadec, F. Millour and T. Lanz, EAS Publications Series, 71, 357
De Marco, O., Long, J., Jacoby, G. H., Hillwig, T., Kronberger, M., Howell, S. B., Reindl, N., & Margheim, S. 2015, , 448, 3587
De Marco, O., & Soker, N. 2011, , 123, 402
Douchin, D., De Marco, O., Frew, D. J., Jacoby, G. H., Jasniewicz, G., Fitzgerald, M., Passy, J.-C., Harmer, D., Hillwig, T., & Moe, M. 2015, , 448, 3132
Eggleton, P. P., & Verbunt, F. 1986, , 220, 13P
Exter, K., Bond, H. E., Stassun, K. G., Smalley, B., Maxted, P. F. L., & Pollacco, D. L. 2010, , 140, 1414
Fabian, A. C., & Hansen, C. J. 1979, , 187, 283
Fang, X., Guerrero, M. A., Miranda, L. F., Riera, A., Velazquez, P. F., & Raga, A. C. 2015, , 452, 2445
García-Rojas, J., Corradi, R. L. M., Monteiro, H., Jones, D., Rodriguez-Gil, P., & Cabrera-Lavers, A. 2016, , 824, L27
García-Segura, G., Villaver, E., Langer, N., Yoon, S.-C., & Manchado, A. 2014, , 783, 74
Gorlova, N., Van Winckel, H., Ikonnikova, N. P., Burlak, M. A., Komissarova, G. V., Jorissen, A., Gielen, C., Debosscher, J., & Degroote, P. 2015, , 451, 2462
Han, Z., Podsiadlowski, P., & Eggleton, P. P. 1995, , 272, 800
Hillwig, T. C., Bond, H. E., Frew, D. J., Schaub, S. C., & Bodman, E. H. L. 2016a, , 152, 34
Hillwig, T. C., Frew, D. J., Louie, M., De Marco, O., Bond, H. E., Jones, D., & Schaub, S. C. 2015, , 150, 30
Hillwig, T., Jones, D., De Marco, O., Bond, H., Margheim, S., & Frew, D. 2016b, , 832, 125
Huang, P.-S., Lee, C.-F., Moraghan, A., & Smith, M. 2016, , 820, 134
Huarte-Espinosa, M., Frank, A., Balick, B., Blackman, E. G., De Marco, O., Kastner, J. H., & Sahai, R. 2012, , 424, 2055
Humphreys, R. M., Davidson, K., & Smith, N. 1999, , 111, 1124
Iben, I., Jr., & Tutukov, A. V. 1989, Planetary Nebulae, 131, 505
Jones, D. 2015, EAS Publications Series, 71, 113
Jones, D. 2016, Journal of Physics Conference Series, 728, 032014
Jones, D., & Boffin, H. M. J. 2017a, , 466, 2034
Jones, D., & Boffin, H. M. J. 2017b, Nature Astronomy, 1, 0117
Jones, D., Boffin, H. M. J., Rodríguez-Gil, P., Wesson, R., Corradi, R. L. M., Miszalski, B., & Mohamed, S. 2015, , 580, A19
Jones, D., Van Winckel, H., Aller, A., Exter, K., & De Marco, O. 2017, arXiv:1703.05096
Jones, D., Wesson, R., García-Rojas, J., Corradi, R. L. M., & Boffin, H. M. J. 2016, , 455, 3263
Kashi, A., & Soker, N. 2010, , 723, 602
Kiminki, M. M., Reiter, M., & Smith, N. 2016, ,
Livio, M., & Shaviv, G. 1975, , 258, 308
Madappatt, N., De Marco, O., & Villaver, E. 2016, ,
Manick, R., Miszalski, B., & McBride, V. 2015, , 448, 1789
Martínez González, M. J., Asensio Ramos, A., Manso Sainz, R., Corradi, R. L. M., & Leone, F. 2015, , 574, A16
Michaely, E., & Perets, H. B. 2014, , 794, 122
Mignone, A., Bodo, G., Massaglia, S., et al. 2007, , 170, 228
Miszalski, B., Boffin, H. M. J., & Corradi, R. L. M. 2013, , 428, L39
Miszalski, B., Manick, R., & McBride, V. 2015, in Physics of Evolved Stars - A conference dedicated to the memory of Olivier Chesneau, Eds. E. Lagadec, F. Millour and T. Lanz, EAS Publications Series, 71, 117 (arXiv:1507.07707)
Močnik, T., Lloyd, M., Pollacco, D., & Street, R. A. 2015, , 451, 870
Montez, R., Jr., Kastner, J. H., Balick, B., et al. 2015, , 800, 8
Morris, M. 1981, , 249, 572
Morris, M. 1987, , 99, 1115
Morris, T., & Podsiadlowski, P. 2009, , 399, 515
Nordhaus, J., & Blackman, E. G. 2006, , 370, 2004
Paxton, B., Bildsten, L., Dotter, A., et al. 2011, , 192, 3
Paxton, B., Cantiello, M., Arras, P., et al. 2013, , 208, 4
Paxton, B., Marchant, P., Schwab, J., et al. 2015, , 220, 15
Paczynski, B. 1985, Cataclysmic Variables and Low-Mass X-ray Binaries, 113, 1
Portegies Zwart, S. F., & van den Heuvel, E. P. J. 2016, , 456, 3401
Rechy-García, J., Velázquez, P. F., Peña, M., & Raga, A. C. 2017, , 464, 2318
Sahai, R., Scibelli, S., & Morris, M. R. 2016, , 827, 92
Sahai, R., & Trauger, J. T. 1998, , 116, 1357
Soker, N. 1990, , 99, 1869
Soker, N. 1994, , 270, 774
Soker, N. 2004, , 350, 1366
Soker, N. 2016, , 455, 1584
Soker, N., & Hadar, R. 2002, , 331, 731
Soker, N., & Harpaz, A. 1992, , 104, 923
Soker, N., Zucker, D. B., & Balick, B. 1992, , 104, 2151
Tocknell, J., De Marco, O., & Wardle, M. 2014, , 439, 2014
Zijlstra, A. A. 2015, , 51, 221
http://arxiv.org/abs/1704.08438v2
{ "authors": [ "Shlomi Hillel", "Ron Schreier", "Noam Soker" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170427053916", "title": "An outburst powered by the merging of two stars inside the envelope of a giant" }
[email protected]
1Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA
2Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
3Dunlap Institute for Astronomy and Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada
4Centre of Planetary Science, University of Toronto, Scarborough Campus Physical & Environmental Sciences, Toronto, M1C 1A4, Canada
5Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
6Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
7Department of Physics, University of Warwick, Gibbet Hill Road, Coventry, CV4 7AL, UK
8National Science Foundation Graduate Research Fellow

The reduction of K2's Campaign 9 (2C9) microlensing data is challenging, mostly because of the very crowded field and the unstable pointing of the spacecraft. In this work, we present the first method that can extract microlensing signals from this 2C9 data product. The raw light curves and the astrometric solutions are first derived, using the techniques from Soares-Furtado et al. and Huang et al. for K2 dense field photometry. We then minimize and remove the systematic effects by performing simultaneous modeling with the microlensing signal. We also derive precise (K_p-I) vs. (V-I) color-color relations that can predict the microlensing source flux in the Kepler bandpass. By implementing the color-color relation in the light curve modeling, we show that the microlensing parameters can be better constrained. In the end, we use two example microlensing events, OGLE-2016-BLG-0980 and OGLE-2016-BLG-0940, to test our method.

§ INTRODUCTION
One challenge in Galactic microlensing observations is to derive the physical parameters of the lens object/system from the observed signal. In particular, three observables are required in order to disentangle the lens (total) mass M_L, the lens distance D_L and the lens-source relative proper motion μ_rel: the event timescale t_E, the angular Einstein radius θ_E, and the microlensing parallax parameter π_E that quantifies the lens-source relative displacement in units of θ_E <cit.>.

Of these three observables, π_E is usually the most crucial one. This is because the timescale t_E is almost always measured precisely except for extremely long (t_E ≳ 300 d) or short (t_E ≲ 2 d) events, and θ_E is measurable in events that show planetary or binary anomalies <cit.>. For the majority of single-lens events, for which θ_E cannot be measured directly from the light curves, measurements of π_E can still put very tight (∼25%) constraints on the inferred lens mass and distance <cit.>.
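For concreteness, these three observables translate into the lens properties through the standard microlensing relations M_L = θ_E/(κ π_E), π_rel = θ_E π_E, and μ_rel = θ_E/t_E, with κ ≡ 4G/(c^2 AU) ≃ 8.14 mas/M_⊙. A minimal Python sketch (our own illustration; the source distance D_S is an assumed input):

    KAPPA = 8.144  # mas per solar mass, 4G/(c^2 AU)

    def lens_properties(tE_days, thetaE_mas, piE, DS_kpc=8.0):
        ML = thetaE_mas / (KAPPA * piE)           # lens mass [Msun]
        pi_rel = thetaE_mas * piE                 # relative parallax [mas]
        DL = 1.0 / (pi_rel + 1.0 / DS_kpc)        # lens distance [kpc]
        mu_rel = thetaE_mas / (tE_days / 365.25)  # proper motion [mas/yr]
        return ML, DL, mu_rel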
Simultaneous observations from at least two well-separated (∼1 AU) observatories have always been considered the best way to measure the microlensing parallax <cit.>. The microlensing observations using the Spitzer spacecraft, in an Earth-trailing orbit, have been very successful and have yielded important results <cit.>. However, due to its small field of view, Spitzer is only able to follow up microlensing events found from the ground, and, given the routine of selecting and uploading events, Spitzer is not able to measure π_E for any event with a microlensing timescale t_E ≲ 4 days. This precludes Spitzer from observing events that are potentially caused by free-floating planets (FFPs), which usually have t_E ≲ 2 days <cit.>. Another significant disadvantage of the follow-up mode of Spitzer microlensing is that it is difficult to quantify the selection effect, unless it is carefully controlled in designing the experiment <cit.>.

Similar to Spitzer, the Kepler space telescope <cit.> is also in a heliocentric orbit and thus suitable for microlensing observations. Indeed, Campaign 9 (C9) of the two-wheeled Kepler mission (K2, ), or 2C9, is dedicated to Galactic microlensing observations <cit.>.

However, the reduction of the 2C9 microlensing data is challenging. In the K2 era, the pointing of the spacecraft drifts by more than one pixel (4") every ∼6.5 hr, resulting in significant systematic effects. This can nevertheless be overcome with dedicated techniques. For example, <cit.> pointed out that the systematics correlate with the stellar centroid positions. By using the centroids determined by the point source extractor, <cit.> achieved a photometric precision for isolated K2 targets that is only a factor of 2-3 worse than that of the original Kepler mission. <cit.> undertook a similar approach, but determined the centroids of target stars by matching isolated reference stars in the K2 frames to the UCAC4 catalog <cit.>.

Methods have also been proposed for K2 crowded field photometry. <cit.> applied the PSF (point spread function) neighbor-subtraction method, well established for other space telescopes, to the Campaign 0 cluster stars. While they were able to achieve ∼1% photometric precision on cluster stars with Kepler magnitude K_p = 18, their approach requires a careful reconstruction of the K2 PSF. <cit.> performed the more traditional image subtraction technique on a similar data set, and achieved a similar photometric precision.

The real challenge in reducing the 2C9 microlensing data comes from the fact that the microlensing field is the most crowded of all K2 fields. In particular, the 2C9 field is a factor of ∼10 denser in stellar number density than the Campaign 0 clusters that were analyzed by <cit.> and <cit.>. As Figure 8 of <cit.> illustrates, there is on average one star brighter than I=18 on each Kepler pixel, and one star brighter than I=15.5 within a 10 pixel^2 aperture. For comparison, typical microlensing sources have I≈18 at baseline.

In this work, we apply the photometric methods of <cit.> and <cit.> to the 2C9 microlensing data set, and develop modeling techniques that can properly extract the microlensing signals. Short summaries of the 2C9 project and the photometric methods are given in Section <ref>. The modeling techniques are presented in Section <ref>. We then apply our method to two example microlensing events in Section <ref>, and discuss our method and its implications in Section <ref>.

§ 2C9 DATA REDUCTION

§.§ 2C9 Description
We provide here a brief summary of the 2C9 observations, and refer interested readers to <cit.> for a detailed description. 2C9 was conducted between 2016 April 22 [The nominal start date of 2C9 was 2016 April 7. However, the spacecraft entered Emergency Mode prior to this intended start date, and the full recovery led to this actual start date.] and 2016 June 2. Unlike most other K2 campaigns, this campaign was divided into two sub-campaigns, with April 22 to May 18 for C9a and May 22 to June 2 for C9b, and data from C9a were downloaded during the mid-campaign break (May 19-21, or 7527.4 < JD-2450000 < 7531.0), in order to achieve a total super stamp area as large as 3.7 deg^2.
The five super stamps are observed on two different CCD modules, and read out in channels 30, 31, 32, 49 and 52. The sky region covered by the super stamp, centered at approximately (R.A., decl.)_2000 = (17^h 57^m, -28°24'), was chosen to maximize the total microlensing event rate <cit.> as well as to accommodate ground-based survey observations. See Figure 7 of <cit.> for the positions of 2C9 and the microlensing super stamp. In addition, microlensing events that were identified from the ground and expected to be observable by Kepler were added as late targets, and a postage stamp with a size of a few hundred pixels was added for each late target. In total, 70 unique late targets were observed in this way. All observations were in the long cadence (30 minutes) mode.

Together with simultaneous observations from the ground, [See Figure 9 and Table 3 of <cit.> for the ground-based observing resources concurrent with 2C9.] 2C9 is expected to yield microlensing parallax measurements for nearly 200 events. Simulations show that 2C9 can detect up to 10 short timescale events, including one with the finite-source effect <cit.> that will enable a direct mass measurement <cit.>. The survey mode of 2C9 also makes it easy to model the detection efficiency, and thus possible to study statistically the mass function of other extremely faint or even dark objects, as well as to combine with the Spitzer microlensing sample in order to refine the Galactic distribution of planets <cit.>.

§.§ Differential Photometry on 2C9 Data
We acquire the 2C9 target pixel files (TPFs) through NASA's Barbara A. Mikulski Archive for Space Telescopes (MAST) [<https://archive.stsci.edu/k2/>], and then stitch all TPFs together into "Sparse Full Frame Images" (SFFIs). In order to ensure a proper reconstruction, these SFFIs are cross-checked with the full frame images downloaded for Campaign 9.

We then perform differential photometry on the 2C9 data following the methods described in <cit.> and <cit.>. This is done for C9a and C9b separately. We first find the astrometric solution of each individual frame by matching a large number (∼1000) of bright (R<16) stars of the UCAC4 catalog <cit.> to the point sources found in the frame. This astrometric information describes the pointing drift pattern of the spacecraft, and is used in both the photometry extraction and the light curve modeling. For each K2 module, this pointing drift pattern is defined as the X and Y positions of the center pixel at each epoch. As an example, we show the pointing pattern of channel 31 in Figure <ref>.

We use the package of <cit.> for the image subtractions. This package implements the classic method of <cit.>, and was developed for reducing images from the HATNet project <cit.>. We first select the frame that has the median drift as the astrometric reference frame. Then, we select another 20 frames that have approximately the same (offset < 0.15 pixel) pointing as the astrometric reference frame, and take their median as the master photometric reference frame.
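The reference-frame construction is simple to express in code. Below is a schematic Python sketch of this step (our own illustration, not the actual pipeline), assuming the per-frame drifts from the astrometric solutions are already available:

    import numpy as np

    def master_reference(frames, drifts, ref_drift, max_offset=0.15, n_use=20):
        # frames: (N, ny, nx) image stack; drifts: (N, 2) pixel offsets per epoch
        d = np.hypot(drifts[:, 0] - ref_drift[0], drifts[:, 1] - ref_drift[1])
        close = np.argsort(d)[:n_use]          # the n_use best-aligned frames
        if d[close].max() > max_offset:
            raise ValueError("fewer than n_use frames within max_offset")
        return np.median(frames[close], axis=0)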
We adopt the convolution kernel of <cit.> to create the difference images between individual frames and the master photometric reference frame. This kernel, a discrete kernel with a half-size of two pixels and no spatial variation across the frame, was shown to produce the best photometry for crowded fields observed in K2 Campaign 0.

Finally, we extract raw light curves from the difference images and the master photometric reference image, using the centroids determined by the astrometric solution. This is done for 40 circular apertures with radii ranging from 2.5 to 5.5 pixels. For each target, the best aperture is chosen to be the one with the minimum scatter after the detrending procedure. The raw light curves of two selected microlensing events are shown in the upper panels of Figures <ref> and <ref>.

§ DETRENDING & MICROLENSING MODELING

§.§ Methodology
As earlier studies show <cit.>, the K2 raw light curves show strong systematic effects that correlate with the spacecraft motion. A detrending process is therefore performed, yielding significantly improved photometry while preserving the physical signals. In the transit case, the detrending can be done separately from the light curve modeling, because the transit signal lasts for ≲1 day. This is nevertheless not the case for microlensing signals, which typically last tens of days and cannot always be traced by a low-order spline. The External Parameter Decorrelation (EPD) technique proposed by <cit.> to preserve long-term trends in the raw light curve does not work well either: in almost all cases (∼10 events, including the two demonstrated in Section <ref>) that we have checked, this technique either destroys the microlensing signal or distorts it significantly. Instead, we perform a simultaneous modeling of the microlensing signal and the systematic effects.

For a given 2C9 light curve segment j, the total flux at time t_i is modeled as

F^j(t_i) = F_s^kep · (A_i^kep - 1) + c_0^j + c_1^j F^bkg,j_i + c_2^j σ(F^bkg,j_i) + ∑_k=1^N_max [ c_k,1^j sin(2π k X_i) + c_k,2^j cos(2π k X_i) + c_k,3^j sin(2π k Y_i) + c_k,4^j cos(2π k Y_i) ] .

Here F_s^kep is the unmagnified flux of the microlensing source in the Kepler band, A_i^kep is the source brightness magnification as seen by Kepler at the given t_i, {c_0, c_1, c_2, c_k,1, c_k,2, c_k,3, c_k,4} are the detrending parameters, F^bkg_i and σ(F^bkg_i) are the background flux (i.e., the blend flux in microlensing terms) and its uncertainty, and X_i and Y_i are the relative drifts of the module at t_i along the two directions, respectively. Unlike F_s^kep, A_i, and the detrending coefficients, the quantities F^bkg_i, σ(F^bkg_i), X_i, and Y_i are known for any given t_i from the photometry and astrometry results of Section <ref>.

The sin and cos terms (a.k.a. the detrending terms) in Equation (<ref>) account for the inter-pixel and intra-pixel sensitivity variations. A larger N_max is in general more efficient at removing trends shorter than 6.5 hr, which in our case are essentially systematic. However, including high-order terms also carries the risk of over-fitting short-term trends while under-fitting the long-term trend. For our data, we find N_max ≤ 5 to be reasonable, and suggest using the upper bound unless it fails catastrophically. Nevertheless, the resulting microlensing parameters are not very sensitive to the choice of N_max. See Section <ref> for example events and relevant discussions.
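Since Equation (<ref>) is linear in F_s^kep and in all detrending coefficients once A_i^kep is specified, these can be solved by weighted linear least squares inside each trial of the nonlinear (microlensing) parameters. A minimal Python sketch of the design matrix and solve (our own illustration of the idea; variable names are placeholders):

    import numpy as np

    def design_matrix(A, Fbkg, sig_Fbkg, X, Y, Nmax=5):
        # columns: (A-1) for Fs_kep, then 1, Fbkg, sigma(Fbkg), and Fourier terms
        cols = [A - 1.0, np.ones_like(A), Fbkg, sig_Fbkg]
        for k in range(1, Nmax + 1):
            cols += [np.sin(2*np.pi*k*X), np.cos(2*np.pi*k*X),
                     np.sin(2*np.pi*k*Y), np.cos(2*np.pi*k*Y)]
        return np.column_stack(cols)

    def solve_segment(F, Ferr, A, Fbkg, sig_Fbkg, X, Y):
        M = design_matrix(A, Fbkg, sig_Fbkg, X, Y)
        w = 1.0 / Ferr                      # weight rows by inverse flux errors
        coeff, *_ = np.linalg.lstsq(M * w[:, None], F * w, rcond=None)
        return coeff                        # coeff[0] is Fs_kep; the rest are c's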
It is crucial to include ground-based data in the microlensing modeling, in order to yield better constrained as well as self-consistent microlensing parameters. This is because, although the microlensing parallax can be estimated via the differences in the event peak time t_0 and impact parameter u_0 as seen from the satellite and from the ground <cit.>,

π_E ≈ (AU/D_⊥) (Δt_0/t_E, Δu_0) ,

where D_⊥ is the projected separation between the satellite and Earth, an accurate and self-consistent solution has to be derived by simultaneously modeling the space-based and ground-based light curves and taking into account the geocentric motion of the satellite [This can be acquired from the JPL Horizons website: <http://ssd.jpl.nasa.gov/?horizons>. Note that the satellite-Earth relative motion can be ignored only if this relative velocity is considerably smaller than the lens-source projected velocity. See <cit.> for a detailed discussion.]. Therefore, we choose to model all the variable terms simultaneously, including the microlensing signal in the ground-based data, the microlensing signal in the K2C9 data, and the systematics in the K2C9 data. Another reason for doing so is that the detrending of the K2C9 light curve can be model-dependent, especially for events with partial coverage and/or long timescales. In such cases, inconsistent results may appear if one models the K2C9 light curve alone and then measures π_E jointly with the ground-based data. For the ground-based data sets, the total flux at time t_i for data set j is modeled by

F^j(t_i) = F_s^j · A_i^j + F_B^j ,

where F_s is the source flux at baseline. The parameter F_B accounts for the flux that is within the aperture but does not participate in the event. Unlike the case of K2C9 (Equation <ref>), this blend flux remains constant throughout all observations. Different ground-based data sets are assigned different flux parameters F_s and F_B.

There have been many methods and algorithms proposed to compute the microlensing magnification A for given microlensing parameters, for single-lens events <cit.> and multiple-lens events <cit.>. For the purpose of this paper, we only consider the relatively simple single-lens events. The magnification A can be computed from a set of microlensing parameters {t_0, u_0, t_E, ρ, π_E,E, π_E,N}: t_0 is the peak time of the event as seen from the ground, u_0 is the impact parameter normalized to the angular Einstein radius θ_E, t_E is the event timescale, ρ is the source angular radius normalized to θ_E, and π_E,E and π_E,N are the components of the microlensing parallax vector π_E along the east and north directions, respectively. Note that the parameters t_0, u_0 and t_E are defined in the geocentric frame <cit.>.

To summarize the degrees of freedom in our model, we have in total tens of free parameters: a set of microlensing parameters, which are {t_0, u_0, t_E, ρ, π_E,E, π_E,N} in the single-lens case, [Note that for the majority of single-lens events there is no finite-source effect to constrain ρ <cit.>, and therefore ρ is usually a fixed parameter.] shared by all data sets; different flux parameters F_s and F_B for the different ground-based data sets; one common source flux parameter F_s^kep for all K2C9 light curve segments; and different detrending parameters {c_0, c_1, c_2, c_k,1, c_k,2, c_k,3, c_k,4} for the three different K2C9 light curve segments. We employ a Markov Chain Monte Carlo (MCMC) analysis to search for the best solution and estimate the uncertainties of the parameters. In this MCMC analysis, we only keep the microlensing parameters and F_s^kep as free chain parameters, and derive the best values of all other parameters analytically via maximum likelihood estimation <cit.>. This has negligible effects on the best-fit solution as well as on the uncertainties of the MCMC parameters, [This is strictly true only if the posterior distributions of all parameters (microlensing & detrending parameters) are perfectly Gaussian. We have tested with examples that this assumption is valid under our parameterization.] but can significantly reduce the dimensionality of the parameter space for the MCMC searches.
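Since, for fixed non-linear (microlensing) parameters, the model is linear in F_s^kep and in all detrending and flux parameters, the maximum-likelihood values of those linear parameters can be obtained by weighted linear least squares at each MCMC step. The sketch below is a generic illustration of this standard technique, with hypothetical argument names.

```python
import numpy as np

def solve_linear_params(design, flux, flux_err):
    """Given a design matrix whose columns are the linear regressors,
    e.g. (A-1), 1, F_bkg, sigma(F_bkg), sin/cos terms..., return the
    maximum-likelihood linear coefficients and the resulting chi^2."""
    w = 1.0 / flux_err                # inverse-sigma weights
    Aw = design * w[:, None]          # weighted design matrix
    bw = flux * w
    coeff, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    chi2 = np.sum((bw - Aw @ coeff) ** 2)
    return coeff, chi2
```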
§.§ Microlensing Source Color-color Relation: (K_p-I) vs. (V-I)

We derive the relation between K_p-I and V-I that applies to the source stars of K2C9 microlensing events. Here K_p is the stellar magnitude in the Kepler bandpass. Such a color-color relation is important for multiple reasons. Technically, given the high noise level in the K2C9 data and the complex model we use, we would like to assess our results by comparing the derived K_p-I colors with the values expected from this color-color relation. Scientifically, knowing the source flux in the Kepler bandpass can be crucial for constraining the parallax parameters of events that are extremely faint and/or do not peak within the K2C9 window. This has been demonstrated in ground-based observations <cit.> and the Spitzer microlensing campaigns <cit.>. It can also play a crucial role in breaking the general four-fold degeneracy in the single-satellite experiment <cit.>.

In previous studies, the color-color relations were always derived based on nearby (<2') isolated stars. However, such an approach is not applicable to K2C9 microlensing simply because there are not enough nearby stars that are isolated. Instead, we derive a theoretical color-color relation between K_p-I and V-I, using synthetic stellar spectra and known bandpass transmission functions. We choose V and I mostly because they are the primary bandpasses used in ground-based microlensing surveys. The advantage of this choice is that the combination of V and I essentially covers the broad K_p bandpass, making the derived color-color relation less sensitive to the details of the stellar spectrum and the interstellar extinction.

In principle, one can also derive this color-color relation by using stars that have been observed by Kepler, for example stars in the prime Kepler field. Indeed, by starting from the Kepler Input Catalog <cit.> and using the filter transformations between (V, I) and (g, r, i, z) given in <cit.>, we find the following linear relation between K_p-I and V-I for 0.5 < V-I < 1.5:

K_p-I = 0.729 (V-I) + 0.044 .

However, this relation may not apply to the microlensing sources in K2C9 for several reasons. First, the Kepler stars are preferentially solar-like (FGK main-sequence) stars, whereas the microlensing sources are generally evolved stars with possibly later stellar types. Second, the interstellar extinction is very different for these two stellar samples; this difference appears in both the total amount of extinction and the shape of the extinction curve.

We therefore want to derive a color-color relation that better applies to microlensing stars. We start from the PHOENIX synthetic stellar spectra <cit.>, and use the {K_p, V, I} wavelength response functions <cit.> to compute the stellar flux in each bandpass. This routine is applied to various stars with effective temperatures between 3000 K and 8000 K, surface gravities log g = (2, 3, 4), and metallicities [Fe/H] = (-2.0, 0.0, +0.5). Then, for chosen extinction parameters A_I and R_I, the K_p-I and V-I colors can be derived. We show in Figure <ref> all the synthetic stellar colors for typical extinction parameters, R_I = (1.0, 1.3, 1.5) and A_I = (1, 2, 3). Because the extinction parameters for bulge stars can be determined from red clump stars <cit.>, the K_p-I vs. V-I color-color relations are provided for different combinations of R_I and A_I, and are illustrated in Figure <ref>.
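The synthetic-photometry step amounts to reddening a model spectrum, integrating it through each response curve with photon weighting, and differencing the resulting magnitudes. A minimal sketch follows; it assumes the spectrum, response functions and extinction curve A(λ) are tabulated on a common wavelength grid, and it omits the band zero points, which only add a constant offset to the derived color.

```python
import numpy as np

def synthetic_color(wave, flux, resp_a, resp_b, a_lambda):
    """Color m_a - m_b of a reddened spectrum through two response
    curves (zero points omitted). wave [A], flux [erg/s/cm^2/A];
    resp_a, resp_b and a_lambda are sampled on `wave`."""
    reddened = flux * 10.0 ** (-0.4 * a_lambda)   # apply extinction A(lambda)

    def mag(resp):
        # photon-counting detectors weight the energy flux by lambda
        return -2.5 * np.log10(np.trapz(reddened * resp * wave, wave)
                               / np.trapz(resp * wave, wave))

    return mag(resp_a) - mag(resp_b)
```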
As Figure <ref> illustrates, the linear relation we derived from Kepler stars (Equation <ref>) only applies to stars with V-I ≲ 1.5. For larger V-I values, the K_p-I color is better described by a quadratic function of V-I:

K_p-I = a_2 (V-I)^2 + a_1 (V-I) + a_0 .

The coefficients are given in Figure <ref> for different combinations of R_I and A_I. As shown in the residual plots of Figure <ref>, these quadratic color-color relations can predict the K_p-I color to within ∼0.02 mag for a broad range of V-I colors. The remaining uncertainty is primarily due to metallicity effects. Such a precision is more than adequate for our purpose, because the source V-I color measured from the ground-based data has a typical uncertainty ≳0.05 mag.

§ APPLICATIONS TO KNOWN MICROLENSING EVENTS

We use two microlensing events to demonstrate the ability of our method to measure the microlensing parallax parameter. These events were found and observed by the fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV, <cit.>) using its 1.3 m Warsaw Telescope at the Las Campanas Observatory in Chile. These events do not show any anomalous features due to lens companions, based on the reasonably good coverage from OGLE. [It is still possible that there are anomalous features in the K2C9 data, because the space-based and ground-based light curves are sensitive to different regions of the planetary parameter space. See <cit.> for an example. However, this is unlikely in most cases, based on various other constraints. See <cit.> and <cit.> for the cases of OGLE-2015-BLG-0961 and OGLE-2015-BLG-1482, respectively.] Therefore, the microlensing model of these events only involves the single-lens parameters {t_0, u_0, t_E, ρ, π_E,E, π_E,N}.

For each event, the K2C9 light curve that has the minimum scatter after detrending is used. We model the space-based and ground-based data simultaneously following the method of Section <ref>. The modeling is performed for different values of N_max, in order to check the stability of our method and the consistency with expectations. For the latter, we specifically compare the K_p-I color that is given by the best-fit model with the one that is predicted by the color-color relation. The calibration of the OGLE-IV photometric data to the standard system is then required; this is done by following the method of <cit.>, which is accurate to 0.02 mag <cit.>. For the Kepler data, we use a zero point ZP = 25 to convert between flux values and K_p magnitudes.

§.§ OGLE-2016-BLG-0980: A Faint Event with Quiet Background

Table: Best-fit parameters and associated uncertainties of the different fits for event OGLE-2016-BLG-0980. For the last one, we require (K_p-I)_s = 1.05 ± 0.04 and allow for a 3-σ range.

Parameters       OGLE-only    OGLE+K2      OGLE+K2 (color constraint)
t_0 - 2450000    7556.976(6)  7556.979(5)  7556.979(5)
u_0              0.0591(22)   0.0616(15)   0.0608(15)
t_E (days)       21.0(5)      20.5(5)      20.6(4)
π_E,E            --           0.114(14)    0.116(8)
π_E,N            --           0.056(9)     0.055(6)
(K_p-I)_s        --           1.15(11)     1.04(5)

Event OGLE-2016-BLG-0980 [This is the same event as OGLE-2016-BLG-1020. See the OGLE event pages at <http://ogle.astrouw.edu.pl/ogle4/ews/2016/blg-0980.html> and <http://ogle.astrouw.edu.pl/ogle4/ews/2016/blg-1020.html>.]
has a typical timescale (t_E = 21 days) and a relatively faint source (I_s = 19.004 ± 0.036) with a typical color [(V-I)_s = 1.425 ± 0.036]. According to <cit.>, the extinction parameters toward this event are A_I = 0.90 and R_I = 1.23. With these extinction parameters and by interpolating the (K_p-I) vs. (V-I) color-color relations given in Figure <ref>, we expect the source K_p-I color to be 1.05 ± 0.04.

We model the K2C9 data and the OGLE data simultaneously following the method given in Section <ref>, and provide the parameters with their associated uncertainties in Table <ref>. As a demonstration of the method, rather than a detailed characterization of the event, we only focus on one out of the four degenerate solutions. Figure <ref> shows the raw K2C9 light curve of OGLE-2016-BLG-0980. The microlensing signal is not noticeable in this raw light curve. However, after applying the best-fit detrend-only model (i.e., the model given by Equation <ref> but without the first term), we find a significant trend centered at JD' ≡ JD - 2450000 = 7556 in the residuals, which is close to the expected peak of the microlensing event. Then we simultaneously model the detrending terms and the microlensing signal (i.e., the model given by Equation <ref>). We refer to this model as the "detrend+μlensing" model, and assign the uncertainties of individual data points such that the χ^2 per degree of freedom is unity. We re-scale the χ^2 of the detrend-only model with this normalized uncertainty, and find that the detrend+μlensing model fits the data better than the detrend-only model by Δχ^2 = 560. [We refer to the fit with N_max = 5.] As a demonstration of the final light curve product, we show in Figure <ref> the space-based and ground-based light curves, in which we preserve only the microlensing signal in the K2C9 raw data.

The best-fit parameters as well as the associated uncertainties are derived for different values of N_max. The results are shown in Figure <ref> in the (K_p-I) vs. π_E plane, along with the (K_p-I) value that is predicted from the color-color relation. In each case, we also report the photometric precision, which is defined as the ratio between the level of scatter (root-mean-square, or rms) in the residuals and the median flux.

As shown in Figure <ref>, the resulting constraints on π_E and K_p-I are fairly stable regardless of the choice of N_max. The derived (K_p-I) colors also agree reasonably well with the expected value. The total flux (target+background) is equivalent to a K_p = 15 point source, and our method can reach a photometric precision of 1.8%. For comparison, the microlensing source has K_p = 20.

This event also suggests that, for the source (K_p-I) color, the derived uncertainty (with no constraint applied) is usually larger than the uncertainty inferred from the color-color relation. Therefore, by imposing a prior on the source (K_p-I) color, we should be able to improve the final constraint on π_E, given the strong correlation between these two quantities. For example, if we use N_max = 5 and allow (K_p-I) to vary within the 3-σ range of the expected value, we are able to reduce the uncertainty on π_E by 40%. Although this does not seem particularly important for the current event, the color constraint can play a crucial role in constraining the parallax parameters for events with extremely faint baselines or events with partial light curve coverage <cit.>.

§.§ OGLE-2016-BLG-0940: A Bright Event with Noisy Background

Table: Best-fit parameters and associated uncertainties of the different fits for event OGLE-2016-BLG-0940.
For the last one, we require (K_p-I)_s = 1.84 ± 0.05 and allow for a 3-σ range.

Parameters       OGLE-only     OGLE+K2       OGLE+K2 (color constraint)
t_0 - 2450000    7556.855(13)  7556.858(14)  7556.854(13)
u_0              0.434(11)     0.431(6)      0.428(6)
t_E (days)       11.43(18)     11.48(11)     11.52(10)
π_E,E            --            -0.119(37)    -0.059(24)
π_E,N            --            0.106(24)     0.098(18)
(K_p-I)_s        --            2.14(14)      1.90(6)

Event OGLE-2016-BLG-0940 [<http://ogle.astrouw.edu.pl/ogle4/ews/2016/blg-0940.html>.] is relatively bright (I_s = 17.3) and has a timescale t_E = 11 days. Compared to the previous one, this event has a brighter and noisier background in K2C9 due to several bright neighboring stars. The event also suffers from more extinction, with R_I = 1.23 and A_I = 2.6 <cit.>, and thus the source star appears redder. With (V-I)_s = 3.14 ± 0.06, we expect the source to have (K_p-I)_s = 1.84 ± 0.05.

The raw K2C9 light curve of OGLE-2016-BLG-0940 is shown in Figure <ref>. Similar to the previous example, the (distorted) microlensing signal can only be marginally seen after the detrend-only model is applied. A simultaneous modeling of the systematics and the microlensing signal improves the fit by Δχ^2 = 250. The cleaned K2C9 light curve is shown in Figure <ref>, together with the OGLE-IV light curve.

We also derive the best-fit parameters and their associated uncertainties for different choices of N_max, and show the results as constraints in the (K_p-I) vs. π_E plane in Figure <ref>. Compared to event OGLE-2016-BLG-0980, the uncertainties we derive on K_p-I for OGLE-2016-BLG-0940 are larger, even though the current event has a brighter source. This is because of the stronger systematic effects in the present event. Nevertheless, models with N_max ≥ 3 give results that are consistent with each other, and the derived K_p-I colors are also consistent with the value predicted by the color-color relation at the 2-σ level, as seen in Figure <ref>. Once the color constraint is imposed, the uncertainties on π_E and the (K_p-I) color are reduced significantly, and the new measurements are consistent within 2-σ with the ones obtained without the color constraint. See Table <ref> for the best-fit parameters and associated uncertainties of the modelings with only the OGLE data, OGLE+K2C9, and OGLE+K2C9 together with (K_p-I) constrained by the color-color relation.

Because of the brighter aperture, the photometric precision we can achieve for this event, 0.4%, is better than what we obtain for OGLE-2016-BLG-0980, but the scatter in the residuals remains at a similar level (130 compared to 120 for OGLE-2016-BLG-0980, both in Kepler flux units).

§ DISCUSSION

In this work, we present a method that can be used to extract microlensing signals from the K2 Campaign 9 data for known events. Our method relies on the photometric reduction technique introduced in <cit.>, whose key component is the derivation of global astrometric solutions for individual K2 frames <cit.>. We then introduce the method to measure the microlensing parameters from the raw K2C9 light curve, which is to fit the systematic trend and the microlensing signal simultaneously. We also provide analyses of two known OGLE-IV microlensing events as applications of this method.

We derive the K_p-I vs. V-I color-color relation based on synthetic stellar spectra and the (K_p, V, I) transmission curves. With this color-color relation, we are able to predict the microlensing source flux in the K_p bandpass to ≲3%, based on the known source V-I color from ground-based observations.
We use the predicted K_p-I color to validate the results of the light curve modeling, and find reasonable agreement between the two for both example events. Furthermore, given that the K_p-I vs. V-I color-color relation is fairly precise and insensitive to the properties of the stars, and also that the K2 data are affected by strong systematics, we suggest that this relation should be applied to improve the light curve modeling whenever possible.

For an estimation of the signal-to-noise ratio (S/N) of the microlensing signal, we use

S/N = α (A_max - 1) F_s^kep / (σ ⟨F_tot^kep⟩) · √(t_E/T_int) ,

where A_max is the maximum magnification as seen by Kepler, T_int ≡ 30 min is the integration time of the K2 observations, and ⟨F_tot^kep⟩ is the averaged total flux within the K2 aperture. For single-lens events, we find the coefficient α ≈ 3.5 based on the two events in Section <ref>. In addition, we find that for the (microlensing and non-microlensing) targets we examined, the product σ⟨F_tot^kep⟩, which is the level of scatter in the cleaned data, is ∼150. This is equivalent to a single measurement with 100% uncertainty on a K_p = 19.6 star, or on a typical deblended microlensing source [I_s = 18 and (V-I)_s = 2.5]. Equation <ref> is useful for assessing the detectability of a given event, as well as for estimating the completeness of our method in detecting microlensing signals.

The photometric technique used in this work relies on a large number (∼1000) of bright point-like sources to derive the astrometric solutions, and thus is not directly applicable to targets outside the super stamp region. Therefore, additional techniques <cit.> are required in order to extract the raw light curves of the ∼70 K2C9 late targets <cit.> and the 33 microlensing events in K2 Campaign 11. Nevertheless, our method for interpreting the K2 light curves is applicable to microlensing events both inside and outside the super stamp region.

Although the events we analyzed in this paper do not show the finite-source effect, it is common for events that show planetary or binary anomalies <cit.>. The characteristic timescale of the finite-source effect is the source radius crossing time

t_⋆ = θ_⋆/μ_rel ≈ 60 min (θ_⋆/0.6 μas) (μ_rel/5 mas yr^-1)^-1 .

It is comparable to the integration time of the K2 long cadence observations, especially for events with lenses in the Galactic disk. This has to be taken into account when the finite-source effect is present.

We thank Andy Gould and Scott Gaudi for discussions. We also thank the anonymous referee for useful comments which helped to improve the manuscript. Work by W.Z. was supported by US NSF grant AST-1516842. R.P. acknowledges support from the K2 Guest Observer program under NASA grant NNX17AF72G. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. The OGLE project has received funding from the National Science Centre, Poland, grant MAESTRO 2014/14/A/ST9/00121 to AU.

[Alard & Lupton(1998)]AlardLupton:1998 Alard, C., & Lupton, R. H. 1998, , 503, 325[Bakos et al.(2004)]Bakos:2004 Bakos, G., Noyes, R. W., Kovács, G., et al. 2004, , 116, 266[Bessell(2005)]Bessell:2005 Bessell, M. S.
2005, , 43, 293[Blanton & Roweis(2007)]Blanton:2007 Blanton, M. R., & Roweis, S. 2007, , 133, 734[Borucki et al.(2010)]Borucki:2010 Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977[Bozza(2010)]Bozza:2010 Bozza, V. 2010, , 408, 2188[Brown et al.(2011)]Brown:2011 Brown, T. M., Latham, D. W., Everett, M. E., & Esquerdo, G. A. 2011, , 142, 112[Calchi Novati et al.(2015)]SCN:2015 Calchi Novati, S., Gould, A., Udalski, A., et al. 2015, , 804, 20[Chung et al.(2017)]Chung:2017 Chung, S.-J., Zhu, W., Udalski, A., et al. 2017, , 838, 154[Dong et al.(2006)]Dong:2006 Dong, S., DePoy, D. L., Gaudi, B. S., et al. 2006, , 642, 842[Dong et al.(2007)]Dong:2007 Dong, S., Udalski, A., Gould, A., et al. 2007, , 664, 862[Gould(1994b)]Gould:1994 Gould, A. 1994, , 421, L75[Gould(2000)]Gould:2000 Gould, A. 2000, , 542, 785[Gould(2004)]Gould:2004 Gould, A. 2004, , 606, 319[Gould & Horne(2013)]GouldHorne:2013 Gould, A., & Horne, K. 2013, , 779, L28[Han & Gould(1995)]HanGould:1995 Han, C., & Gould, A. 1995, , 447, 53[Henderson et al.(2016)]Henderson:2016 Henderson, C. B., Poleski, R., Penny, M., et al. 2016, , 128, 124401[Howell et al.(2014)]Howell:2014 Howell, S. B., Sobeck, C., Haas, M., et al. 2014, , 126, 398[Huang et al.(2015)]Huang:2015 Huang, C. X., Penev, K., Hartman, J. D., et al. 2015, , 454, 4159[Husser et al.(2013)]Husser:2013 Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, , 553, A6[Ivezić et al.(2014)]Ivezic:2014 Ivezić, Ž., Connelly, A. J., VanderPlas, J. T., & Gray, A. 2014, Statistics, Data Mining, and Machine Learning in Astronomy, by Z. Ivencić et al. Princeton, NJ: Princeton University Press, 2014, [Libralato et al.(2016)]Libralato:2016 Libralato, M., Bedin, L. R., Nardiello, D., & Piotto, G. 2016, , 456, 1137[Nataf et al.(2013)]Nataf:2013 Nataf, D. M., Gould, A., Fouqué, P., et al. 2013, , 769, 88[Pál(2012)]Pal:2012 Pál, A. 2012, , 421, 1825[Pál et al.(2016)]Pal:2016 Pál, A., Kiss, C., Müller, T. G., et al. 2016, , 151, 117[Penny et al.(2016)]Penny:2016 Penny, M. T., Rattenbury, N. J., Gaudi, B. S., & Kerins, E. 2016, arXiv:1605.01059[Poleski(2016)]Poleski:2016 Poleski, R. 2016, , 455, 3656[Poleski et al.(2016)]Poleski:2016b Poleski, R., Zhu, W., Christie, G. W., et al. 2016, , 823, 63[Refsdal(1966)]Refsdal:1966 Refsdal, S. 1966, , 134, 315[Shvartzvald et al.(2017)]Shvartzvald:2017 Shvartzvald, Y., Yee, J. C., Calchi Novati, S., et al. 2017, , 840, 3 [Soares-Furtado et al.(2017)]MSF:2017 Soares-Furtado, M., Hartman, J. D., Bakos, G. Á., et al. 2017, , 129, 044501[Sumi et al.(2011)]Sumi:2011 Sumi, T., Kamiya, K., Bennett, D. P., et al. 2011, , 473, 349[Suzuki et al.(2016)]Suzuki:2016 Suzuki, D., Bennett, D. P., Sumi, T., et al. 2016, , 833, 145[Szymański et al.(2011)]Szymanski:2011 Szymański, M. K., Udalski, A., Soszyński, I., et al. 2011, , 61, 83[Udalski(2003)]Udalski:2003 Udalski, A. 2003, , 53, 291[Udalski et al.(2008)]Udalski:2008 Udalski, A., Szymanski, M. K., Soszynski, I., & Poleski, R. 2008, , 58, 69[Udalski et al.(2015a)]Udalski:2015a Udalski, A., Yee, J. C., Gould, A., et al. 2015, , 799, 237[Udalski et al.(2015b)]Udalski:2015b Udalski, A., Szymański, M. K., & Szymański, G. 2015, , 65, 1 [Van Cleve & Caldwell (2009)]Handbook Van Cleve, J. E., & Caldewell, D. A., Kepler Instrument Handbook, KSCI-19033-002 [Vanderburg & Johnson(2014)]Vanderburg:2014 Vanderburg, A., & Johnson, J. A. 2014, , 126, 948[Witt & Mao(1994)]WittMao:1994 Witt, H. J., & Mao, S. 1994, , 430, 505[Yee et al.(2012)]Yee:2012 Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 
2012, , 755, 102[Yee et al.(2015a)]Yee:2015a Yee, J. C., Udalski, A., Calchi Novati, S., et al. 2015, , 802, 76[Yee et al.(2015b)]Yee:2015b Yee, J. C., Gould, A., Beichman, C., et al. 2015, , 810, 155[Yoo et al.(2004)]Yoo:2004 Yoo, J., DePoy, D. L., Gal-Yam, A., et al. 2004, , 603, 139[Zacharias et al.(2013)]Zacharias:2013 Zacharias, N., Finch, C. T., Girard, T. M., et al. 2013, , 145, 44[Zhu & Gould(2016)]ZhuGould:2016 Zhu, W., & Gould, A. 2016, Journal of Korean Astronomical Society, 49, 93[Zhu et al.(2014)]Zhu:2014 Zhu, W., Penny, M., Mao, S., Gould, A., & Gendron, R. 2014, , 788, 73[Zhu et al.(2015a)]Zhu:2015 Zhu, W., Udalski, A., Gould, A., et al. 2015, , 805, 8[Zhu et al.(2016)]Zhu:2016 Zhu, W., Calchi Novati, S., Gould, A., et al. 2016, , 825, 60[Zhu et al.(2017)]Zhu:2017 Zhu, W., Udalski, A., Calchi Novati, S., et al. 2017, arXiv:1701.05191
http://arxiv.org/abs/1704.08692v2
{ "authors": [ "Wei Zhu", "Chelsea Huang", "A. Udalski", "M. Soares-Furtado", "R. Poleski", "J. Skowron", "P. Mróz", "M. K. Szymański", "I. Soszyński", "P. Pietrukowicz", "S. Kozlowski", "K. Ulaczyk", "M. Pawlak" ], "categories": [ "astro-ph.IM", "astro-ph.EP" ], "primary_category": "astro-ph.IM", "published": "20170427180001", "title": "Extracting Microlensing Signals from K2 Campaign 9" }
XXVIth International Conference on Ultrarelativistic Nucleus-Nucleus Collisions (Quark Matter 2017)
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/

The NA61/SHINE fixed-target experiment at the CERN SPS studies the onset of deconfinement and searches for the critical point of strongly interacting matter by measuring hadron production as a function of the collision energy and the colliding system size. This contribution summarises recent results on hadron spectra and fluctuations, in particular new results on charged kaon production in ^7Be+^9Be collisions. An overview of the proposed future program of NA61/SHINE is also presented.

Keywords: critical point; onset of deconfinement; CERN SPS

§ TWO-DIMENSIONAL SCAN PROGRAM OF THE NA61/SHINE EXPERIMENT AT THE CERN SPS

NA61/SHINE scans the phase diagram of strongly interacting matter in baryon density and temperature. The programme is motivated by the evidence for the onset of deconfinement in Pb+Pb collisions at 30A GeV/c found by the NA49 experiment <cit.>. Measurements of hadron production in a two-dimensional scan in beam momentum (13A–150A/158 GeV/c) and system size (p+p, p+Pb, ^7Be+^9Be, Ar+Sc, Xe+La and Pb+Pb) are conducted in parallel to the RHIC beam energy scan. Figure <ref> shows the data taking progress. NA61/SHINE studies the onset of deconfinement via measurements of hadron spectra and searches for the critical point of strongly interacting matter by measuring event-by-event fluctuations.

The detector is based on a system of five Time Projection Chambers providing acceptance in the full forward hemisphere, down to p_T = 0. Time of Flight walls provide additional particle identification. A zero-degree calorimeter, the Projectile Spectator Detector, allows the selection of central collisions based on the measurement of the forward energy.

§ RECENT RESULTS FROM NA61/SHINE

§.§ Study of the onset of deconfinement

§.§.§ Negatively charged pion spectra

Negatively charged pion spectra in p+p <cit.>, central Be+Be <cit.> and central Ar+Sc collisions <cit.> were derived in a large acceptance from unidentified negatively charged hadron spectra using the h^- method. Figure <ref> (left) shows the transverse mass spectra at 40A GeV/c, compared with the NA49 results for central Pb+Pb collisions <cit.>. The spectra are approximately exponential; a deviation from the exponential function at low and high transverse mass in the heavier systems indicates collective radial flow.

The spectra are integrated to derive rapidity spectra, shown for central Ar+Sc collisions at six beam momenta in Fig. <ref> (right). The spectra are well described by a sum of two symmetrically displaced normal distributions with independent amplitudes. Multiplicities of pions of all charges ⟨π⟩ are calculated by integrating the rapidity spectra and using phenomenological dependences between the multiplicities of pions of the various charges <cit.>. They are shown in Fig. <ref>, compared with results from other experiments <cit.>. The values were divided by the average number of wounded nucleons ⟨W⟩. At the higher SPS energies the slope of the energy dependence is larger for the heavy systems (Pb+Pb, Ar+Sc) than for the light ones (p+p, Be+Be). The Statistical Model of the Early Stage (SMES) predicts an increase of the slope at the onset of deconfinement due to the larger number of degrees of freedom in the quark-gluon plasma <cit.>.

§.§.§ Charged hadron spectra

π^±, p and p̄ (in p+p interactions) and K^± (in p+p and central Be+Be collisions) were identified based on measurements of the energy loss (dE/dx) in the TPCs and the time of flight in the ToF detectors.
Figures <ref> and <ref> present the energy dependence of the inverse slope parameter of the transverse mass distributions of charged kaons and of the ratio of charged kaon to pion multiplicities at mid-rapidity, respectively. The results for p+p <cit.> and Be+Be interactions are compared with those from central Pb+Pb collisions measured by NA49 <cit.> and with other experiments <cit.>. For Pb+Pb collisions in the SPS energy range, the local plateau ("step") in the inverse slope parameter visible in Fig. <ref> and the peak ("horn") in the left panel of Fig. <ref> were predicted by the SMES as signatures of the onset of deconfinement. The results on p+p and Be+Be interactions greatly improve the quality of the available data on small systems. They also reveal rapid changes of the energy dependence in the SPS energy range, suggesting that some properties of hadron production previously attributed to the onset of deconfinement in heavy-ion collisions are present also in p+p interactions. Interestingly, while the inverse slope parameter in Be+Be collisions lies slightly above that in p+p interactions, the values of the charged kaon to pion ratio are very close in Be+Be and p+p.

§.§.§ Lambda spectra

NA61/SHINE measured Λ spectra in p+p interactions at 40 GeV/c <cit.> and 158 GeV/c <cit.>. Figure <ref> (left) presents the energy dependence of the ratio of total Λ to π multiplicities, compared with other results for p+p and heavy-ion collisions. For Pb+Pb collisions this energy dependence shows a maximum in the SPS energy range, which is absent in p+p reactions. A similar maximum is visible in the ratio of total K^+ to π^+ multiplicities, shown in Fig. <ref> (right). The observations for Pb+Pb collisions are consistent with the SMES prediction for the energy dependence of the strangeness to entropy ratio at the onset of deconfinement.

§.§ Search for the critical point

§.§.§ Event-by-event fluctuations

NA61/SHINE searches for the critical point by looking for non-monotonic dependences of event-by-event fluctuations of hadron production properties. Results on two fluctuation measures will be presented:

* the scaled variance of the multiplicity distribution, ω[N] ≡ Var(N)/⟨N⟩, an intensive variable, insensitive to the system volume (size), but sensitive to volume fluctuations;
* the Σ[P_T, N] measure of fluctuations of the transverse momentum and multiplicity, a strongly intensive variable, insensitive both to the system volume and to its fluctuations.

Comparison of fluctuation measurements between various experiments is challenging due to differences in acceptance, volume fluctuations and the choice of fluctuation measures. For this reason only NA61/SHINE results will be presented in this section.

§.§.§ Multiplicity fluctuations

Figure <ref> (left) shows the scaled variance ω[N] of the multiplicity distributions for p+p, central Ar+Sc, and Pb+Pb collisions at 150A/158 GeV/c, calculated in the NA49 acceptance <cit.>. Only the 0.2% most central Ar+Sc collisions were used in the analysis in order to eliminate volume fluctuations. The results contradict the Wounded Nucleon Model prediction that the values for heavy systems should be greater than or equal to those for p+p.

Figure <ref> (right) shows the energy dependence of the scaled variance in p+p and central Ar+Sc collisions calculated in the NA61/SHINE acceptance. The Ar+Sc points lie systematically below the p+p ones and no indication of non-monotonic behaviour is visible.
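For reference, the quantities discussed in this section follow directly from per-event observables. The sketch below implements the standard definitions of the scaled variance, ω[N] = Var(N)/⟨N⟩, and of the skewness and (excess) kurtosis used for the net-charge distribution, on a toy sample of events; it is only an illustration of the definitions, not the experiment's analysis code.

```python
import numpy as np

def scaled_variance(n):
    """omega[N] = Var(N)/<N> for a sample of per-event multiplicities."""
    n = np.asarray(n, dtype=float)
    return n.var() / n.mean()

def skew_kurt(q):
    """Skewness S = mu3/sigma^3 and excess kurtosis kappa = mu4/sigma^4 - 3
    of a sample of per-event net charges q."""
    q = np.asarray(q, dtype=float)
    mu, sig = q.mean(), q.std()
    skew = np.mean((q - mu) ** 3) / sig ** 3
    kurt = np.mean((q - mu) ** 4) / sig ** 4 - 3.0
    return skew, kurt
```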
§.§.§ Transverse momentum and multiplicity fluctuations

Figure <ref> presents the energy and system size dependence of the Σ[P_T, N] measure of transverse momentum and multiplicity fluctuations <cit.>. The results show no beam momentum dependence. The 5% increase from p+p to Ar+Sc is consistent with the estimated magnitude of the systematic uncertainty. The Independent Particle Production Model predicts Σ[P_T, N] = 1, which is consistent with the presented data <cit.>.

§.§.§ Higher order moments of the net-charge distribution in p+p collisions

Higher order moments of multiplicity distributions might be particularly sensitive to the effects of a critical point. Figure <ref> shows the energy dependence of the skewness and kurtosis of the net-charge distribution in p+p interactions <cit.>. No non-monotonic dependence is observed, but these results establish a reference for future measurements in collisions of heavier systems.

§ NA61/SHINE NOW AND IN THE FUTURE

§.§ NA61/SHINE in 2017–2018

The two-dimensional system size and beam momentum scan will be completed with measurements of p+Pb, Xe+La and Pb+Pb collisions in 2017 and 2018. NA61/SHINE, with the new small-acceptance Vertex Detector, will perform pilot open charm production measurements. Moreover, precise measurements of fluctuations and collective effects in Pb+Pb collisions <cit.> will be carried out.

§.§ NA61/SHINE in 2021–2024

A detector upgrade of NA61/SHINE is planned during the Long Shutdown 2 at CERN in the years 2019–2020: the readout speed will be increased to 1 kHz and a Large Acceptance Vertex Detector will be constructed. The upgraded detector will allow a high statistics beam momentum scan with Pb+Pb collisions for precise measurements of open charm and multi-strange hyperon production in 2021–2024. This program will complement future measurements at NICA, FAIR and J-PARC. NA61/SHINE also conducts extensive and precise particle production measurements for the neutrino physics program, which are planned to be continued after 2020.

§ SUMMARY

This contribution discusses recent results from the NA61/SHINE energy and system size scan performed to study the onset of deconfinement and to search for the critical point. Results on particle spectra and fluctuations were presented. New charged kaon spectra in Be+Be collisions at 30A–150A GeV/c were shown. Surprisingly, p+p reactions also show rapid changes in particle production properties, partly resembling those seen in Pb+Pb collisions. Results from Be+Be collisions are close to those from p+p reactions: the charged kaon to pion ratios are consistent and the inverse slope parameters of the transverse mass distributions are marginally higher. Present results on fluctuations show no indication of non-monotonic dependence and thus no indication of a critical point. Still, such features may be revealed in the future results on Xe+La and Pb+Pb collisions. The presently ongoing two-dimensional scan will be completed in 2018. As an extension of this program, NA61/SHINE plans to measure precisely open charm and multi-strange hyperon production in 2021–2024.

§ ACKNOWLEDGMENTS

This work was partially supported by the National Science Centre, Poland, grant 2015/18/M/ST2/00125.

§ REFERENCES
http://arxiv.org/abs/1704.08071v3
{ "authors": [ "Antoni Aduszkiewicz" ], "categories": [ "hep-ex", "nucl-ex" ], "primary_category": "hep-ex", "published": "20170426120817", "title": "Recent results from NA61/SHINE" }
Many focal-reducer spectrographs currently available at state-of-the-art telescope facilities would benefit from a simple refurbishment that could increase both the resolution and the spectral range, in order to cope with progressively more challenging scientific requirements; however, in order to make this update appealing, it should minimize the changes to the existing structure of the instrument. In the past, many authors proposed solutions based on stacking successive layers of dispersive elements to record multiple spectra in one shot (multiplexing). Although this idea is promising, it brings several drawbacks and complexities that prevent the straightforward integration of such a device in a spectrograph. Fortunately, the situation has now changed dramatically thanks to the successful experience achieved with photopolymeric holographic films, used to fabricate common Volume Phase Holographic Gratings (VPHGs). Thanks to the various advantages made available by these materials in this context, we propose an innovative solution to design stacked multiplexed VPHGs that are able to efficiently secure different spectra in a single shot. This allows an increase in resolution and spectral range, enabling astronomers to greatly economize their awarded time at the telescope. In this paper, we demonstrate the applicability of our solution, both in terms of expected performance and feasibility, supposing the upgrade of the Gran Telescopio CANARIAS (GTC) Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS).

Keywords: instrumentation: spectrographs - techniques: spectroscopic - telescopes - methods: observational

§ INTRODUCTION

The current state-of-the-art spectroscopic facilities can be divided into two main groups depending on the resolution. The first one is characterized by a low resolution (R < 2000), particularly suitable for multi-object spectroscopy or an Integral Field Unit (IFU). Among them we can find examples like ESO VIMOS <cit.>, FORS1-2 <cit.>, MOSFIRE <cit.>, the FOSC at ESO-NTT <cit.> or OSIRIS at GTC <cit.>. The second one is characterized by a high resolution (R >> 4000), which is guaranteed through diffractive elements like echelle or echellette gratings. Among this group, successful examples are HIRES at Keck <cit.>, ESO UVES <cit.>, CRIRES <cit.> or the more recent ESO X-SHOOTER <cit.>. The resolution plays a key role in the era of 10 m class telescopes, since the only way to increase the sensitivity of a spectrum, in terms of detectable features and accuracy, is to increase it as much as possible while maintaining a good signal-to-noise ratio (SNR) over a wide spectral range. Current focal reducer spectrographs, like GTC-OSIRIS, already provide diffraction gratings that allow securing spectra with R ≥ 2000 but, unfortunately, their spectral range is very narrow. Therefore, to obtain a spectrum from 4000 to 10000 Å, one would need to observe the same source multiple times, thus wasting an enormous amount of awarded time. On the other hand, by operating at alternative facilities based on echelle gratings (e.g. ESO-XSHOOTER or Keck-ESI), a scientist could secure spectra with wide wavelength coverage at high resolution (R > 4000).
Nevertheless, spectrographs like GTC-OSIRIS or EFOSC are much simpler, widely available at a large number of optical telescopes, and have a flexible design that includes imaging capabilities. Therefore, an improvement of these facilities that allows filling the gap between the two layouts is really attractive.

In most astrophysical topics, increasing the resolving power of the secured spectra brings many scientific advantages. For example, the ESO-XSHOOTER spectrograph produced a vast number of spectra of QSOs at R > 5000, allowing a better understanding of the physical state and chemical composition of the IGM <cit.>. Another interesting topic tackled with the same instrument is the determination of the redshift of BL Lac objects. These sources are active nuclei of massive elliptical galaxies whose emission is dominated by a strong non-thermal continuum <cit.> that prevents the determination of their distance, which is, however, mandatory for constraining models of their emission or to better understand the absorption of hard γ-rays by the Extragalactic Background Light (EBL). For example, <cit.> demonstrated that this problem could be mitigated by securing spectra with high SNR and increased resolution to detect fainter spectral features, necessary to estimate the redshift (and thus the distance) of the source.

As already pointed out, it is possible to collect spectra of astrophysical sources at high resolution (R > 4000) with focal reducer instruments, but the price to pay is a limited spectral range. A clever solution could be the substitution of the single high-R element, whose range is limited, with a new dispersive device capable of increasing the spectral coverage by simultaneously recording multiple high-R spectra in different ranges (multiplexing). Being able to deliver such a dispersive element would greatly improve the expected throughput of already commissioned spectrographs. In this paper, we focus on GTC-OSIRIS as a candidate and demonstrator for housing our innovative device. In particular, we report on two different cases. The first one allows securing two spectra in a single shot, increasing the resolution by a factor of ∼2, while the second challenges instruments like X-SHOOTER by combining three high resolution spectral regions simultaneously. This work is organized as follows: in Section <ref> we report on the theoretical background and the design principle of these multiplexed devices. In Section <ref> we discuss the expected performances on the sky through simulations, while in Section <ref> we give our conclusions.

§ GRATING MULTIPLEXING – THEORY AND APPLICABILITY

As highlighted in the introduction, being able to simultaneously record spectra of multiple wavelength ranges or, alternatively, to have a high resolution element that covers a very wide spectral range, brings a huge advantage to the astronomical community. In particular, depending on the optical layout, a spectrograph would benefit from the possibility to increase the resolution or the spectral range while maintaining the same exposure times.
Otherwise, a typical focal reducer imager, like FORS or OSIRIS, would benefit from the combination of very low dispersion gratings to acquire multiple snapshots of the same field in different bands simultaneously, as depicted in the cartoon of Figure <ref>. In the present study we have tried to answer these needs, testing the feasibility of a new type of dispersive element that can result in a huge technological boost both for those instruments that are becoming obsolete and for the new ones that are yet to be built.

The general idea is to place multiple gratings (multiplexed), stacked one after the other, in such a way that they simultaneously produce spectra of different wavelength regions. The basic concept of a transmissive element is sketched in Figure <ref>. Each spectrum on the instrument's detector is designed to cover a specific wavelength range, according to the scientific case that has to be studied. Consequently, the design phase is a crucial part of the definition of the characteristics of the multiplexed dispersive element. Moreover, strategies to separate the spectra, avoiding their overlap, should be considered. In this particular configuration, since the grating layers are superimposed, the key idea is to apply a small rotation about the optical axis (ε in Figure <ref>) between the layers, in order to separate (along the y direction) the different spectra appearing on the detector. By securing multiple spectra with one exposure, the analyzed spectral range is extended (maintaining the same resolution R) or the resolution of the system in the same spectral range is significantly increased. This system is therefore suitable for upgrading an already built instrument, giving a great enhancement through a simple replacement of the dispersive element, which preserves all the existing capabilities (e.g. the imaging in a FOSC).

A GRISM is a combination of a prism and a grating arranged so that light at a chosen central wavelength passes straight through. The advantage of this arrangement is that the same camera can be used both for imaging (without the grism) and spectroscopy (with the grism) without having to be moved. Grisms are inserted into a camera beam that is already collimated. They then create a dispersed spectrum centered on the object's location in the camera's field of view.

The type of dispersive element that we have considered in this study is the transmissive Volume Phase Holographic Grating (VPHG) <cit.>. It consists of a periodic modulation of the refractive index (Δn) in a thin layer of a photosensitive material. These elements represent today the most widely used dispersive elements in astronomy, and the ones whose performances are most difficult to surpass in both low and medium resolution spectrographs <cit.>. Since many different VPHGs are usually integrated inside astronomical spectrographs and each of them is a custom designed grating, each astronomical observation can take advantage of specific dispersive elements with features tailored for achieving the best performances.
Regarding the holographic materials, up to now Dichromated Gelatin (DCG) has been considered the reference material thanks to the very large modulation of the refractive index that can be stored <cit.>, which translates into a relatively large bandwidth in high dispersion gratings. Unfortunately, this material requires a complex chemical developing process, making large scale and large size production difficult. Moreover, the material is sensitive to humidity; therefore, it is necessary to cover the grating with a second substrate, burdening the control of the wavefront error. The availability of holographic materials with similar performances, but with self-developing properties, is desirable, because they do not require any chemical post-processing and, moreover, the Δn formation can be monitored and set during the writing step. Photopolymers are a promising class of holographic materials and today they are probably the best alternative to DCG, thanks to their improved features in terms of refractive index modulation, thickness control and dimensional stability <cit.>. Many studies have been carried out to understand in depth the behavior of this class of materials. Moreover, the formation of the refractive index modulation has been studied recently <cit.>, through the development of models that predict the trends as a function of the material properties and writing conditions <cit.>. We have already demonstrated in other papers the use of photopolymers for making astronomical VPHGs with performances comparable to those provided by VPHGs based on DCG and with good aging properties <cit.>, but with a much simpler production process <cit.>. Therefore, we think that the big advantages of this novel holographic material could be the key to realizing the multiplexed dispersive element.

The newly developed photopolymer film technology (Bayfol HX film) <cit.> evolved from efforts in holographic data storage (HDS) <cit.>, where any form of post-processing is unacceptable. These new instant developing recording media open up new opportunities to create diffractive optics and have proven able to record predictable and reproducible optical properties <cit.>. Depending on the application requirements, the photopolymer layer can be designed towards, e.g., (high or low) index modulation, transparency, wavelength sensitivity (monochromatic or RGB) and thickness, to match the grating's wavelength and/or angular selectivity. Since the material consists of the holographic layer coupled with a polymeric substrate, with a total thickness of ca. 60–150 μm, the films can be laminated or deposited one on top of the other after having been recorded, making the realization of the stack straightforward. Clearly, another possibility is to holographically record multiple gratings inside the same layer but, as described later, in order to optimize the efficiency curves very different thicknesses are usually required for each grating; therefore this strategy would not give us the advantage of tuning the response curves in the design process.

§.§ Working principle

As stated at the beginning of this section, the design concept consists in placing a set of transmission VPHGs stacked one after the other (multiplexed) (see Figure <ref>).
As highlighted in the figures, this device will form one single optical element whose dimensions are comparable to the standard VPHGs already available in the target instrumentation. Some attempts have been made to explore this idea <cit.> but, although steps have been made in the right direction, the proposed solutions are limited by the necessity of a newly designed spectrograph, and do not take into consideration the crucial efficiency optimization that, without proper design, makes the device ineffective. Hence, to preserve integration simplicity, one has to combine materials, design strategies and required performances, in order to produce multiplexed dispersive elements that can be easily integrated in an available instrument. This gives astronomers the possibility to enhance the resolution (and spectral coverage) by simply replacing the disperser already installed in the optical path.

Regarding the material, thanks to the crucial capability to finely tune the refractive index modulation Δn <cit.> and to the slenderness of the film containing the grating, Bayfol HX photopolymers by Covestro gave us the possibility to design the multiplexing element so as to:

* realize a compact and thin device that can be integrated as a replacement in many already existing instruments <cit.>;
* tune the efficiency of each stacked grating in such a precise way that they do not interfere with each other, obviating all the problems related to the realization of these devices;
* match the design requirements and obtain high efficiency;
* stack multiple layers of gratings in one single device for the simultaneous acquisition of multiple spectra over a broad wavelength band.

In the multiplexed device, each layer will generate a portion of the spectrum, and all the portions together will compose the total dispersed range required. Such pieces, on the detector, will be arranged one on top of the other, resulting in a total spectral range that is far wider than the one obtainable using a single grating with a comparable dispersion.

§.§ Issues in detail: the geometrical effect

Although the stack of successive diffraction elements brings many advantages, some constraints and critical points arise and should be discussed. The first one is purely a geometrical effect and is related to the propagation of the incoming beam through multiple dispersing elements that must not interfere with each other. For this reason, it has to be taken into account that the incoming beam is diffracted multiple times (since it encounters two or more dispersive elements) according to the grating equation

m λ Λ = sin α + sin β ,

with m the number of the diffracted order, Λ the line density of the VPHG, and α, β the incidence and diffraction angles, respectively. Fixing λ, different combinations of diffraction orders can occur, resulting in light diffracted in different directions. In particular, let us consider for simplicity an example of two stacked multiplexed elements, as shown in Figure <ref>. Each grating is optimized for efficiently dispersing a specific wavelength range (labelled B for blue and R for red). Let us suppose that the two possess the same line density. If a red monochromatic wavelength λ_R (case i) enters the multiplexed device, it will first be diffracted by the R grating into all the possible orders, which will eventually enter the second grating.
Each of these beams is in turn recursively diffracted by the B grating, but only a few of them possess the correct direction for further propagation to the detector (total internal reflection, TIR, can occur). Otherwise, when a blue monochromatic beam λ_B is considered (case ii, for an equal incidence angle), each diffracted order will possess a smaller diffraction angle β (with respect to the previous case) due to the shorter wavelength; therefore, it is possible that some overlap between blue and red orders occurs, since more diffraction orders have a direction that can potentially enter the detector.

§.§ Issues in detail: the efficiency depletion due to subsequent gratings

In the design of VPHGs for an astronomical spectrograph, after having satisfied the dispersion and resolution requirements, which fix parameters like the line density (Λ) of the gratings and the incidence and diffraction angles (α and β), the optimization of the diffraction efficiency (both peak efficiency and bandwidth) is necessary. To perform this task, the main parameters to be considered are:

* the refractive index modulation Δn;
* the active film thickness d;
* the slanting angle ϕ (i.e. the angle between the normal of the grating surface and the normal of the refractive index modulation plane).

Considering a sinusoidal refractive index modulation and working in the Bragg regime (the light is sent only into one diffraction order other than the zeroth), the well-known Kogelnik model can be used to compute the grating efficiency <cit.>. For small angles, a large diffraction efficiency is achieved when the product Δn · d is equal to half of the wavelength, and this is the starting point of the optimization process. As already stressed, during the VPHG design not only the peak efficiency is important, but also the efficiency at the edges of the spectral range. According to the Kogelnik model, the spectral bandwidth (Δλ) of the diffraction efficiency curve is proportional to <cit.>:

Δλ/λ ∝ α/(Λ d) .

In this equation, α is the incidence angle and Λ is the line density of the grating; it is evident that the bandwidth is inversely proportional to Λ and to the thickness of the grating d. Hence, the optimization of the diffraction efficiency curves, acting on Δn and d, produces large differences in the grating response. If a grating works in the Bragg regime, the largest peak efficiency and bandwidth are obtained for very thin films and large Δn. Undoubtedly, the upper value of Δn is determined by the performances of the holographic material. If the VPHG works in the Raman-Nath regime <cit.>, it diffracts the light with non-negligible efficiency into more than one diffraction order; this is the case for low dispersion gratings, and it should be considered in order to avoid the second order contamination explained further below (see Section <ref>). For such gratings, the light diffracted into higher orders is proportional to Δn^2, so it is better to increase the film thickness and reduce the Δn in order to achieve a large peak efficiency. The availability of a holographic material that provides the ability to precisely tune the Δn <cit.> is therefore crucial for the design of multiplexed elements, in order to be able to adjust the efficiency response of each dispersive layer. In the multiplexing context, it is important to evaluate how a grating with a certain efficiency affects the response of the following one.
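Before turning to the stacked case, a minimal illustration of how Δn and d shape the single-grating response can be obtained from Kogelnik's two-wave result for an unslanted transmission grating at the Bragg condition, η = sin^2[π Δn d / (λ cos θ_B)]. The sketch below neglects detuning and slant, so it only gives the peak of the efficiency curve, and the average index n_avg = 1.5 is an assumed, illustrative value.

```python
import numpy as np

def kogelnik_peak_efficiency(wavelength, delta_n, d, line_density, m=1, n_avg=1.5):
    """On-Bragg first-order efficiency of an unslanted transmission VPHG
    (Kogelnik two-wave approximation, detuning neglected). wavelength and
    d in the same units; line_density in lines per unit length. Assumes
    the Bragg condition can be met (sin_theta <= 1)."""
    # internal Bragg angle from 2 n_avg sin(theta) = m lambda Lambda
    sin_theta = m * wavelength * line_density / (2.0 * n_avg)
    cos_theta = np.sqrt(1.0 - sin_theta ** 2)
    nu = np.pi * delta_n * d / (wavelength * cos_theta)
    return np.sin(nu) ** 2
```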
In order to give a feeling of the complexity of the problem, let us reconsider the two-multiplexing device in Figure <ref>. The total multiplexed efficiency on the detector will not merely be the sum of the single layer efficiencies η_B,1st(λ, α_0) and η_R,1st(λ, α_0), with λ the wavelength and α_0 the initial incidence angle. The subscript "1st" indicates the diffraction order to which the efficiency η refers, meaning that the system is aligned to work at the 1st order. The spectrum generated by the R grating, before reaching the detector, has to pass through the B grating, and this will eventually diminish its intensity. To complicate things further, each wavelength of the R spectrum possesses a different diffraction angle β_R (which becomes the incidence angle for the B grating), and therefore the response of the second grating varies, according to the grating equation (eq. <ref>). The resulting R efficiency η*_R,1st(λ, α_0) will then be:

η*_R,1st(λ, α_0) = η_B,0th(λ, β_R) · η_R,1st(λ, α_0) .

Moreover, the light that enters the second layer has already been processed by the previous gratings; therefore, its final efficiency will be the product of the leftover intensity times the efficiency of the B grating:

η*_B,1st(λ, α_0) = η_R,0th(λ, α_0) · η_B,1st(λ, α_0) .

Practically, the goal is to obtain gratings with negligible overlapping efficiencies.

§.§ Issues in detail: spectral range and second order contamination

A critical point in spectroscopy is the contamination of the recorded spectra, usually obtained through the first diffraction order, by light coming from other diffraction orders, usually the second. Since the signals of the different orders are overlapped, there is no possibility to remove the unwanted light a posteriori with data reduction. Therefore, dispersive elements with a spectral range wider than [λ, 2λ] will inevitably suffer from this problem. This issue is well known in astronomy and it is usually avoided by placing order-sorting filters coupled with the dispersing element or in a filter wheel in the optical path of the instrument. The filters serve to block the light at lower wavelengths that could overlap with the acquired spectrum. Another approach is to reduce as much as possible the efficiency of the unwanted orders. Although in VPHGs it is possible to mitigate this effect by varying parameters, such as the thickness and Δn, in order to finely tune the efficiency curve, we decided to limit the wavelength range of the multiplexed device, adopting a spectral band where no contamination occurs. However, we will show another strategy that deals with the second orders in the forthcoming "Part II" of this work.

In Figure <ref> we report the effects of second order contamination for a multiplexed device designed to work in an extended wavelength range [4200–10000 Å]. We have simulated two different spectra: the first one contaminated with photons coming from the second order, and the other one considering only the contributions from the first order. We also assumed a SNR of about 100. The ratio between the two signals (magenta line) is a rough estimate of how much unwanted light appears as a bump in the collected data. In fact, according to this figure, the ratio between the two is within the noise level (solid black lines) up to λ ≃ 9000 Å, indicating that the two spectra are indistinguishable. Beyond this wavelength, the ratio is higher than the noise level, meaning that photons from the second order are superimposed on the spectrum.
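The contamination test of Figure <ref> can be reproduced schematically: since order m at wavelength λ satisfies m λ Λ = const for a given output angle, second-order photons of wavelength λ/2 land at the detector position of first-order wavelength λ. The sketch below assumes tabulated first- and second-order efficiency curves and an input spectrum on a common, ascending wavelength grid; the interpolation details are illustrative.

```python
import numpy as np

def contaminated_spectrum(lam, f_src, eta_1st, eta_2nd):
    """First-order spectrum plus second-order contamination.
    lam [A], ascending; f_src, eta_1st, eta_2nd sampled on lam.
    Photons of wavelength lam/2 diffracted in second order overlap
    the first-order signal at wavelength lam."""
    clean = f_src * eta_1st
    f_half = np.interp(lam / 2.0, lam, f_src, left=0.0, right=0.0)
    e2_half = np.interp(lam / 2.0, lam, eta_2nd, left=0.0, right=0.0)
    contaminated = clean + f_half * e2_half
    return clean, contaminated   # their ratio traces the bump in the figure
```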
Beyond this wavelength the ratio is higher than the noise level, meaning that photons from the second order are superimposed on the spectrum.

§ DESIGN CONCEPTS AND EXPECTED PERFORMANCES

§.§ Theoretical framework

In order to understand the spectral behavior of a multiplexing dispersive element, we chose to study the feasibility of this system considering an astronomical instrument that could take advantage of the multiplexing technology. The resolving power of a spectrograph, R (or simply resolution), is: R = m Λλ W/χ D_T where m is the diffraction order, W is the length of the area on the grating illuminated by the collimated beam, χ is the angular slit width (projected on the sky) and D_T is the diameter of the telescope. For a correct interpretation of the results, it has to be kept in mind that the optical layout of the spectrograph (such as the ratio between the telescope diameter and the collimated beam diameter in the spectrograph) enters these estimates, so this relation should be used as a rule of thumb to quantify the advantage of this approach. In this paper we present the case of the focal reducer OSIRIS, installed at the 10 m telescope Gran Telescopio Canarias (GTC), as a candidate facility for the on-sky commissioning of the multiplexed device. We have chosen to explore two different case studies, changing the number of elements in the multiplexed device: a two-stacked multiplexed device with an approximate resolution of R ∼ 2000, and a three-stacked multiplexed device with a resolution of about R ∼ 5000. The first one is intended to compete with observations carried out at the same facility for the determination of a redshift lower limit of the TeV γ-ray BL Lacertae (BL Lac) object S4 0954+65 (see <cit.> and the next sections), while the second one is intended to be compared with state-of-the-art medium-high resolution spectrographs such as ESO-XSHOOTER <cit.>. In this section, we demonstrate the design and applicability of the two cases. For each of them, we present the analysis performed to determine the efficiency behavior, taking into account the dispersion and spectral range that each VPHG should provide in relation to the instrument specifications. This activity is carried out both through optical ray-tracing and through Rigorous Coupled Wave Analysis (RCWA) simulations <cit.>. The outputs of this calculation are the most suitable efficiency curves for each stack, which guarantee the highest overall diffraction efficiency, and are computed by varying the key parameters described in Section <ref>. After the grating design, the subsequent step is to assess, through simulations, the expected on-sky performances of each device. Thus, we build up synthetic simulated spectra of the targets (starting from a power-law model, as in the case of the BL Lac, or from a template spectrum, as in the case of the QSO) according to the expected signal-to-noise ratio (SNR) in each pixel, defined as: S/N = N_*/√(N_* + N_sky + n_pix· RON^2) where N_* is the number of expected counts from the target source, evaluated as: N_* = f(λ) ·∏_i=1^n η_i(λ) · A · t_exp where f(λ) is the input spectrum in ph sec^-1 cm^-2 Å^-1, A is the collecting area of the telescope, the η_i are the efficiencies of atmosphere transmission, telescope, spectrograph, multiplexed device and CCD (we also consider a slit efficiency of about ∼ 0.80) and t_exp is the total integration time. The quantity N_sky is evaluated in the same way, considering a flat spectrum normalized in the V or R band to a flux that corresponds to ∼ 21 mag arcsec^-2, which is a typical value for the La Palma sky brightness. The read-out noise (RON) of the detector is assumed equal to 7 e-/pix.
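The two expressions above can be sketched directly in code; the source flux, the total throughput and the ∼7.3×10^5 cm^2 effective collecting area of the GTC assumed below are rough, illustrative numbers, not the values used in the actual simulations.

```python
import numpy as np

def expected_counts(f_lambda, efficiencies, area_cm2, t_exp, dispersion=1.0):
    """N_* of the equation above: flux [ph s^-1 cm^-2 A^-1] times the product
    of all throughput terms, the collecting area, the exposure time and the
    dispersion [A/pix], giving counts per pixel."""
    return f_lambda * np.prod(efficiencies, axis=0) * area_cm2 * t_exp * dispersion

def snr(n_star, n_sky, n_pix, ron):
    """S/N per pixel, with RON in electrons per pixel."""
    return n_star / np.sqrt(n_star + n_sky + n_pix * ron ** 2)

A = 7.3e5                                          # GTC area [cm^2], approximate
n_star = expected_counts(1e-3, [0.25], A, 450.0)   # illustrative flux/throughput
n_sky = 0.05 * n_star                              # assumed sky level
print("S/N per pixel ~ %.0f" % snr(n_star, n_sky, n_pix=4, ron=7.0))
```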
For each simulation, we consider a seeing of ∼ 0.80^'' with a slit width of ∼ 0.60^'', to be comparable with the currently available instrumentation specifications and performances <cit.>. The plate scale of the system is assumed to be ∼ 0.30 arcsec/pix, while the efficiencies of the telescope and of the optics of the spectrograph are taken from the literature <cit.>.

§.§ Two-multiplexed grating case

The first case that we took into consideration is a two-stacked multiplexed device, for OSIRIS in GRISM mode, that can cover in one single exposure a spectral range from 4000 to 8000 Å with a resolution of approximately 2000. In order to achieve this, the dispersive element splits the wavelength range into two parts, which are imaged on the detector one on top of the other. The inter-spectra separation depends on the relative tilt of the two gratings in the diffraction element. In this particular scenario, the minimum distance between the two is approximately 2' (projected angle on the sky), merely because we arbitrarily chose a tilt value of 2.5 (see Figure <ref>). The two dispersive elements share the same prisms and thus the same incidence angle. In Table <ref> the specifications of the gratings that have been designed are reported, while in Figure <ref> we present the calculated efficiency curves of the layers that compose the device. With respect to this last figure, a long-pass filter at 4000 Å is installed in the device in order to avoid contamination from the second order. Moreover, the VPHG which disperses the light in the range 5500 - 8000 Å has been designed to suppress as much as possible the contribution from the second order, which remains outside the spectral range. As highlighted in the previous sections, an important effect that has to be taken into account is that the diffracted intensity will be dimmed as the light gradually passes through the VPHG layers; in this configuration, thanks to the careful design process, this effect is minimized. Indeed, for each grating layer, specific values of Δn, d and slanting angle ϕ were chosen in order to ensure the compatibility between the efficiency curves. Under the hypothesis that the sequence is first the RED grating and second the BLUE grating, the wavelengths that are diffracted by the RED grating (dotted green in Figure <ref>) are then transmitted through the BLUE layer, with the resulting efficiency plotted in solid green. On the other hand, the wavelengths that have to be diffracted by the BLUE grating must first pass through the RED layer, with a resulting efficiency that is plotted in solid blue. After accounting for all of these effects, the obtained efficiency curve for the multiplexed dispersive element is reported in Figure <ref>. The bump in the central region is due to light that is diffracted by both gratings and falls on the detector in different places. Finally, we note that the efficiencies presented in the simulations do not take into consideration the material absorption or the reflection losses that could arise due to the presence of interfaces inside the device. Nevertheless, we expect these effects to be negligible at this level, of the order of a few percentage points.

§.§.§ The case of S4 0954+65

S4 0954+65 is a bright BL Lac object, identified for the first time by <cit.>, which exhibits all the properties of its class.
In particular, the source presents a strong variability in the optical, with R apparent magnitudes usually ranging between 15 and 17 <cit.>, linear polarisation <cit.> and a radio map that shows a complex jet-like structure. This BL Lac has recently attracted attention since it was detected with the Cherenkov telescope MAGIC at a 5-σ significance <cit.>. The determination of the redshift of BL Lac objects (in particular for TeV sources) is mandatory to assess their cosmological role and evolution, which appears controversial due to redshift incompleteness <cit.>, and to properly understand their radiation mechanism and energetics (see e.g. <cit.> and references therein). When BL Lacs are detected in the TeV regime, the knowledge of their distance is essential, since they can be exploited as probes of the Extragalactic Background Light (EBL, see e.g. <cit.>, <cit.>), allowing us to understand how extremely high energy photons propagate from the source to the Earth and interact with the EBL through γ-γ absorption. Unfortunately, the determination of the redshift of BL Lacs has proven to be difficult (see e.g. <cit.>, <cit.>, <cit.>), since their very faint spectral features are strongly diluted by the non-thermal emission (see the review of <cit.>). In the era of 10 m class telescopes (like the GTC), the research in this field has reached the so-called "photon starvation regime", since the only way to significantly increase the SNR is the adoption of extremely large aperture telescopes (like the ELT) <cit.>. On the other hand, one can greatly increase the resolution of the secured spectra while maintaining a high SNR, thereby decreasing the minimum Equivalent Width (EW_min) and allowing the measurement of fainter spectral features (see e.g. <cit.>, <cit.>). In particular, S4 0954+65 was observed by <cit.> after its outburst on the night of February 28th, 2015. The object was observed with two grisms (R1000B and R1000R) in order to ensure a spectral coverage from 4200 to ∼ 10000 Å, adopting a slit of 1.00^'' with a resolution of R ∼ 800. For each grism, the total integration time was 450 s, which corresponds to about 0.5 hrs of telescope allocation time (including overheads). The collected data allowed the authors to disprove previous redshift claims of z = 0.367 <cit.> and to infer a lower limit to the distance of z ≥ 0.45, thanks to EW_min∼ 0.15 Å and SNR > 100. In order to further increase the lower limit to the redshift or, even better, to detect the faint spectral features arising from the host galaxy that harbors this BL Lac, the only straightforward solution with the current state-of-the-art instrumentation is to drastically increase the resolution of the collected spectrum. Considering the case of GTC and OSIRIS, the only available opportunity is to observe the target with the R2500 GRISMs. Unfortunately, these gratings possess a very narrow spectral range, so in order to ensure a wavelength coverage similar to that required (4000 - 8000 Å), one must collect four different spectra. This results in a telescope allocation time of about 2 hrs (including overheads). By adopting the two-VPHG multiplexed device, the observer is able to collect simultaneously two spectra covering the whole spectral range from 4000 to 8000 Å with a resolution of approximately 2000.
The simulated spectrum obtained with this device is reported in Figure <ref>, along with a comparison with the observed R1000B+R one. We also report the distribution of the minimum detectable Equivalent Width, estimated following the recipes detailed in <cit.> (histogram in the bottom right corner of Figure <ref>). The detectable EW_min on the spectrum simulated assuming the new dispersing element is 0.03 Å, which is a factor of 5 lower than that of the compared one. This translates into a lower limit to the redshift of z ≳ 0.55, placing the source at a plausible redshift where the absorption from the EBL becomes severe, and making this TeV object an excellent probe for the study of the EBL through absorption. In this figure we also report the expected SNR obtained with our device (solid magenta in the top left box), and the one simulated assuming the R2500 devices currently available at the GTC.

§.§ Three-multiplexed grating case

For the science cases that require a wide spectral range with moderate resolution, nowadays the only possibility to fulfill the requirements is to adopt an echelle grating based instrument, which is capable of securing wide wavelength ranges in a reasonable number of shots. Otherwise, according to the OSIRIS GRISM specifications in the GTC manual, up to six different setups (and exposures) are required to obtain the same result just in terms of spectral range, since the maximum resolution is approximately R_max=2500. In this section we present a possible application of multiplexed VPHGs, aimed at refurbishing the dispersive elements of OSIRIS at GTC, in order to reach the closest possible performance with respect to the UV and VIS arms of X-SHOOTER. In order to cover a wide spectral range with a resolution of approximately 4500, we have designed two multiplexed dispersive elements, each one composed of three stacked layers; each will therefore produce three spectra on the detector in a single exposure. With these two devices together, in just two exposures, we can cover a range from 3500 to 10000 Å. While the number of dispersive layers could in theory be further increased, due to complexities in the calculations, possible transparency issues and manufacturing alignment, in this work we decided to set the limit to three elements.

§.§.§ BLUE device, from 350 to 600 nm

The first (of two) multiplexed devices is responsible for the dispersion of the light in the range 3500 – 6000 Å, and hereafter it will be identified as the BLUE device. It is composed of three dispersing layers, each of them generating one of the peaks in the summed efficiency displayed in Figure <ref> (solid blue curve). For this case, we do not report the plot with the individual contributions that generate the overall efficiency, since the general procedure is the same as in the two-multiplexed case. As highlighted in the previous case, the vertical solid lines identify the size of the detector with respect to each spectrum: since the total range will appear divided into three parts, the upper one is displayed with solid blue boundaries, the central with green and the lower with red. As some small portions of the range overlap, bumps in efficiency appear in the regions between the peaks. In Table <ref>, we report the specifications of the three gratings that have been designed for this BLUE element, along with the calculated resolution and dispersion that are achievable by integrating this device in the OSIRIS spectrograph.
§.§.§ RED device, from 600 to 1000 nm

This second multiplexed device is responsible for dispersing the light in the spectral range from 6000 to 10000 Å, and hereafter it will be identified as the RED device. Figure <ref> (solid blue curve) reports the overall efficiency curve produced by the three dispersing layers composing this device. In Table <ref>, we report the specifications of the gratings that have been designed for this RED element, along with the calculated resolution and dispersion that are achievable by integrating this device in the OSIRIS spectrograph.

§.§.§ Application to extragalactic astrophysics: the characterisation of the intergalactic medium

The study of the intergalactic and circumgalactic medium (IGM and CGM) is a powerful tool to investigate the properties of the cool (and clumpy) gaseous halos between the observer and a source that lies at a certain z. The only way to investigate the IGM or the CGM is through the absorption lines imprinted in the spectra of distant QSOs, as demonstrated in the last few years by e.g. <cit.>, since their surface brightness is too faint to be probed directly, and only a few examples are known in which the detection of Ly-α line emission from the CGM has succeeded (e.g. <cit.>). This research field is actively growing and has recently begun to probe not only the physical state and the chemical composition of the IGM, but also the three-dimensional distribution of the gas, allowing scientists to build up an actual tomography of the cool Universe between background quasars and the Earth. For example, in this context one of the most recent and successful surveys is the CLAMATO survey <cit.>. In this project, the authors aim to collect spectra of 500 background Lyman-Break Galaxies (LBGs) in a ∼ 1 sq. degree area to reconstruct a 3D map with an equivalent volume of (100 h^-1Mpc)^3. The key requirement in these spectroscopic studies is the availability of a moderate-high resolving power (R ∼ 5000) and a wide spectral coverage, in order to probe as many absorption lines as possible and to compute the diagnostic ratios that probe the interplay between galaxies and the intergalactic medium (IGM). However, such observations are typically time consuming and require a good SNR at moderate R, a particular challenge for distant QSOs, which tend to be faint. For all of these reasons, the availability of new instrumentation able to collect spectra over a wide spectral range (3500 Å - 1 μm) at R > 4000 would be really advantageous, further increasing the number of telescopes able to tackle this kind of survey, especially for facilities with moderate apertures. In order to demonstrate the applicability of our new device, we simulated the expected performance by assuming an observation of t_exp = 200 s for each grating of a QSO (template taken from <cit.>) at redshift z = 3.78 with m_R∼ 17. The overall obtained spectrum is reported in Figure <ref>. In particular, the solid blue line corresponds to the emission spectrum of the quasar secured with the BLUE multiplexed device. The absorption lines, imprinted by intervening Lyman-α systems and used to probe the IGM, are clearly detected and resolved in most cases. The solid red line, instead, reports the spectrum recorded with the RED multiplexed device, where the emission lines of C IV and C III] are visible.
The results reported in Figure <ref> are obtained with a total integration time of about 400 s; by comparison, obtaining the same results at half the resolution with the grisms available at GTC-OSIRIS would require more than 1000 s, since the target would have to be observed four times with 4 different gratings. As highlighted in the previous paragraphs, the X-SHOOTER spectrograph is able to obtain similar results with a broader band in a single snapshot. Although this outcome is obviously outside the capabilities of our proposed solution, the multiplexed VPHG allows one to reach, in just two snapshots, a comparable quality (in terms of R, SNR and spectral range) in the UV and visible bands. Therefore, the integration of such an element in a facility like OSIRIS would make it possible to compete scientifically in key-science projects that require spectroscopic capabilities otherwise available only through major instrument commissioning.

§ CONCLUSIONS

We have demonstrated the theoretical feasibility and the advantages of an innovative dispersive element, able to greatly increase the performances of an existing spectrograph at the state-of-the-art 10 m telescope GTC. Thanks to the advantages derived from the adoption of the photopolymeric material considered in the simulations, we achieved an increase of at least a factor of two in resolution (and thus a corresponding saving in observing time), without changes in the optical layout of the spectroscopic instrument. We have also shown that, in the case of the three-multiplexed VPHG, it is possible to reach with GTC OSIRIS approximately the performances of the UV and VIS arms of X-SHOOTER (when operating at medium resolution) in just two exposures of the same target. Even though in this work we have selected GTC OSIRIS for the simulations, the philosophy behind this multiplexing design could be applied to almost every focal reducer spectrograph, bringing the discussed advantages to all such instruments and allowing them to handle scientific cases that would otherwise be out of reach for these facilities. In the forthcoming second part of this work, we will realize and integrate the multiplexed device in a spectrograph for science verification, focusing on the observational cases highlighted in the simulations of this paper.

§ ACKNOWLEDGEMENTS

This work was partly supported by the European Community (FP7) through the OPTICON project (Optical Infrared Co-ordination Network for astronomy) and by the INAF through the TECNO-INAF 2014 grant "Innovative tools for high resolution and infrared spectroscopy based on non-standard volume phase holographic gratings".
{ "authors": [ "Zanutta Alessio", "Landoni Marco", "Riva Marco", "Bianco Andrea" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170426145838", "title": "Spectral multiplexing using stacked VPHGs - Part I" }
Numerical treatment of a non-local parabolic free boundary problem arising in financial bubbles

[A. Arakelyan] Institute of Mathematics, NAS of Armenia, 0019 Yerevan, Armenia. [Avetik Arakelyan] [email protected]
[R. Barkhudaryan] Institute of Mathematics, NAS of Armenia, 0019 Yerevan, Armenia. [Rafayel Barkhudaryan] [email protected]
[H. Shahgholian] Department of Mathematics, The Royal Institute of Technology, 100 44 Stockholm, Sweden. [Henrik Shahgholian] [email protected]
[M. Salehi] Department of Mathematics, Statistics and Physics, Qatar University, P.O. Box 2713, Doha, Qatar. [Mohammad Mahmoud Salehi] [email protected]

[2010] 35R35; 35D40; 65M06; 91G80.

In this paper we continue the study of a non-local free boundary problem arising in financial bubbles. We focus on the parabolic counterpart of the bubble problem and suggest an iterative algorithm which consists of a sequence of parabolic obstacle problems, solved at each step, each of which in turn provides the obstacle function for the next iteration. The convergence of the proposed algorithm is proved. Moreover, we consider a finite difference scheme for this algorithm and obtain its convergence. At the end of the paper we present and discuss computational results.

§ INTRODUCTION

The problem of financial bubbles, which arises in the modeling of speculative bubbles, is the subject of study in this paper. In <cit.> the authors suggest a model for studying speculative trading, where the ownership of a share of stock allows traders to profit from others' overvaluation. The model assumes that trading agents "agree to disagree" and that there are restrictions on short selling. As a result, asset prices may become higher than their fundamental values. Another important ingredient of the model is overconfidence, i.e. each agent believes that his own information is more accurate than that of the others, which provides the source of disagreement. Our point of departure is the stationary case of such a model, formulated in terms of viscosity solutions; in the present paper we consider the parabolic model and the behavior of the problem as time evolves.

The stationary version of this model was introduced and solved by Scheinkman and Xiong <cit.>. The article <cit.> considers the one-dimensional case, and in that specific case it is possible to construct an explicit stationary solution, which was done in <cit.> based on Kummer functions. Other stationary models related to the one in <cit.> were studied in <cit.>. In this paper we analyze the time-dependent (parabolic) free boundary problem for the financial bubble problem from a PDE point of view. Hence the model equation, studied in this paper, is the following free boundary problem formulated as a Hamilton-Jacobi equation:

min(Lu, u(t,x)-u(t,-x)-ψ(t,x))=0, (t,x)∈ℝ^+×Ω,

where Ω⊂ℝ is a symmetric bounded domain, i.e. if x∈Ω then -x∈Ω, and ψ∈ C^2(ℝ^+×Ω). As mentioned above, we consider the time-dependent parabolic case, i.e. the operator L is the parabolic operator

Lu=u_t+Mu,

where M is the elliptic counterpart of the operator L, defined by

Mu=a^ij(x)D_iju+b^i(x)D_iu+c(x)u, a^ij=a^ji.

Here the coefficients a^ij, b^i, c are assumed to be continuous and the matrix [a^ij(x)] is positive definite for all x∈Ω. Additionally, we assume that the coefficients are "symmetric" in the domain Ω, i.e. the operator applied to the reflected function ũ(t,x):=u(t,-x) should coincide with the reflection of Lu:

(Lũ)(t,x)=(Lu)(t,-x).
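Since the symmetry requirement may look abstract, it is worth spelling it out in one dimension (a routine chain-rule check, added for the reader's convenience). With ũ(t,x)=u(t,-x) one has ∂_x ũ(t,x)=-u_x(t,-x) and ∂_xxũ(t,x)=u_xx(t,-x), hence

(Lũ)(t,x)=u_t(t,-x)+a(x)u_xx(t,-x)-b(x)u_x(t,-x)+c(x)u(t,-x),

while

(Lu)(t,-x)=u_t(t,-x)+a(-x)u_xx(t,-x)+b(-x)u_x(t,-x)+c(-x)u(t,-x).

Comparing the two expressions term by term shows that the symmetry relation amounts to parity conditions on the coefficients, which is the content of the next remark.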
The relation (<ref>) holds precisely when a^ij and c are even functions and b^i is odd. If the domain Ω is bounded, we consider the problem with the following initial and boundary conditions:

u(0,x)=g(0,x), x∈Ω,
u(t,x)=g(t,x), (t,x)∈ℝ^+×∂Ω.

A specific and important case is the Black-Scholes setting, i.e. the domain Ω=ℝ and the differential operator

Mu=-(1/2)σ^2 u_xx+ρ x u_x+ ru.

Our main concern is to develop a numerical method for the following non-local free boundary problem:

min(∂_t u+Mu, u(t,x)-u(t,-x)-ψ(t,x))=0, (t,x)∈ℝ^+×Ω,
u(0,x)=g(0,x), x∈Ω,
u(t,x)=g(t,x), (t,x)∈ℝ^+×∂Ω,

where

ψ(t,x)=x(1 - e^-(r+λ)t)/(r+λ) - c,

with σ, ρ, r>0. Problem (<ref>) arises in the modeling of speculative financial bubbles: the financial model of speculative trading described in <cit.> allows traders to profit from others' overvaluation, under the additional assumption that trading agents may "agree to disagree". Due to speculative trading, asset prices may exceed their fundamental values. The stationary, or finite horizon, version of this model was introduced and solved by Scheinkman and Xiong <cit.>. Scheinkman and Xiong consider the one-dimensional case, in which they are able to construct an explicit solution; this was possible because in one dimension the solution of the Black-Scholes equation can be expressed through Kummer functions. Other stationary models were studied in <cit.>. The multidimensional stationary problem was considered in <cit.>, where existence and uniqueness of the viscosity solution were proved.

It is apparent that if u(t,x) is a solution to equation (<ref>), then u(t,-x) is a solution to the reflected problem, with all ingredients reflected accordingly. In particular, u(t,x) ≥ u(t,-x) + ψ(t,x) ≥ u(t,x) + ψ(t,-x), so that ψ(t,x) + ψ(t,-x) ≤ 0 is forced as a condition for an existence theory. A standing assumption in this paper is that the constraint ψ and the boundary data g(t,x) satisfy the following inequalities:

ψ(t,x) + ψ(t,-x) ≤ 0, ψ(t,x)≤ g(t,x).

For more details on this problem see <cit.>. For a review of obstacle-type PDE models in the socio-economic sciences see <cit.>; other theoretical aspects of obstacle-type problems can be found in <cit.>.

In this paper our objective is to study, through an increasing iterative algorithm, a solution of the above problem in [0,T]×Ω. The algorithm consists of a sequence of parabolic obstacle problems, one at each step, that eventually converges to the solution. We also study the corresponding difference scheme developed for the iterative algorithm.

§ THE ITERATIVE ALGORITHM

To deal with the problem we first recall the definition of the so-called viscosity solution, following <cit.>.

A function u:ℝ^+×Ω→ℝ is a viscosity subsolution (resp. supersolution) of (<ref>) on ℝ^+×Ω if u is upper semi-continuous (resp. lower semi-continuous), u≤ g (resp. u≥ g) on the boundary of ℝ^+×Ω, and if for any function φ∈ C^1,2(ℝ^+×Ω) and any point (t_0,x_0) ∈ℝ^+×Ω such that u(t_0,x_0) = φ(t_0,x_0) and u≤φ (resp. u ≥φ) on ℝ^+×Ω, one has

min(Lφ(t_0,x_0), φ(t_0,x_0)-φ̃(t_0,x_0)-ψ(t_0,x_0)) ≤ 0
(resp. min(Lφ(t_0,x_0), φ(t_0,x_0)-φ̃(t_0,x_0)-ψ(t_0,x_0)) ≥ 0).

A function u:ℝ^+×Ω→ℝ is a viscosity solution of (<ref>) on Ω if and only if u^* is a viscosity subsolution and u_* is a viscosity supersolution on Ω, where

u^*(t,x)=lim sup_(l,y)→ (t,x)u(l,y), u_*(t,x)=lim inf_(l,y)→ (t,x)u(l,y).
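Note that, for the specific constraint ψ given above, the first standing assumption is a one-line check:

ψ(t,x)+ψ(t,-x) = x(1 - e^-(r+λ)t)/(r+λ) - c - x(1 - e^-(r+λ)t)/(r+λ) - c = -2c ≤ 0,

so it holds whenever the cost parameter c is non-negative, as in the numerical examples of Section <ref> (where c=0.001); the companion condition ψ≤ g, on the other hand, is a genuine restriction on the boundary data.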
To construct the algorithm, we first define a function u_0 (the initial guess) as the solution of the following problem:

L u_0=0, (t,x)∈ℝ^+×Ω,
u_0(0,x)=g(0,x), x∈Ω,
u_0(t,x)=g(t,x), (t,x)∈ℝ^+×∂Ω.

Inductively, we define the sequence {u_k} by

min(L u_k+1, u_k+1-ũ_k-ψ)=0, (t,x)∈ℝ^+×Ω,
u_k+1(0,x)=g(0,x), x∈Ω,
u_k+1(t,x)=g(t,x), (t,x)∈ℝ^+×∂Ω.

For each k we have an obstacle problem with the obstacle ũ_k+ψ. In this section our goal is to show that (<ref>) produces a non-decreasing sequence {u_k} and then to show that the sequence {u_k} tends to the viscosity solution of (<ref>).

The sequence {u_k} is non-decreasing.

It is easy to see that u_1≥ u_0, since the function u_1 is the solution of L u_1(t,x)=0 with obstacle ũ_0+ψ, and u_0 is the solution of the same problem without an obstacle. To prove that u_2(t,x)≥ u_1(t,x), let us examine equation (<ref>). When k=1 the obstacle is Ψ_2:=u_1(t,-x)+ψ(t,x), and for the case k=0 the obstacle is Ψ_1:=u_0(t,-x)+ψ(t,x). Since u_1(t,x)≥ u_0(t,x) we have u_1(t,-x)≥ u_0(t,-x) and hence Ψ_2 ≥Ψ_1. Furthermore, u_2(t,x) and u_1(t,x) are solutions of the same obstacle problem with obstacles Ψ_2 and Ψ_1, respectively. Since Ψ_2 ≥Ψ_1, the comparison principle (see <cit.>, page 80, problem 5) gives u_2(t,x)≥ u_1(t,x). By induction, u_k+1 and u_k solve the obstacle problem with obstacles Ψ_k+1:= u_k(t,-x) + ψ and Ψ_k:= u_k-1(t,-x) + ψ, respectively; since u_k≥ u_k-1, we have Ψ_k+1≥Ψ_k, and hence by the comparison principle u_k+1≥ u_k.

We next show that the algorithm is bounded above by the solution of (<ref>).

If w is a solution of the problem (<ref>), then u_k≤ w.

First of all, u_0≤ w, since the function w solves the problem ∂_t w+M w=0 with obstacle w̃+ψ, while u_0 solves the same problem without an obstacle. If u_k(t,x)≤ w(t,x), then by induction we conclude that u_k+1(t,x)≤ w(t,x), since w solves the problem with obstacle w̃+ψ, while u_k+1 solves the same problem with the smaller obstacle ũ_k+ψ≤w̃+ψ.

If the domain Ω=ℝ and the operator L is the Black-Scholes operator, the existence of a solution is proved in <cit.>, so the algorithm is bounded. If the domain Ω is bounded, the boundedness of the algorithm is proved in Proposition <ref>.

If the domain Ω is bounded, then the sequence {u_k(t,x)} is bounded for every fixed t, i.e. there exists a constant M_t such that u_k(t,x)≤ M_t for all k∈ℕ and x∈Ω.

Let T_0 be fixed and let h(x) be a symmetric function defined in Ω satisfying

M h(x) = 1, h(x)≥ 1, ψ(t,x) ≤ C_T_0 h(x)

for some large C_T_0, such that ∂_t ψ(t,x)+Mψ(t,x) +C_T_0≥ 1 holds in [0,T_0]×Ω; here we have assumed ψ(t,x) ∈ C^1,2([0,+∞)×Ω). Then from the algorithm defined in (<ref>) we have

min(∂_t v_k+1(t,x)+Mv_k+1(t,x) + 1, v_k+1(t,x) - u_k(t,-x)-C_T_0h(x)) ≤ 0,

where v_k = u_k - ψ + C_T_0h. Now suppose by induction that

max(sup v_k(t,x), sup(v_k(t,x) + ψ(t,x))) ≤ M_T_0 in [0,T_0]×Ω,

where

M_T_0:=max{sup_[0,T_0]×Ω(|g(t,x)|+|ψ(t,x)|+|C_T_0h(x)|), sup_[0,T_0]×Ω v_0(t,x)}.

The estimate (<ref>) is obviously true for the starting value u_0. Let further the maximum value of v_k+1 be achieved at a point (t^*,x^*) ∈ [0,T_0]×Ω̅. If it is attained on the boundary then we are done.
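Before passing to the analysis, the structure of the iteration is easy to transcribe in code. The sketch below (Python) is schematic: solve_parabolic_obstacle is a placeholder for any solver of a single parabolic obstacle problem, e.g. the finite difference scheme of the next section, and a uniform grid symmetric about x=0 is assumed so that the reflection x↦-x is an index flip.

```python
import numpy as np

def iterate_bubble(solve_parabolic_obstacle, psi, g, x, t, n_iter=5):
    """Outer loop of the algorithm above. psi: obstacle part on the (t, x)
    grid; g: initial/boundary data. u_0 solves L u = 0 (obstacle set to -inf
    by convention), and u_{k+1} solves the parabolic obstacle problem with
    obstacle u_k(t,-x) + psi(t,x)."""
    u = solve_parabolic_obstacle(np.full_like(psi, -np.inf), g, x, t)   # u_0
    for _ in range(n_iter):
        u = solve_parabolic_obstacle(u[:, ::-1] + psi, g, x, t)         # u_{k+1}
    return u
```

Proposition <ref> guarantees that the iterates produced this way increase monotonically; in the experiments of Section <ref> five iterations are used.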
If it is attained inside the domain, then by the ellipticity of the operator (and the concavity of the graph of v_k+1 at the interior maximum point (t^*,x^*)) we have Mv_k+1(t^*,x^*)-∂_t v_k+1(t^*,x^*)≤ 0, which implies ∂_t v_k+1(t^*,x^*)+Mv_k+1(t^*,x^*) + 1 > 0, and hence by the inequality (<ref>) we have

v_k+1(t^*,x^*)≤ u_k(t^*,-x^*)+ C_T_0h(x^*) = v_k(t^*,-x^*)+ ψ(t^*,-x^*) ≤ M_T_0,

where we have used h(x^*) = h(-x^*). It remains to prove v_k+1(t,x) + ψ(t,x) ≤ M_T_0. We make a similar argument for w_k+1(t,x) := v_k+1(t,-x) + ψ(t,-x) = u_k+1(t,-x) + C_T_0h(x), which satisfies a similar type of equation, with the reflected versions of the ingredients:

min(∂_t w_k+1(t,x)+Mw_k+1(t,x) + 1, w_k+1(t,x) - u_k(t,x) -C_T_0h(x)-ψ(t,-x)) ≤ 0.

As before, let the maximum value of w_k+1 be achieved at a point (t^*,x^*) ∈ [0,T_0]×Ω̅. Obviously, if the maximum is attained on the boundary then we have the desired estimate. Hence we assume the maximum is attained inside the domain, and by the same arguments as above we obtain

w_k+1(t,x) ≤ u_k(t,x) + ψ(t,-x) + C_T_0h(x) ≤ u_k(t,x) - ψ(t,x)+ C_T_0h(x) = v_k(t,x) ≤ M_T_0,

where we have used ψ(t,x) + ψ(t,-x) ≤ 0. Hence we arrive at

max(sup v_k+1(t,x), sup(v_k+1(t,x) + ψ(t,x))) ≤ M_T_0

in the inductive step. This completes the proof.

§.§ Convergence of the iterative algorithm

If u^k(t,x) is the iterative sequence given by (<ref>) and u=lim_k→∞ u^k, then u is the unique continuous viscosity solution of (<ref>).

Being the limit of a bounded increasing sequence of continuous functions, u is lower semi-continuous, i.e.

u(t,x)=u_*(t,x):=lim inf_(l,y)→ (t,x)u(l,y).

We also denote by u^* the upper semi-continuous envelope of u, i.e.

u^*(t,x):=lim sup_(l,y)→ (t,x)u(l,y).

First we show that the function u^* is a subsolution of (<ref>). For that purpose, suppose u^* is not a subsolution. Then there exist (t_0,x_0)∈ℝ^+×Ω and a polynomial P of degree two satisfying

P≥ u^*, P(t_0,x_0)=u^*(t_0,x_0),

such that

min{∂_t P+M P, P-P̃-ψ}>0

at (t_0,x_0). Then both

∂_t P(t_0,x_0)+M P(t_0,x_0)>0

and

P(t_0,x_0)>P̃(t_0,x_0)+ψ(t_0,x_0)

hold. Substituting the values of P at (t_0,x_0), the last inequality can be rewritten as

u^*(t_0,x_0)>ũ^*(t_0,x_0)+ψ(t_0,x_0).

Using the continuity of ψ, and the fact that u_j↑ u^*, we deduce that there exists a number r>0 such that

∂_t P(t,x)+M P(t,x) >0, (t,x)∈ B((t_0,x_0),r),
u^*(t,x)>ũ^*(t,x)+ψ(t,x), (t,x)∈ B((t_0,x_0),r),

and (using the continuity of u^*-ũ^*) there exists a positive number μ < u^*(t_0,x_0)-ũ^*(t_0,x_0)-ψ(t_0,x_0) such that

u_j(t,x)> ũ_j-1(t,x)+ψ(t,x)+μ, (t,x)∈ B((t_0,x_0),r).

Denote

P_ε=P+εη(t,x),

where η(t,x) is a function which satisfies ∂_tη- Lη=-1, η≥0 and η(t_0,x_0)=0. If ε>0 is small enough, then

∂_t P_ε(t,x)+M P_ε(t,x)>0, (t,x)∈ B((t_0,x_0),r).

Next observe that

P_ε(t,x)> P(t,x) ≥ u^*(t,x), (t,x)≠ (t_0,x_0).

As u_k↑ u^*, we can choose j large enough to satisfy

inf_B((t_0,x_0),r)(P_ε-u_j)<min_∂ B((t_0,x_0),r)(P_ε-u^*)

and

inf_B((t_0,x_0),r)(P_ε-u_j)<μ.

Take Q_ε=P_ε-c, where c is a constant chosen in such a way that Q_ε touches u_j from above at some (t',x')∈ B((t_0,x_0),r) (the inequality (<ref>) guarantees that the first touching point in B((t_0,x_0),r) is not on the boundary of B((t_0,x_0),r)). We have thus constructed a function Q_ε satisfying the following conditions:

Q_ε(t,x)≥ u_j(t,x), (t,x)∈ B((t_0,x_0),r),
Q_ε(t',x')=u_j(t',x'), where (t',x')∈ B((t_0,x_0),r).
Since u_j is a viscosity subsolution (and, in fact, a solution) of

min{∂_t u_j +Mu_j, u_j-ũ_j-1-ψ}=0, u_j|_∂Ω=g,

then, by the definition of viscosity subsolution and (<ref>)-(<ref>), we obtain

min{∂_tQ_ε(t',x')+M Q_ε(t',x'), Q_ε(t',x')-ũ_j-1(t',x')-ψ(t',x')}≤0.

Using (<ref>), we have ∂_t Q_ε(t',x')+M Q_ε(t',x')>0, so the only possibility to satisfy (<ref>) is

Q_ε(t',x')≤ũ_j-1(t',x')+ψ(t',x').

This means that

P(t',x')≤ũ_j-1(t',x')+ψ(t',x') +c - εη(t',x').

Then

u_j(t',x')≤ u^*(t',x')≤ P(t',x')≤ũ_j-1(t',x')+ψ(t',x') +c - εη(t',x'),

hence

u_j(t',x')≤ũ_j-1(t',x')+ψ(t',x') +c.

But we deduce from (<ref>) that

u_j(t',x')> ũ_j-1(t',x')+ψ(t',x') +μ.

This is a contradiction, since by (<ref>) it follows that c<μ. Hence u^* is a subsolution of (<ref>).

Let us now discuss the supersolution property of u=u_*, see (<ref>). Suppose u is not a supersolution. Then there exist (t_0,x_0)∈ℝ^+×Ω and a polynomial P of degree two satisfying

P≤ u, P(t_0,x_0)=u(t_0,x_0),

such that

min{∂_t P(t_0,x_0)+M P(t_0,x_0), P(t_0,x_0)-P̃(t_0,x_0) - ψ(t_0,x_0)}<0.

Then

∂_t P(t_0,x_0)+M P(t_0,x_0)<0

or

P(t_0,x_0)<P̃(t_0,x_0)+ψ(t_0,x_0).

Let us consider the first inequality. Using the continuity of u_j, we deduce that there exists a number r>0 such that

∂_t P(t,x)+M P(t,x)<0, (t,x)∈ B((t_0,x_0),r).

As in the previous case, we construct a new polynomial Q=P-c which touches u_j at some point (t',x')∈ B((t_0,x_0),r), i.e.

Q(t,x)≤ u_j(t,x), (t,x)∈ B((t_0,x_0),r),
Q(t',x')=u_j(t',x'), where (t',x')∈ B((t_0,x_0),r).

Since u_j is a viscosity supersolution (and, in fact, a solution) of

min{∂_t u_j +Mu_j, u_j- ũ_j-1-ψ}=0, u_j|_∂Ω=g,

we obtain a contradiction, since ∂_t Q+MQ<0.

It remains to show that inequality (<ref>) also cannot hold. For that purpose, let us substitute the values of P and rewrite (<ref>):

u(t_0,x_0)-ũ(t_0,x_0)<ψ(t_0,x_0).

The function u-ũ is continuous, and if j is large enough then

u_j(t,x)-ũ_j-1(t,x)<ψ(t,x), (t,x)∈ B((t_0,x_0),r).

This is a contradiction, as u_j must satisfy (<ref>).

The continuity of u now follows from the comparison principle (see <cit.>). Indeed, the comparison principle implies that a supersolution should be greater than or equal to a subsolution, while from the definition of the upper envelope it follows that u^*≥ u; hence u=u^* is a continuous viscosity solution of (<ref>).

§ FINITE DIFFERENCE SCHEME FOR THE ITERATIVE ALGORITHM

For every step of the above algorithm we have to solve an obstacle problem, and we do this numerically using a finite difference scheme. Finite difference schemes have been used extensively for the numerical solution of variational inequalities, one-phase obstacle problems of elliptic and parabolic type and, in particular, for the valuation of American type options (for details, see <cit.> and the references therein). In 2009, an explicit finite difference scheme was applied to the one-dimensional parabolic obstacle problem in connection with the valuation of American type options (see <cit.>). It has been proved that, under some natural conditions, the finite difference scheme converges to the exact solution and the rate of convergence is O(√(Δ t)+Δ x), where Δ x and Δ t are the space- and time-discretization steps. Recently, in the works <cit.>, finite difference schemes and the corresponding convergence results have been applied to the one-phase and two-phase elliptic obstacle problems.

In this section we assume that our bubble problem (<ref>) is defined on Ω^T=[0,T]×Ω, where Ω=[-a,a], and M is the Black-Scholes operator defined in the introduction.
To construct a finite difference scheme we start by discretizing the domain Ω^T=[0,T]×Ω into a regular uniform mesh. We denote by Ω_h and Ω_h^T the uniform discretized sets of Ω and Ω^T, respectively. For the sake of convenience we use h as a shorthand for the pair (Δ x, Δ t). Thus,

Ω_h^T={(nΔ t, -a+mΔ x)∈ℝ^2 : n=0,1,2,…,N, m=0,1,2,…,M},

where Δ t=T/N and Δ x=2a/M. The discrete Black-Scholes operator is defined as follows:

L_hv(t,x) ≡ (v(t,x)-v(t-Δ t,x))/Δ t - (σ^2/2)·(v(t,x+Δ x)-2v(t,x)+v(t,x-Δ x))/(Δ x)^2 + ρ x·(v(t,x+Δ x)-v(t,x-Δ x))/(2Δ x) + rv(t,x),

for any interior point (t,x)∈Ω_h^T. Let u^(k)=u^(k)(t,x) be a solution of the iterated obstacle problem with obstacle u^(k-1)(t,-x)+ψ(t,x). We denote by u^(k)_h the solution of the following nonlinear system:

min(L_h u_h^(k)(t,x), u_h^(k)(t,x)-u^(k-1)(t,-x)-ψ(t,x))=0, (t,x)∈Ω_h^T,
u_h^(k)(0,x)=g(0,x), x∈Ω_h,
u_h^(k)(nΔ t,± a)=g(nΔ t,± a), n=0,1,2,…,N.

We define the variational form of the parabolic obstacle problem by

F_h^g[v] ≡ min(L_h v(t,x), v(t,x)-g(t,x)).

Then the following discrete comparison principle for the difference scheme holds.

Let L_h, defined by (<ref>), satisfy |ρ x|≤σ^2/Δ x for every x∈Ω, where Ω=[-a,a]. If u(t,x) and v(t,x) are piecewise continuous functions and satisfy

F_h^g[u]≥ F_h^g[v], (t,x)∈ [0,T]×Ω,
u(0,x) ≥ v(0,x), x∈Ω_h,

then

u(t,x) ≥ v(t,x), (t,x)∈Ω_h^T.

We prove by induction that

u(nΔ t,x) ≥ v(nΔ t,x), ∀ x∈Ω_h,

for all n=0,1,2,…,N, where N=T/Δ t. In the case n=0 the inequality (<ref>) coincides with the assumption of the lemma. Assume that (<ref>) holds for n=k; we shall prove that it holds for n=k+1 as well. We set t^k+1≡(k+1)Δ t. For (t^k+1,x) with x∈Ω_h, if v(t^k+1,x)-g(t^k+1,x)=F_h^g[v](t^k+1,x), then clearly

u(t^k+1,x)-g(t^k+1,x)≥ F_h^g[u](t^k+1,x)≥ F_h^g[v](t^k+1,x)=v(t^k+1,x)-g(t^k+1,x),

which implies u(t^k+1,x)≥ v(t^k+1,x) in this case. On the other hand, if L_h v(t^k+1,x)=F_h^g[v](t^k+1,x), then

L_h u(t^k+1,x)≥ F_h^g[u](t^k+1,x)≥ F_h^g[v](t^k+1,x)=L_h v(t^k+1,x),

for every x∈Ω_h. In the sequel we use the following notation:

w_m^k ≡ w(kΔ t, -a+mΔ x) ≡ w(t^k,x_m),

for all k=1,2,…,N and m=0,1,2,…,M. In view of (<ref>) we have

0≤ L_h[u(t^k+1,x)-v(t^k+1,x)]=L_h[u_m^k+1-v_m^k+1],

where x=-a+mΔ x ∈Ω_h. Using the definition (<ref>), after a simple computation one gets

0≤Δ t· L_h[u_m^k+1-v_m^k+1]=e(u_m^k+1-v_m^k+1)+d_m(u_m+1^k+1-v_m+1^k+1)+f_m(u_m-1^k+1-v_m-1^k+1)-(u_m^k-v_m^k),

where

e=1+σ^2Δ t/(Δ x)^2+rΔ t, d_m=ρ x_m Δ t/(2Δ x)-σ^2Δ t/(2(Δ x)^2), f_m=-ρ x_m Δ t/(2Δ x)-σ^2Δ t/(2(Δ x)^2).

Let us rewrite the above equation in matrix form for all m=0,1,2,…,M. We have

Δ t· L_h(U^k+1-V^k+1)=A·(U^k+1-V^k+1)-(U^k-V^k),

where L_h(U^k+1-V^k+1), U^k+1 and V^k+1 are column matrices with entries L_h[u_m^k+1-v_m^k+1], u_m^k+1 and v_m^k+1, respectively. The matrix A is tridiagonal, with A=eI-B, where I is the identity matrix and B is a tridiagonal matrix with 0 on the main diagonal and -d_m, -f_m on the first diagonals above and below the main one. According to <cit.>, the matrix A satisfies the properties of an M-matrix; thus there exists an inverse matrix A^-1 with non-negative elements, provided e>ρ(B), where ρ(B) is the spectral radius of the matrix B. Let us verify the condition e>ρ(B). To this end, we observe that ρ(B)≤ ||B||, where the norm ||·|| is taken with respect to the rows, i.e. ||B||:=max_i ∑_l |b_i,l|. On the other hand, according to the definition of B we get

||B||=max_i ∑_l |b_i,l| = max_i (|f_i|+|d_i|) = σ^2Δ t/(Δ x)^2 < e,

due to the condition |ρ x|≤σ^2/Δ x of the lemma, which guarantees d_m≤0 and f_m≤0.
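The text does not prescribe how the nonlinear system (<ref>) is solved at each time level; one standard choice for complementarity problems with M-matrices is projected Gauss-Seidel/SOR, sketched below (Python) with the coefficients e, d_m, f_m exactly as above. The function name and defaults are ours, not the paper's.

```python
import numpy as np

def psor_obstacle_step(u_prev, obstacle, x, dt, dx, sigma, rho, r,
                       omega=1.0, n_sweeps=500, tol=1e-10):
    """One implicit time level of the scheme: solve, componentwise,
    min(A u - u_prev, u - obstacle) = 0 with A = e I - B, by projected
    Gauss-Seidel (omega = 1) or SOR (1 < omega < 2). Boundary entries of
    u_prev carry the Dirichlet data and are left untouched."""
    lam = sigma ** 2 * dt / (2.0 * dx ** 2)
    e = 1.0 + 2.0 * lam + r * dt              # main diagonal of A
    d = rho * x * dt / (2.0 * dx) - lam       # coefficient of u_{m+1}
    f = -rho * x * dt / (2.0 * dx) - lam      # coefficient of u_{m-1}
    u = u_prev.copy()
    for _ in range(n_sweeps):
        err = 0.0
        for m in range(1, x.size - 1):
            gs = (u_prev[m] - d[m] * u[m + 1] - f[m] * u[m - 1]) / e
            new = max(obstacle[m], u[m] + omega * (gs - u[m]))
            err = max(err, abs(new - u[m]))
            u[m] = new
        if err < tol:
            break
    return u
```

Under the condition |ρ x| ≤ σ^2/Δ x of the lemma one has d_m, f_m ≤ 0, so A is an M-matrix and the projected sweeps converge; marching n=1,…,N with obstacle u^(k-1)(t^n,-x)+ψ(t^n,x) then produces the iterate u_h^(k).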
Hence, ρ(B)≤ ||B||<e. Now, multiplying both sides of equation (<ref>) by A^-1, we arrive at:

(U^k+1-V^k+1)=A^-1·(U^k-V^k)+Δ t A^-1· L_h(U^k+1-V^k+1).

Recalling that the entries of A^-1 and of L_h(U^k+1-V^k+1) are non-negative, we conclude that u_m^k≥ v_m^k implies u_m^k+1≥ v_m^k+1. This completes the proof.

Let u_h^(k)(t,x) be a solution of (<ref>), with |ρ x|≤σ^2/Δ x for every x∈Ω. Then for every k∈ℕ we have

u_h^(k)(t,x)≤ u_h^(k+1)(t,x), (t,x)∈Ω_h^T.

Moreover, this sequence is bounded above, which in turn implies its convergence as k→∞.

To prove the statement we apply the discrete comparison principle to the variational form of the obstacle problems. In view of Proposition <ref> we have u^(k-1)(t,-x)≤ u^(k)(t,-x). This implies

0=min{L_hu^(k+1)_h(t,x), u^(k+1)_h(t,x)-u^(k)(t,-x)-ψ(t,x)}
 =min{L_hu^(k)_h(t,x), u^(k)_h(t,x)-u^(k-1)(t,-x)-ψ(t,x)}
 ≥min{L_hu^(k)_h(t,x), u^(k)_h(t,x)-u^(k)(t,-x)-ψ(t,x)}.

Thus one can apply the discrete comparison principle (Lemma <ref>) for the parabolic obstacle problem with obstacle u^(k)(t,-x)+ψ(t,x). This yields u_h^(k+1)(t,x)≥ u_h^(k)(t,x) for all (t,x)∈Ω_h^T.

Let us now prove the boundedness of the sequence {u_h^(k)(t,x)}. To do this, we set

M = sup_[0,T]×Ω |ψ(t,x)| + sup_[0,T]×Ω |w(t,x)| < +∞,

where ψ(t,x) is the obstacle and w(t,x) is a continuous viscosity solution of our parabolic bubble problem (<ref>). Then we obtain

min{L_hu^(k)_h(t,x), u^(k)_h(t,x)-u^(k-1)(t,-x)-ψ(t,x)}=0 ≤ min{L_h(M), M-u^(k-1)(t,-x)-ψ(t,x)}.

Since u^(k)_h(0,x)≤ M for every x∈Ω_h, the discrete comparison principle (Lemma <ref>) implies u^(k)_h(t,x)≤ M for every (t,x)∈Ω_h^T.

Let ε>0 be a fixed number and let δ_k>0 be so small that

|(u^(k-1)∗φ_δ)-u^(k-1)|≤ε, (t,x)∈Ω^T,

whenever δ≤δ_k. Then we have the following uniform estimate:

|u^(k),δ_h(t,x)-u^(k)_h(t,x)|≤ε, for every δ≤δ_k and (t,x)∈Ω_h^T.

The proof again relies on the discrete comparison principle for the variational form of the obstacle problems. We have

0 =min{L_hu^(k)_h(t,x), u^(k)_h(t,x)-u^(k-1)(t,-x)-ψ(t,x)}
  =min{L_hu^(k),δ_h(t,x), u^(k),δ_h(t,x)-(u^(k-1)∗φ_δ)(t,-x)-ψ(t,x)}
  ≥min{L_h(u^(k),δ_h(t,x)-ε), u^(k),δ_h(t,x)-u^(k-1)(t,-x)-ψ(t,x)-ε}.

The discrete comparison principle implies u^(k)_h(t,x)≥ u^(k),δ_h(t,x)-ε. In a similar way we obtain u^(k)_h(t,x)≤ u^(k),δ_h(t,x)+ε. This completes the proof.

Next, we want to estimate |u^(k),δ-u^(k),δ_h|, the difference between the exact solution and the finite difference approximation for a standard parabolic obstacle problem with a smooth obstacle. In recent years much attention has been given to estimates of this type (see <cit.>). We mainly follow the above-mentioned work <cit.>, which considers the problem of American option valuation and obtains a convergence rate of order O(Δ x + √(Δ t)).

For every k∈ℕ there exist δ_k>0 depending on k, and a constant C_Ω,k>0 depending on Ω and k, such that

|u^(k),δ(t,x)-u^(k),δ_h(t,x)|≤ C_Ω,k·(Δ x + √(Δ t)), ∀ (t,x)∈Ω_h^T,

for all δ≤δ_k.

We are now in a position to prove the convergence of the difference scheme for the iterative algorithm.

Let |ρ x|≤σ^2/Δ x for every x∈Ω, and let w(t,x) be a continuous viscosity solution of the parabolic bubble problem (<ref>) on [0,T]×Ω. Define u^(k)(t,x) to be the increasing iterative sequence (<ref>), and denote by u^(k)_h(t,x) the corresponding difference scheme. Then

lim_h→0 ( lim_k→∞ u_h^(k)(t,x) ) = w(t,x).

Assume, then, that w(t,x) is the continuous viscosity solution of the parabolic bubble problem.
Due to Lemma <ref> we know that u^(k)_h(t,x) is convergent, and therefore there exists some v_h(t,x) such that u^(k)_h↗ v_h as k→∞. By Theorem <ref> it is clear that v_h is a solution of

min{L_h v_h(t,x), v_h(t,x)-w(t,-x)-ψ(t,x)}=0, (t,x)∈Ω_h^T,
v_h(0,x)=g(0,x), x∈Ω_h,
v_h(nΔ t,± a)=g(nΔ t,± a), n=0,1,2,…,N.

On the other hand, v_h(t,x) is a difference scheme for the following parabolic obstacle problem:

min{L v(t,x), v(t,x)-w(t,-x)-ψ(t,x)}=0, (t,x)∈Ω^T,
v(0,x)=g(0,x), x∈Ω,
v(t,x)=g(t,x), (t,x)∈ [0,T]×∂Ω.

The solution of the obstacle problem (<ref>) is unique. But w(t,x) also solves (<ref>), which implies v(t,x)=w(t,x). Now, applying the Barles-Souganidis theorem for difference schemes (see <cit.>), we obtain v_h(t,x)→ v(t,x) uniformly as h→ 0. This completes the proof.

Next, we estimate |w(t,x)-v_h(t,x)|, the difference between the exact solution and the difference scheme for the parabolic obstacle problem (<ref>). As above, we follow <cit.>, where a convergence rate of order O(Δ x + √(Δ t)) is obtained for the American option valuation problem.

Let w(t,x) be a viscosity solution of the parabolic bubble problem (<ref>). If w(t,x)∈ C^2,1_x,t(Ω^T), then

|w(t,x)-v_h(t,x)|≤ C_Ω·(Δ x + √(Δ t)),

where v_h(t,x)=lim_k→∞ u_h^(k)(t,x).

For the proof we recall the parabolic obstacle problem (<ref>). As we have seen, its solution is w(t,x); hence we consider the error analysis for equation (<ref>) with obstacle g(t,x)=w(t,-x)+ψ(t,x), and proceed as in <cit.>.

§ NUMERICAL RESULTS

In this section we present computational tests for the non-local parabolic bubble problem. We consider the numerical solution of the parabolic financial bubble problem in the domain Ω^T=[0,3]×[-2,2]:

min(∂_t u -(1/2)σ^2 u'' + ρ x u' + r u, u-ũ-ψ)=0, (t,x)∈Ω^T,

where r = 10, ρ = 5, σ = 1, λ = 1 and the obstacle function is

ψ(t,x)=x(1 - e^-(r+λ)t)/(r+λ) - 0.001.

The numerical solution and its level sets are shown in Figure <ref>, computed with 50×50 discretization points and after 5 iteration steps.

§.§ One-dimensional stationary case

The one-dimensional stationary case of the financial bubble problem was considered in <cit.>, where an exact solution was constructed; we follow <cit.> below. Let

h(x) = U(r/(2ρ), 1/2, ρ x^2/σ^2), x≤ 0,
h(x) = (2π/(Γ(1/2+r/(2ρ))Γ(1/2))) M(r/(2ρ), 1/2, ρ x^2/σ^2) - U(r/(2ρ), 1/2, ρ x^2/σ^2), x>0,

where Γ is the gamma function, M:ℝ^3→ℝ is the confluent hypergeometric function of the first kind (Kummer's function) and U:ℝ^3→ℝ is the confluent hypergeometric function of the second kind. The function h(x) is positive and increasing on (-∞,0). Using the function h, the exact solution of the bubble problem can be written as

q(x) = (b/h(-k^*)) h(x), x<k^*,
q(x) = x/(r+λ) + (b/h(-k^*)) h(-x) - c, x≥ k^*,

where

b = (1/(r+λ)) · h(-k^*)/(h'(k^*)+h'(-k^*)),

and k^* is the free boundary of the problem, which satisfies

[k^*-c(r+λ)][h'(k^*) + h'(-k^*)] - h(k^*) + h(-k^*) = 0.

Equation (<ref>) can be rewritten in a simpler form:

U((1/2)(r/ρ-1), 1/2, k^*^2ρ/σ^2)·(2 k^*^2ρ^2 (k^*-c(λ+r)) + σ^2 (c(λ+r)(ρ-r) + k^*(r-2ρ))) + k^*(r-2ρ)· U((1/2)(r/ρ-1), 3/2, k^*^2ρ/σ^2)·(2 k^*ρ(k^*-c(λ+r)) - σ^2) = 0.
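The function h and the free boundary condition are straightforward to evaluate numerically; the sketch below (Python with scipy) implements h as written and, for the parameter values of the example in the next subsection (r=10, ρ=5, σ=1, λ=1, c=0.001), checks the unique real root of the cubic to which the condition reduces there.

```python
import numpy as np
from scipy.special import hyperu, hyp1f1, gamma

def h(x, r=10.0, rho=5.0, sigma=1.0):
    """Auxiliary function h defined above; scipy's hyperu is the confluent
    hypergeometric function U (second kind), hyp1f1 is M (first kind)."""
    a, z = r / (2.0 * rho), rho * x ** 2 / sigma ** 2
    if x <= 0:
        return hyperu(a, 0.5, z)
    pref = 2.0 * np.pi / (gamma(0.5 + a) * gamma(0.5))
    return pref * hyp1f1(a, 0.5, z) - hyperu(a, 0.5, z)

# Free boundary of the example below: unique real root of the cubic
# 10000 k^3 - 110 k^2 - 11 = 0.
roots = np.roots([10000.0, -110.0, 0.0, -11.0])
k_star = roots[np.argmin(np.abs(roots.imag))].real
print("k* = %.6f" % k_star)   # 0.107028
```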
min(-(1/2)σ^2 u'' + ρ x u' + r u, u-ũ-ψ)=0, x∈ℝ,

where

ψ(x)=x/(r+λ) - 0.001,

and r, ρ, σ and λ are as defined in the previous example. Using (<ref>) and (<ref>), the exact solution can be written as

u(x) = (e^5x^2-5k^*^2/(√(5π)(440 k^*^2+44))) × { E_3/2(5x^2), x≤ 0; 2(√(5π) x (erf(√5 x)+1)+e^-5x^2), 0<x≤ k^*; E_3/2(5x^2)+(√(5π)(440 k^*^2+44)/e^5x^2-5k^*^2)(x/11-1/1000), x>k^* },

where

E_n(x)=∫_1^∞ e^-tx/t^n dt, erf(x)=(2/√(π))∫_0^x e^-t^2 dt,

and the point k^*=0.107028 is the free boundary point, the unique real root of the polynomial equation

10000 k^*^3 - 110 k^*^2 - 11=0.

In Figure <ref> the exact solution u(x), the obstacle function ũ+ψ and the free boundary point k^* are presented. In Figure <ref> the difference between the exact solution of the stationary problem and the numerical solution of problem (<ref>) (computed with 50×50 discretization points and after 5 iteration steps) is shown at times T=0.5, T=1 and T=3.

§ ACKNOWLEDGMENTS

This publication was made possible by NPRP grant NPRP 5-088-1-021 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

§ CONFLICT OF INTERESTS

The authors declare that there is no conflict of interests regarding the publication of this paper.
{ "authors": [ "Avetik Arakelyan", "Rafayel Barkhudaryan", "Henrik Shahgholian", "Mohammad M. Salehi" ], "categories": [ "math.NA", "35R35, 35D40, 65M06, 91G80" ], "primary_category": "math.NA", "published": "20170427094530", "title": "Numerical treatment to a non-local parabolic free boundary problem arising in financial bubbles" }
We used the Gemini Multi-Object Spectrograph (GMOS) Integral Field Unit (IFU) to map the gas distribution, excitation and kinematics within the inner kiloparsec of four nearby low-luminosity active galaxies: NGC 3982, NGC 4501, NGC 2787 and NGC 4450. The observations cover the spectral range 5600-7000 Å at a velocity resolution of 120 km s^-1 and spatial resolutions ranging from 50 to 70 pc at the galaxies. Extended emission in Hα, [N ii]λλ6548,6583 and [S ii]λλ6716,6730 over most of the field of view is observed for all galaxies, while only NGC 3982 shows extended [O i]λ6300 emission. The Hα equivalent widths (W_Hα), combined with the [N ii]/Hα line ratios, reveal that NGC 3982 and NGC 4450 harbor Seyfert nuclei surrounded by regions with LINER excitation, while NGC 2787 and NGC 4501 harbor LINER nuclei. NGC 3982 shows a partial ring of recent star formation at 500 pc from the nucleus, while in NGC 4501 a region at 500 pc west of the nucleus shows LINER excitation but has been interpreted as an aging H ii region with the gas excitation dominated by shocks from supernovae. The line-of-sight velocity field of the gas shows a rotation pattern for all galaxies, with deviations from pure disk rotation observed in NGC 3982, NGC 4501 and NGC 4450. For NGC 4501 and NGC 4450, many of these deviations are spatially coincident with dust structures seen in optical continuum images, leading to the interpretation that the deviations are due to shocks in the gas traced by the dust. A speculation is that these shocks lead to loss of angular momentum, allowing the gas to be transferred inwards to feed the AGN. In the case of NGC 2787, instead of deviations in the rotation field, we see a misalignment of 40^∘ between the orientation of the line of nodes of the gas rotation and the photometric major axis of the galaxy. Evidence of compact nuclear outflows is seen in NGC 4501 and NGC 4450.

galaxies: individual (NGC 3982, NGC 4501, NGC 2787, NGC 4450) – galaxies: Seyfert – galaxies: kinematics and dynamics

§ INTRODUCTION

Understanding how mass is transferred from galactic scales down to nuclear scales to feed the super-massive black hole (SMBH) in the nuclei of galaxies has been the goal of many theoretical studies and simulations <cit.>. These works have shown that non-axisymmetric potentials efficiently promote gas inflow toward the inner kiloparsec of galaxies <cit.>, resulting in a gas reservoir that can trigger and maintain an Active Galactic Nucleus (AGN) and/or nuclear star formation.

Nuclear bars and associated spiral arms are indeed frequently observed in the inner kiloparsec of active galaxies <cit.>. <cit.> found a strong correlation between the presence of nuclear dusty structures (filaments, spirals, discs and bars) and nuclear activity in a sample of early-type galaxies, suggesting that a reservoir of gas and dust is a necessary condition for a galaxy to harbor an AGN. This correlation between the presence of dusty structures and nuclear activity supports the hypothesis that these structures represent a fueling mechanism for the SMBH, allowing the gas to lose angular momentum and stream towards the center of the galaxies. Previous studies by our group <cit.> have revealed kinematic features associated with nuclear spirals, bars or filaments that are consistent with gas inflow to the inner tens of parsecs of active galaxies.
Motivated by these results, we have mapped the gaseous kinematics of four nearby AGNs showing dusty nuclear spirals, with the goal of looking for correlations between these spirals and the gas kinematics. The galaxies NGC 3982, NGC 4501, NGC 2787 and NGC 4450 were selected from the work by <cit.>, which was based mostly on low-luminosity AGNs. This study is part of a larger project in which we are obtaining optical integral field spectroscopic observations of a complete X-ray selected sample with the aim of investigating feeding and feedback mechanisms over a range in AGN luminosity <cit.>.

This work is organized as follows: Section 2 presents a description of the observations and of the data reduction procedure, and Sec. 3 shows the emission-line flux distributions, line-ratio maps, velocity fields and velocity dispersion maps. In Sec. 4 we model the velocity fields, and in Sec. 5 we discuss the results for each galaxy. Finally, in Sec. 6, we present the main conclusions of this work.

§ OBSERVATIONS AND DATA REDUCTION

As pointed out above, the four active galaxies of this study were selected from the sample of <cit.> for showing dusty nuclear spirals in Hubble Space Telescope (HST) optical images through the filter F606W, revealed in "structure maps", which are aimed at enhancing fine structural features in single-filter images <cit.>. The observations were obtained using the Gemini Multi-Object Spectrograph Integral Field Unit <cit.> at the Gemini North Telescope in 2007, 2008 and 2011. We observed the wavelength range 5600-7000 Å, which includes the strongest emission lines, such as Hα, [N ii]λλ6548,6584, [S ii]λλ6717,6731 and [O i]λ6300, using the IFU in the two-slit mode. The R400 grating was used in combination with the r(530 nm) filter, resulting in a spectral resolution of 2.5-2.7 Å, as obtained from the full width at half maximum (FWHM) of the arc lamp lines used to wavelength calibrate the spectra, translating to ∼100-125 km s^-1 in velocity space.

The data comprise three adjacent IFU fields for NGC 3982, NGC 4501 and NGC 4450, each covering 5^''×7^'', while for NGC 2787 we used two adjacent IFU fields. In order to remove cosmic rays and bad pixels, small spatial offsets were applied between the individual exposures at each position. The final Fields of View (FoV), obtained after mosaicking the individual cubes for each galaxy, are approximately 7^''×15^'' for NGC 3982 and NGC 4501, 7^''×9^'' for NGC 2787 and 20^''×5^'' for NGC 4450, with the longest dimension of the FoV oriented along the major axis of each galaxy. The total exposure time for each galaxy ranges from 75 to 82 minutes. Table <ref> shows the log of the observations, as well as relevant information on each galaxy.

Data reduction was performed using the IRAF[IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation] packages provided by the Gemini Observatory and specifically developed for the GMOS instrument. First, the bias was subtracted from each image, followed by flat-fielding and trimming of the spectra. The wavelength calibration was applied to the science data using the spectra of arc lamps as reference, followed by subtraction of the underlying sky emission. To obtain a relative flux calibration we constructed a sensitivity function using the spectra of standard stars provided by the observatory as default calibrations.
Feige 66 was used as the standard star for NGC 4501, while Feige 34 was used for NGC 2787 and BD+28d4211 was used for NGC 3982 and NGC 4450. Finally, separate data cubes were obtained for each exposure, which were then aligned and median combined into a single cube for each galaxy. All individual cubes were created with 0.05^'' square spaxels during the data reduction, but for NGC 4450 the final data cube was rebinned to 0.15^'' square spaxels in order to increase the signal-to-noise ratio of the spectra and allow the fitting of the emission-line profiles at locations away from the nucleus. The spatial resolution of the final data cubes, presented in Table <ref>, is in the range 50-70 pc; it was obtained from the average FWHM of the flux distributions of field stars seen in the acquisition images in the r band, adopting the distances quoted in the fourth column of the same table. The uncertainty in the spatial resolution is about 10% for all galaxies, estimated as the standard deviation of the average FWHM.

§ RESULTS

Figure <ref> shows a large scale image and typical spectra for each galaxy. In the top-right corner of each image, we show as an inset the structure map of the inner 20^''×20^''. The orientation is shown by the arrows in the top-left corner of each image. The central box in each HST structure map shows the IFU FoV. These structure maps were constructed following <cit.>, using HST broad band images obtained through the F606W filter <cit.>, and are aimed at highlighting the dust structures present in the inner region of the galaxies.

The structure maps reveal dust structures for all galaxies: NGC 3982 clearly shows dust spiral structures from 10^'' down to the nucleus; NGC 4501 shows much more dust to the northeast than to the southwest; NGC 2787 shows elliptical dusty partial rings that seem to be concentric; and NGC 4450 presents a complex dust distribution, with more dust to the east of the nucleus and a dusty blob to the northwest.

The right panels of Fig. <ref> show two spectra for each galaxy, obtained within a circular aperture of 0.25^'' radius (corresponding to 5 pixels) for NGC 3982, NGC 4501 and NGC 2787, and of 0.45^'' radius (corresponding to 3 pixels) for NGC 4450. The first spectrum corresponds to the nucleus, and the other was obtained at the position labeled "A" in the flux distribution maps of Figures <ref>-<ref>, chosen to represent a typical extra-nuclear spectrum. The strongest emission lines are identified.

§.§ Measurements

In order to measure the emission-line flux distributions and gas kinematics, we fitted the line profiles of Hα+[N ii]λλ6548,6584, [S ii]λλ6717,6731 and [O i]λ6300 with single Gaussian curves, using a modified version of the profit routine <cit.>. This routine performs the fit of the observed profile using the MPFITFUN routine <cit.>, via a non-linear least-squares fit. In order to reduce the number of free parameters, we adopted the following constraints: the [N ii]+Hα emission lines were fitted by keeping the kinematics of the two [N ii] lines tied and fixing the [N ii]λ6584/[N ii]λ6548 intensity ratio to its theoretical value (3). The [S ii] doublet was fitted by keeping the kinematics of the two lines tied, while the [O i]λ6300 line was fitted individually, with all parameters free. For the Hα line all parameters were allowed to vary independently. In all cases, the continuum emission was fitted by a linear function, as the spectral range of each line fit was small.

NGC 4450 presents a known broad double-peaked Hα line <cit.>, also seen in the nuclear spectrum of our GMOS data (Fig.
§.§ WHAN diagram and excitation map

In order to map the gas excitation, line-ratio diagnostic diagrams are frequently used, the most popular of them being the BPT diagrams <cit.>. Integral field spectroscopy allows the construction of two-dimensional diagnostic diagrams <cit.>. As our observations do not cover the [O iii] λ5007 and Hβ lines, we use an alternative diagnostic diagram recently proposed by <cit.>, which makes use only of the Hα and [N ii] λ6583 emission lines. This diagram plots the Hα equivalent width against the [N ii]/Hα line ratio, and is usually referred to as the WHAN diagram <cit.>. The WHAN diagram allows the separation of Starbursts, Seyfert galaxies (or strong AGN: sAGN), weak AGN (wAGN, defined as having Hα equivalent widths W(Hα) between 3 Å and 6 Å), and retired galaxies (RGs), that is, galaxies having W(Hα) smaller than 3 Å, which are not active but display weak emission lines produced by radiation from post-AGB stars. A particular advantage of the WHAN diagram is that it allows the separation between wAGN and RG, which overlap in the LINER region of traditional BPT diagnostic diagrams.

The spatially resolved WHAN diagnostic diagrams for each galaxy are shown in the top-central panels of Figures <ref>–<ref> and the top-right panels show the resulting excitation maps: distinct regions of the FoV color-coded according to the excitation derived from the WHAN diagram. White regions in the excitation maps correspond to locations where we could not fit one or both emission lines. The color bar shows the identification of the distinct excitation classes. The excitation map for NGC 3982 shows sAGN values within the inner 1^'' (82 pc), while the SF excitation regime is observed in a ring at 4-6^'' from the nucleus and a mixture of RG, sAGN and wAGN is observed in between these regions. The excitation maps of NGC 4501 and NGC 2787 show unresolved regions of wAGN excitation at their nuclei, surrounded by RG excitation regions. A similar behavior is observed for NGC 4450, but showing sAGN values at the nucleus. In addition, NGC 4501 presents sAGN values within an unresolved region at ∼6^'' west of the nucleus (close to the border of the FoV).
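For reference, the per-spaxel classification behind the excitation maps can be summarized by the following minimal sketch; the numerical thresholds are those of the WHAN scheme described above (W(Hα) = 3 Å and 6 Å, and a star-formation/AGN division at log([N ii]/Hα) = -0.4), and the function is an illustration rather than the code used to build the maps:

import numpy as np

def whan_class(w_ha, n2_ha):
    # w_ha: Halpha equivalent width (Angstrom)
    # n2_ha: [N ii]6583/Halpha flux ratio
    if w_ha < 3.0:
        return 'RG'                      # retired galaxy (post-AGB ionization)
    if np.log10(n2_ha) < -0.4:
        return 'SF'                      # star formation
    return 'sAGN' if w_ha > 6.0 else 'wAGN'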
§.§ Flux distributions

The flux distributions in the [N ii] λ6583 emission line for each galaxy are shown in the middle-left panels of Figures <ref>–<ref>. Black regions represent masked locations where the uncertainty in the flux is larger than 30% and where we were not able to obtain good fits of the line profiles, due to the low S/N ratio or non-detection of the corresponding emission line. Similar maps for the Hα, [S ii] and [O i] emission lines are shown in appendix A (Figures <ref>–<ref>). The [O i] λ6300 flux maps show extended emission only for NGC 3982 and we thus do not show the corresponding flux maps for the other galaxies. All galaxies present the gas emission peak at the nucleus for all emission lines.

The NGC 3982 flux maps for Hα and [N ii] show extended emission over the whole GMOS-IFU FoV (up to 574 pc (7^'') from the nucleus), with a partial ring of enhanced gas emission surrounding the nucleus at 328-492 pc (4-6^'') from it, attributed to star forming regions. The [S ii] flux distribution also shows emission associated with the ring, but most of the [S ii] emission is concentrated within r=164-328 pc (2-4^'') from the nucleus. The [O i] emission is observed only within r=123 pc (1.5^'') from the nucleus. NGC 4501 presents extended emission for Hα and [N ii] up to 486 pc (6^'') from the nucleus. The [S ii] flux map is more concentrated, with emission seen only within the inner 162–243 pc (2-3^''). All maps show the most extended emission along the northwest-southeast direction, approximately along the major axis of the galaxy as seen in Fig. <ref>. In addition, the Hα map presents a small region with enhanced emission at ∼486 pc (6^'') south-west of the nucleus, attributed to an H ii region. Extended Hα and [N ii] emission over the whole FoV – up to 315 pc (5^'') from the nucleus – is observed for NGC 2787, while the [S ii] emission is more concentrated within the inner 126–189 pc (2-3^''). For NGC 4450, the highest intensity levels show flux distributions slightly more elongated along the east-west direction, approximately perpendicular to the orientation of the major axis of the galaxy. At locations beyond 162–243 pc (2-3^'') from the nucleus, the Hα and [N ii] emission show two spiral arms, one to the north and another to the south of the nucleus, extending up to 810 pc (10^'') from it.

§.§ Velocity fields

The [N ii] velocity field for each galaxy is presented in the mid-central panels of Figures <ref>–<ref>, with white regions corresponding to masked locations due to bad fits. Similar maps for the Hα, [S ii] and [O i] emission lines are shown in Figures <ref>–<ref>. The Hα and [N ii] velocity fields for NGC 3982 present a rotation pattern, with blueshifts observed to the north (thus this side is approaching) and redshifts to the south (and this side is receding), with a projected velocity amplitude of ≈ 100 km s^-1. NGC 4501 also presents velocity fields consistent with gas rotating in the plane of the galaxy, with blueshifts to the west and redshifts to the east, while also showing deviations from pure rotation indicating the presence of non-circular motions at some locations. The velocity fields for all lines are similar, with deviations from rotation including excess redshifts at ∼162 pc (2^'') southwest of the nucleus, along the minor axis of the galaxy (where the velocities reach values of up to 150 km s^-1), and excess redshifts in a marginally resolved region at ∼486 pc (6^'') west of the nucleus. The origin of these structures will be further discussed in Sec. <ref>. The velocity fields for NGC 2787 are consistent with pure rotation in a disk oriented along position angle PA∼50/230^∘ east of north, with blueshifts to the southwest and redshifts to the northeast and a high projected velocity amplitude of ∼250 km s^-1. NGC 4450 also presents velocity fields indicating rotation in the disk of the galaxy, with blueshifts to the south and redshifts to the north of the nucleus, with a projected velocity amplitude of ∼150 km s^-1. In addition to the rotation pattern, excess blueshifts of up to -150 km s^-1 are observed at 1^'' east of the nucleus, in a region comparable in size to the spatial resolution of our data.
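The velocity fields above and the velocity dispersion maps discussed next follow directly from the fitted Gaussian centroids and widths. A minimal sketch of this conversion, assuming numpy arrays of centroid wavelengths and widths in Å, an illustrative rest wavelength, and the instrumental FWHM of 2.5-2.7 Å quoted in Sec. 2, is:

import numpy as np

C_KMS = 2.99792458e5      # speed of light (km/s)
LAM0 = 6583.45            # rest wavelength of [N ii] (Angstrom), illustrative
FWHM_INST = 2.6           # instrumental FWHM (Angstrom), from the arc lamps

def velocity_map(lam_cen, v_sys=0.0):
    # centroid wavelength map -> line-of-sight velocity map (km/s)
    return C_KMS * (lam_cen - LAM0) / LAM0 - v_sys

def sigma_map(sig_lam):
    # Gaussian width map -> velocity dispersion map (km/s), corrected in
    # quadrature for the instrumental broadening; widths below the
    # instrumental value cannot be corrected and are masked as NaN.
    sig_inst = FWHM_INST / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    var = sig_lam ** 2 - sig_inst ** 2
    var = np.where(var > 0.0, var, np.nan)
    return C_KMS * np.sqrt(var) / LAM0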
§.§ Velocity dispersion maps

The [N ii] velocity dispersion map for each galaxy is presented in the mid-right panels of Figures <ref>–<ref>, with black regions corresponding to masked locations due to bad fits. Similar maps for the Hα, [S ii] and [O i] emission lines are shown in Figures <ref>–<ref>. NGC 3982 shows σ values ranging from ∼50 km s^-1 to ∼130 km s^-1, with the highest values observed within 82 pc from the nucleus and the smallest values in the partial circumnuclear star forming ring at 328-492 pc (r ≈ 4-6^'') from the nucleus. It can also be noticed that the forbidden lines show slightly larger σ values than Hα, suggesting that they trace emission from kinetically "hotter" gas. The σ values for NGC 4501 are higher than ∼120 km s^-1 over a large part of the FoV, up to distances of 324 pc (4^'') from the nucleus. Higher values of ∼150 km s^-1 are observed in a small patch (89.1 pc×153.9 pc) at 121.5 pc (1.5^'') south of the nucleus. This region is co-spatial with the excess redshifts observed in the velocity fields. On the other hand, very small σ values (∼30-50 km s^-1) are observed at 486 pc (6^'') west of the nucleus, where another region of excess redshifts is observed in Figs. <ref> and <ref>. As for NGC 3982, the forbidden lines show an average σ value larger than that of Hα. NGC 2787 shows the highest σ values of up to 150 km s^-1 within 63 pc from the nucleus for all emission lines. Outside the nucleus there is an asymmetry in the distribution of σ values: while to the south of the nucleus the lowest values of ∼30-60 km s^-1 are observed, to the northeast of the nucleus σ≥120 km s^-1. For NGC 4450 the highest σ values for all emission lines are observed within 81 pc from the nucleus. The Hα and [S ii] show σ∼100 km s^-1, while some higher values (∼200 km s^-1) are observed for [N ii] at these locations. The smallest σ values are σ∼50-70 km s^-1, observed in the spiral arms.

§.§ Electron density

The ratio of the [S ii] λ6716/λ6731 fluxes was used to obtain the electron density N_e using the temden routine of the nebular package from stsdas/iraf, assuming an electron temperature for the ionized gas of 10 000 K. The bottom-left panels of Figures <ref>–<ref> show the gas electron density distribution for all galaxies. The highest electron density values of about 3000 cm^-3 are found within the inner 82 pc (1^'') of NGC 3982. For the other galaxies the density values range from 600 to 1500 cm^-3, and are similar to those obtained in similar studies of the inner kiloparsec of active galaxies <cit.>.
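The stsdas/iraf temden task used here has a present-day equivalent in the PyNeb package; a minimal sketch of the same calculation is given below (the getTemDen call follows PyNeb's documented interface, but this is offered as an assumption for illustration and is not the code used in this work):

import pyneb as pn

s2 = pn.Atom('S', 2)                       # the S+ ion emitting the [S ii] doublet
ratio = 1.15                               # measured F(6716)/F(6731), illustrative
n_e = s2.getTemDen(int_ratio=ratio, tem=1.0e4,
                   wave1=6716, wave2=6731) # electron density (cm^-3) at T_e = 10^4 K
print(n_e)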
§.§ Stellar kinematics

In order to obtain measurements of the Line-of-Sight Velocity Distribution (LOSVD) of the stars, we used the Penalized Pixel-Fitting (pPXF) routine of <cit.> to fit the stellar absorptions in the spectral range 5600–6900 Å. The fitting of the galaxy spectra is done using template spectra, under the assumption that the LOSVD of the stars is well reproduced by Gauss-Hermite series. We used selected simple stellar population synthetic spectra from the <cit.> models, which have spectral resolution similar to that of our GMOS data <cit.>; a minimal example of such a fit is sketched at the end of this subsection. The corresponding stellar velocity field and velocity dispersion (σ_*) maps for each galaxy are shown in the bottom-central and bottom-right panels of Figures <ref>–<ref>. White/black regions in the velocity/velocity dispersion maps correspond to masked locations where the uncertainties of the measurements are larger than 50 km s^-1. The stellar velocity fields of all galaxies show signatures of rotation.

For NGC 3982, the stellar kinematic maps are very noisy, but the stellar velocity field shows a rotating-disk trend similar to that observed for the gas, with blueshifts observed to the northeast of the nucleus and redshifts to the southwest of it. We were able to measure the stellar kinematics only within the inner 2^'' of NGC 4501, which shows a clear rotation pattern with blueshifts observed to the northwest of the nucleus and redshifts to the southeast of it, with a velocity amplitude of ∼150 km s^-1. A similar velocity amplitude is observed for NGC 2787, with blueshifts observed to the west and redshifts to the east, showing a signature of rotation similar to that observed in the gas velocity field. The orientation of the line of nodes of the gas and stars seems to be misaligned by 30–40^∘. However, we were not able to fit the stellar absorptions at most locations to the south of the nucleus of this galaxy, possibly due to the larger extinction on this side, as seen in the structure map (top-left panel of Figure <ref>). NGC 4450 shows a clear rotating-disk pattern with the line of nodes oriented along the north-south direction. The deviations from pure rotation seen in the gas velocity field are not observed in the stellar velocity field of this galaxy. The stellar velocity dispersion map of NGC 3982 shows values smaller than 100 km s^-1 at most locations, smaller than those observed for the gas at the same locations. NGC 4501 shows σ_* values overall larger than those observed for the [N ii]-emitting gas, and a partial low-σ_* ring (σ_*<100 km s^-1) is observed surrounding the nucleus at 1^''. Similar structures have been observed for other active galaxies and attributed to intermediate-age (100 Myr – 2 Gyr) stellar populations <cit.>. For NGC 2787, the σ_* map shows values larger than 150 km s^-1 at most locations, suggesting that, although the stellar velocity field shows a clear rotation pattern, the stellar motions are dominated by random orbits instead of ordered rotation in the plane of the disk. Finally, NGC 4450 shows σ_* values in the range from 70 to 130 km s^-1, similar to those observed at extra-nuclear regions for the gas.
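A minimal pPXF call consistent with the procedure above is sketched below; lam_range, galaxy_lin and templates are placeholders for the observed wavelength range, the observed spectrum of one spaxel, and the log-rebinned <cit.> model spectra, and only the first two moments (V, σ) are fitted for simplicity (argument names follow the publicly available ppxf package, whose defaults may differ between versions):

import numpy as np
from ppxf.ppxf import ppxf
from ppxf import ppxf_util as util

# log-rebin the observed spectrum to a constant velocity step
galaxy, log_lam, velscale = util.log_rebin(lam_range, galaxy_lin)
galaxy /= np.median(galaxy)                 # normalize to order unity
noise = np.full_like(galaxy, 0.02)          # constant noise spectrum (placeholder)

start = [0.0, 150.0]                        # initial (V, sigma) in km/s
pp = ppxf(templates, galaxy, noise, velscale, start,
          moments=2,                        # fit V and sigma only
          degree=4)                         # additive polynomial for the continuum
print(pp.sol)                               # best-fitting [V, sigma]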
§ ROTATION DISK MODEL

As seen in Figs. <ref>–<ref>, both the gas and the stellar velocity fields of all galaxies show a rotation pattern, with the gas presenting some deviations from pure rotation due to non-circular motions. Here we model only the gas velocity fields with a rotation model, as the stellar velocity fields are much noisier but present similar rotation patterns. We used a simple analytical model, assuming that the gas moves in circular orbits in the plane of the galaxy <cit.>, as done in previous works by our group <cit.>. The expression for the circular velocity is given by

V_mod(R,Ψ)=υ_s+ARcos(Ψ-Ψ_0)sin(i)cos^p(i)/{R^2[sin^2(Ψ-Ψ_0)+cos^2(i)cos^2(Ψ-Ψ_0)]+C_o^2cos^2(i)}^p/2,

where A is the velocity amplitude, Ψ_0 is the position angle of the line of nodes, C_o is a concentration parameter, defined as the radius where the rotation curve reaches 70% of the velocity amplitude, i is the disk inclination relative to the plane of the sky (i=0 for a face-on disk), R is the radial distance to the nucleus projected in the plane of the sky, with the corresponding position angle Ψ, and υ_s is the systemic velocity of the galaxy. The parameter p measures the slope of the rotation curve where it flattens, in the outer region of the galaxy, and is limited to 1 ≤ p ≤ 3/2. For p=1 the rotation curve at large radii is asymptotically flat, while for p=3/2 the system has a finite mass.

We used an IDL[http://www.harrisgeospatial.com/ProductsandSolutions/Geospatial Products/IDL.aspx]-based routine to fit the above equation to the observed [N ii] λ6583 velocity fields, using the MPFITFUN routine <cit.> to perform the non-linear least-squares fit, in which initial guesses are given for each parameter and the routine returns their values for the best-fitting model. As all lines show similar velocity fields, we have chosen the [N ii] kinematics to perform the fit, since [N ii] λ6583 is the strongest line observed at most locations for all galaxies. During the fit, the location of the kinematical center was fixed to the position of the peak of the continuum emission and the parameter p was fixed to p=1.5. In Figures <ref>–<ref> we show the [N ii] velocity field (left panel), the resulting model (middle-left panel), the residual map (middle-right panel) and a structure map (right panel) for NGC 3982, NGC 4501, NGC 2787 and NGC 4450, respectively, while the resulting fitted parameters are shown in Table <ref>. The values of the disk inclination and the amplitude of the rotation curve are coupled, meaning that somewhat higher or lower inclinations give similar fits for correspondingly somewhat smaller or larger amplitudes, respectively. A similar coupling is observed between i and C_o, and these values should thus be considered with caution.
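In Python, an equivalent of this fitting procedure can be sketched as follows, with scipy taking the role of MPFITFUN; the coordinate convention (position angles measured with arctan2 rather than from north), the synthetic test field and the initial guesses are illustrative, while p is fixed to 1.5 and the kinematical center to the origin, as in the text:

import numpy as np
from scipy.optimize import least_squares

def v_model(x, y, vs, A, psi0, c0, inc, p=1.5):
    # Circular-velocity field of Bertola et al. (1991) on sky coordinates
    # (x, y) centered on the kinematical center; angles in radians.
    R = np.hypot(x, y)
    psi = np.arctan2(y, x)
    cosd, sind = np.cos(psi - psi0), np.sin(psi - psi0)
    num = A * R * cosd * np.sin(inc) * np.cos(inc) ** p
    den = (R ** 2 * (sind ** 2 + np.cos(inc) ** 2 * cosd ** 2)
           + c0 ** 2 * np.cos(inc) ** 2) ** (p / 2.0)
    return vs + num / den

def residuals(theta, x, y, v_obs):
    return (v_model(x, y, *theta) - v_obs).ravel()

# synthetic velocity field on a spaxel grid (arcsec), for demonstration
x, y = np.meshgrid(np.linspace(-3, 3, 30), np.linspace(-7, 7, 70))
true = [1000.0, 200.0, np.radians(140.0), 1.0, np.radians(50.0)]
v_obs = v_model(x, y, *true) + np.random.normal(0.0, 5.0, x.shape)

guess = [950.0, 150.0, np.radians(120.0), 0.5, np.radians(40.0)]
fit = least_squares(residuals, guess, args=(x, y, v_obs))
vs, A, psi0, c0, inc = fit.x

The coupling between i and A noted above manifests itself here as a nearly degenerate direction of the least-squares problem, which is why the fitted inclinations should be interpreted with care.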
§ DISCUSSION

In this section we present a brief review of what is already known for each galaxy of our sample and discuss our results in comparison (or addition) to those of previous works.

§.§ NGC 3982

NGC 3982 (or UGC 6918) is part of the Ursa Major cluster. Optical and near-infrared broad band images reveal the presence of a small nuclear bar extending by 1.0^'' along PA∼30^∘ east of north, while at larger scales the galaxy displays a multi-armed spiral pattern, seen in a B-I color index map and Hα narrow-band images <cit.>. Multiple spiral arms are also seen at small scales, as revealed by the HST F606W image <cit.>. At radio wavelengths, a weak elongated feature of 4 kpc length is observed along the north-south direction at 6 cm, plus unresolved nuclear emission, probably due to the AGN <cit.>. The radiation field of the AGN seems also to heat the dust close to the nucleus, as revealed by a very red near-IR color <cit.>. In addition, the mass of molecular gas is larger than the stellar mass within the inner 1^'×1^' <cit.>.

The WHAN diagram of NGC 3982 (Fig. <ref>) confirms the signature of a strong AGN (sAGN) at the nucleus within the inner arcsec (82 pc), in agreement with the Seyfert 2 classification <cit.>. This region also presents a higher velocity dispersion (σ≥130 km s^-1) than its surroundings (σ≤80 km s^-1) – see Fig. <ref> – which could suggest the presence of a mild unresolved nuclear outflow. In addition, strong AGN signatures are also observed at about 328 pc (4^'') from the nucleus, at locations that surround star forming regions in the ring. We attribute this signature to an underlying young/intermediate-age stellar population with strong Hα absorption, leading to an underestimate of the Hα flux and thus to a large [N ii]/Hα line ratio. This could also explain the weak AGN signatures at the surrounding regions, where the Hα fluxes are weak. Another possibility to explain this emission could be the presence of post-AGB stars, as already invoked to explain the LIER emission in large samples of galaxies <cit.>. The star forming regions identified in the flux maps and already known <cit.> are properly classified as "Starburst" by the WHAN diagram, although some "noise" is observed. Large scale IFU observations show that the stellar and gas velocity fields are consistent with regular rotation <cit.>, while our small scale gas kinematics suggests the presence of deviations from pure rotation, as observed in Fig. <ref>.

By comparing the fitted parameters from the rotation disk model (Table <ref>) with those found in the literature, we observe a large discrepancy between the values for the large scale disk and ours. The major axis of the galaxy is oriented along Ψ_0=25.1^∘, as quoted in the Hyperleda database <cit.>. <cit.> used observations at 8 μm and 500 μm with the Herschel telescope and found that the major axis of the mid-infrared emission is oriented along Ψ_0=138^∘, which is ∼35^∘ smaller than the 173^∘±2^∘ derived from our internal gas kinematics, while <cit.> present Hα and stellar kinematics measured over a 1^' FoV from PPak integral field spectroscopy and find Ψ_0=191.6^∘±0.5^∘, which is about 20^∘ larger than ours. These discrepancies are probably due to the distinct FoVs of the observations and the complex gas motions present within the inner few hundred parsecs of the galaxy. The systemic velocity of the galaxy obtained here is in good agreement with that of <cit.> and that quoted in the Hyperleda database.

§.§ NGC 4501

NGC 4501 belongs to the Virgo cluster. High-resolution interferometric observations of the ^12CO (J=1-0) emission in the central 5 kpc using the Nobeyama Millimeter Array show two molecular gas structures: (i) a nuclear concentration within the inner 5^'' with a mass of 1.3×10^8 M_⊙, showing non-circular motions, and (ii) large scale spiral arms, along which molecular gas streaming motions towards the center are observed <cit.>. Multiple large scale spiral arms and two small scale arms are observed in the B-band <cit.> and K-band <cit.> images. WFPC2 F606W HST images reveal nuclear dust spirals <cit.>. <cit.> used near-infrared integral field spectroscopy of the inner 3^''×3^'' of NGC 4501, obtained with the SINFONI instrument on the Very Large Telescope (VLT), to map the H_2 2.12 μm emission. They found that the H_2 flux distribution shows two components: an asymmetric nuclear component surrounded by two arcs, one to the northwest and the other to the southeast of the nucleus, which seem to be correlated with nuclear dust lanes seen in a WFPC2 F547M HST image and located inside the edges of two peaks seen in the CO image from <cit.>, indicating that the warm molecular gas traced by the H_2 line emission is closer to the nucleus than the cold molecular gas. In addition, <cit.> report that the spectra of NGC 4501 show no sign of ionized gas emission (e.g. Brγ). The stellar kinematics of the inner 3^''×3^'' shows regular rotation with the line of nodes oriented along PA=140^∘ east of north, while the H_2 kinematics shows additional non-circular motions <cit.>. The authors analyze the H_2 kinematics based on a residual map, constructed by subtracting a rotating disk model from the observed velocity field. This map shows two main structures: one observed in blueshift to the northwest of the nucleus, co-spatial with a dust spiral arm, and another to the southwest of the nucleus seen in redshift.
<cit.> interpret these structures as being due to outflows from the nucleus. Recently, <cit.> used the same GMOS-IFU data of NGC 4501 used in this work (from the Gemini archive) to study the gas kinematics and stellar populations. However, their analysis is presented only for the inner 6.4^''×5.4^'' nuclear region. The velocity field and FWHM map for the [N ii] λ6583 emission line shown in <cit.> are consistent with ours, although their maps are much "noisier" than ours. <cit.> found that old stellar populations dominate the continuum emission from the inner region of the galaxy. They conclude that the gas kinematics is dominated by non-circular motions and is reproduced by an exponential disk model, with a maximum expansion velocity of 25 km s^-1 and major axis along Ψ_0≈137^∘. The authors also argue that the kinematics of the Na D λ5892 absorption is consistent with outflowing material from the center of NGC 4501 along two inner pseudo-spiral arms. Our orientation for the line of nodes of the gas velocity field is also in good agreement with that quoted in the Hyperleda database (Ψ_0=138^∘) for the large scale disk, as well as with those obtained for the central region from the stellar kinematics of the inner 3^''×3^'' <cit.> and the Hα velocity field of the inner 6.4^''×5.4^'' <cit.>, while the systemic velocity is about 25 km s^-1 smaller than that obtained from the optical measurements and quoted in the Hyperleda database.

NGC 4501 is cataloged as harboring a Seyfert 2 nucleus <cit.>, while according to our WHAN diagram its nuclear emission corresponds to a "weak AGN". X-ray emission from the nucleus of this galaxy gives a low luminosity of L_2-10 keV≈3.5×10^39 erg s^-1 <cit.>. A similarly low luminosity is observed for the [O iii] λ5007 emission line, L_[O III]≈9.6×10^39 erg s^-1 <cit.>, while strong AGNs usually have larger luminosities. In addition, the nucleus of NGC 4501 falls in the region between Seyferts and LINERs in the [O iii] λ5007/Hβ vs. [N ii] λ6583/Hα diagnostic diagram, as shown in <cit.>. Thus, our observations suggest that the NGC 4501 nuclear activity is better classified as LINER, instead of Seyfert 2. An intriguing feature is seen at ∼6^'' west of the nucleus, where the WHAN diagram shows AGN values (very close to the corner – in the WHAN diagram – that separates Starbursts, strong and weak AGNs). We attribute this apparent sAGN excitation to the presence of a young stellar cluster, as indicated by a slight enhancement in the Hα flux map of Fig. <ref>, and to possible shocks due to supernova explosions, which would enhance the [N ii]/Hα line ratio <cit.>. Assuming the Hα emission originates from gas photo-ionized by young stars, and using the photo-ionization models of <cit.> together with the Hα equivalent width observed at position A, we estimate an age of >10 Myr, which is consistent with the presence of evolved stars.

§.§ NGC 2787

Using long-slit spectra obtained with HST STIS at parsec-scale resolution, <cit.> derived the mass of the SMBH as being about 10^8 M_⊙, by modeling the gas kinematics. HST and ground based broad band images show a very complex morphology, comprising a large inner disk, a nuclear bar oriented along PA∼-20^∘ and an off-plane dust disk in the central regions <cit.>. Inside the bar, the inner disk is tilted relative to the orientation of the stellar distribution <cit.>, while the large scale H i distribution is also found to be misaligned relative to the optical emission, suggesting the presence of a dark halo <cit.>.
Chandra 0.5–1.5 keV observations show unresolved nuclear emission consistent with a stellar origin, with only a small contribution from hot gas <cit.>. So far, no studies are available on the gas and stellar kinematics of the central region of NGC 2787. NGC 2787 shows a WHAN diagram (Fig. <ref>) with a wAGN signature observed at the nucleus and RG signatures elsewhere, confirming that its nuclear activity is of the LINER type <cit.>, with a weak broad Hα component observed in the nuclear spectrum (Fig. <ref>), as already detected previously <cit.>. The systemic velocity for NGC 2787 derived in Sec. <ref> is in reasonable agreement with that presented in the Hyperleda database (υ_s=606±40 km s^-1), while the orientation of the line of nodes that we have obtained is 37^∘ smaller than the PA of the major axis of the large scale disk listed in Hyperleda. On the other hand, it is known that NGC 2787 presents a complex structure in the central region <cit.> and thus a misalignment between the large and small scale disks can be expected. Indeed, the PA of the line of nodes we have derived for the circumnuclear gas kinematics, of 100^∘, is consistent with the orientation of the nuclear bar in HST images <cit.>. Another feature observed in the gas kinematics is an increase in the velocity dispersion at the nucleus, extending to ≈2^'' (126 pc) to the south of the nucleus, which could indicate a mild AGN outflow.

§.§ NGC 4450

NGC 4450 is an anemic spiral galaxy in the Virgo cluster. Its nuclear spectrum shows a weak broad double-peaked Hα profile, interpreted as the signature of the outer parts of a relativistic accretion disk <cit.>. Fabry-Perot observations at a seeing of ∼1.5^'' and a field of 1.8^' reveal a patchy Hα distribution and a perturbed velocity field, with the orientation of the line of nodes along PA=351^∘±9^∘ east of north and a steep velocity gradient of 200 km s^-1 around the nucleus <cit.>. A similar distribution is observed in H i emission <cit.>. Interferometric observations show only weak CO emission, and a total cold gas mass of ∼10^9 M_⊙ is observed for the whole galaxy <cit.>. The nuclear region of NGC 4450 shows two long dusty spirals in the main disk along with some flocculent structures, but with no star formation associated with the dusty spirals <cit.>. <cit.> present optical IFU observations of the inner 20^''×40^'' of NGC 4450. They found that the stellar velocity field is dominated by rotation, but is very perturbed, with the orientation of the line of nodes changing from 175^∘ at the center to 160^∘ at 15^''-25^'' from the nucleus. The Hα velocity field is misaligned relative to the stellar kinematics, with the orientation of the line of nodes along PA∼190^∘ east of north. <cit.> interpret this misalignment as being due to non-circular motions or to emission from gas located in a disk tilted relative to the stellar disk, produced by an accretion event or minor merger. The orientation of the line of nodes derived in Sec. <ref> (Ψ_0=192^∘) is in good agreement with that of the large scale disk <cit.>, and consistent with the orientation of the line of nodes observed for the Hα emission in the inner 20^'' <cit.>. A knot of residual blueshift (after the subtraction of the rotation model) at ≈1^'' (81 pc) to the east of the nucleus (the near side of the galaxy) and residual redshift at a similar distance to the west (the far side of the galaxy) possibly indicate the presence of a nuclear outflow in the east-west direction (the direction of the largest extent of the [N ii] flux map).
However, it should be noted that the sizes of the structures seen in blueshift and redshift in the residual maps are comparable to the spatial resolution of our data. Another feature of the gas kinematics that our measurements reveal is an increase in the [N ii] velocity dispersion at the nucleus, which could be due to a previous plasma ejection related to the compact nuclear outflow. Alternatively, the larger velocity dispersion at the nucleus could also be due to unresolved rotation. Our WHAN diagram for NGC 4450 shows strong AGN features in the inner 0.5^'', surrounded by a ring with a weak AGN signature. NGC 4450 was previously cataloged as harboring a LINER nucleus <cit.>, but it presents stronger X-ray emission than NGC 4501, which was originally classified as a Seyfert, and its WHAN diagram indicates that Seyfert is a better classification for its nuclear emission. <cit.> present a 0.3-8 keV flux of 1.19×10^-12 erg s^-1 cm^-2, which corresponds to a luminosity of L_0.3-8 keV≈3.6×10^40 erg s^-1, one order of magnitude larger than the values observed for NGC 2787 and NGC 4501. On the other hand, NGC 4450 shows a low [O iii] λ5007 luminosity <cit.>, smaller than commonly observed in sAGNs.

§.§ Residual gas velocity maps vs. nuclear spirals

The velocity residual maps (obtained as the difference between the observed velocity fields and the rotation model) for NGC 4501 and NGC 4450 show that many of their kinematic structures are spatially correlated with the dust features seen in the structure maps. The residual maps are shown in the central panels of Figs. <ref>–<ref>. The residual map for NGC 3982 shows some structures revealing the presence of non-circular motions, although only a few of these structures are correlated with dust features. For NGC 2787 (Fig. <ref>) the velocity residuals are small at all locations and no systematic structures are seen in the residual map, indicating that the adopted model is a good representation of the observed velocity field, which is dominated by the rotating disk component.

In order to better analyze the structures in the velocity residual maps, we assume that the spiral arms observed in the large scale images are trailing to determine the near and far sides of the disk, identified in the central panels of Figs. 6–9, which show the rotation model for each galaxy. We also show in these figures the structure maps at the same scale as the kinematic maps in order to verify possible correlations between dust and kinematic structures in the residual maps. This is motivated by previous results from our group, in which we have found an association between gas inflows and nuclear spiral arms <cit.>. The residual map for NGC 3982 shows that the highest blueshifts of up to ∼-50 km s^-1 are observed to the N-NE of the nucleus in the far side of the galaxy, in a region where the structure map shows a strong dust spiral arm. In addition, some redshifts are also observed associated with the inner part of the same spiral arm, but in the near side of the galaxy. A possible interpretation of these residuals is that they represent inflows of gas towards the nucleus. However, this interpretation should be taken with caution, as residuals of the order of 20–40 km s^-1 are observed also at other locations in the galaxy. We can thus only state with certainty that the velocity residuals are correlated with the dust structures seen in the nuclear region.
A correlation between the velocity residuals and the dust structures is also observed for NGC 4501, as can be seen by comparing the central-right and right panels of Fig. <ref>. Besides these correlations, a "redshifted blob" is observed at 1^''-3^'' SW of the nucleus. A similar structure was observed by <cit.> in a residual map for the H_2 kinematics, and interpreted as due to an outflow, together with an arc-shaped blueshifted outflow in the near side of the galaxy. These kinematic structures are also supported by <cit.>. The outflow interpretation for the redshifted blob is supported, in our observations, by an increase in the gas velocity dispersion at the location of the blob, as seen in Fig. <ref> and Fig. <ref>. In addition, some blueshifts to the west (in the far side of the galaxy) and redshifts to the east (in the near side of the galaxy) could be attributed to inflows towards the nucleus, but there are also similar residuals at other locations, and again we can only state that these residuals are associated with the dust structures.

Finally, for NGC 4450, the residual map shows blueshifts of up to -100 km s^-1, as well as similarly high redshifts. A disturbed kinematics of the gas in the central region of this galaxy has already been claimed by <cit.>, who showed that the Hα kinematics is misaligned relative to the stellar kinematics. Besides the usual correlation between the residuals and the dust structures, blueshifts of up to 150 km s^-1 at 0.5^'' NE of the nucleus and some redshifts observed at a similar distance to the SW could be interpreted as due to a bi-conical outflow, although this remains speculative, as similar residuals are observed at other locations. The presence of a nuclear outflow is also supported by the increased velocity dispersion observed within the inner 1^'' (Fig. <ref>).

In summary, the gas kinematics, although dominated by rotation, shows deviations correlated with dust structures. Such structures usually trace shocks in the gas, and we speculate that we are probing shocks that may lead to loss of angular momentum, allowing the gas to move inwards and feed the AGN at the nuclei of the galaxies.

§ CONCLUSIONS

We have mapped the ionized gas kinematics and flux distributions in the central kiloparsec of NGC 3982, NGC 4501, NGC 2787 and NGC 4450 using GMOS IFS at a velocity resolution of ∼120 km s^-1 and spatial resolutions in the range 50–70 pc. The four galaxies show extended emission in the Hα, [N ii] and [S ii] emission lines, while extended [O i] emission was observed only for NGC 3982, up to 2^'' from the nucleus. The main conclusions of this work are:

* The velocity fields of all galaxies are dominated by rotation and are reproduced by a disk model, under the assumption that the gas rotates in the plane of the galaxy in circular orbits.

* Besides the rotating disk component, the gas in NGC 3982, NGC 4501 and NGC 4450 shows non-circular motions, evidenced in the residual (observed velocity – rotation model) velocity maps. At least for the latter two, these residuals are associated with dust features revealed in the structure maps.

* The velocity residual map for NGC 4501 also reveals a redshifted blob in the far side of the galaxy at 1^''-2^'' SW of the nucleus that can be interpreted as due to a nuclear outflow. Possible outflows are also observed in NGC 4450 as blueshifts and redshifts within the inner 1^'' to the northeast and southwest, respectively.

* NGC 2787 shows a very regular rotation with the orientation of the line of nodes misaligned by ∼40^∘ relative to the large scale disk.
This galaxy is known to show a complex morphology in the central region, and the PA of the line of nodes is consistent with the orientation of a nuclear bar.

* The Hα equivalent width (W_Hα) vs. [N ii]/Hα (WHAN) diagrams show a wide range of values, with the nuclear emission of NGC 3982 and NGC 4450 showing a Seyfert signature, while for NGC 2787 and NGC 4501 a LINER signature is obtained.

* NGC 3982 shows a clear circumnuclear star-formation ring surrounding the nucleus at 4^''-6^'', as seen in the flux maps and in the WHAN diagram.

* A star forming region was detected at 6^'' west of the nucleus of NGC 4501. The WHAN diagram shows values typical of Seyfert galaxies for this region, and we interpret them as originating from emission enhanced by shocks from supernova explosions.

* The excitation maps show that the AGN emission is very compact for all galaxies, being unresolved for NGC 4501, NGC 2787 and NGC 4450.

§ ACKNOWLEDGEMENTS

This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). C.B. thanks CNPq for financial support. R.A.R. acknowledges support from FAPERGS (project No. 2366-2551/14-0) and CNPq (projects No. 470090/2013-8 and 302683/2013-5).

[Allen, Dopita & Tsvetanov1998]allen98 Allen, M. G., Dopita, M. A., Tsvetanov, Z. 1998, AJ, 493, 571.[Allington-Smith et al.2002]allington-smith02 Allington-Smith, J., Graham, M., Content, R., Dodsworth, G., Davies, R., Miller, B. W., Jorgensen, I., Hook, I., Crampton, D., Murowinski, R., 2002, PASP, 114, 892.[Baillard et al.2011]baillard11 Baillard, A., et al. 2011, A&A, 532, A74.[Balmaverde & Capetti2013]balmaverde13 Balmaverde, B., Capetti, A., 2013, A&A, 549, 114. [Baldwin, Phillips & Terlevich1981]bpt Baldwin, J. A., Phillips, M. M., Terlevich, R., 1981, PASP, 93, 5.[Belfiore et al.2016]belfiore16 Belfiore, F., et al. 2016, MNRAS, 461, 3111.[Bertola et al.1991]bertola Bertola, F., Bettoni, D., Danziger, J., 1991, ApJ, 373, 369.[Brightman & Nandra2008]brightman08 Brightman, M., Nandra, K., 2008, MNRAS, 390, 1241.[Bruzual & Charlot2003]bruzual Bruzual, G. & Charlot, S. 2003, MNRAS, 344, 1000[Caldwell et al.1999]caldwell Caldwell, N., Rose, J. A., & Dendy, K. 1999, AJ, 117, 140[Capelo & Dotti2017]capelo17 Capelo, P. R. & Dotti, M., 2017, MNRAS, 465, 2643.[Cappellari & Emsellem2004]cappellari Cappellari, M., Emsellem, E. 2004, PASP, 116, 138.[Cayatte et al.1990]cayatte90 Cayatte, V., van Gorkom, J. H., Balkowski, C., Kotanyi, C., 1990, AJ, 100, 604.[Carollo et al.1998]carollo98 Carollo C. M., Stiavelli M., Mack J., 1998, AJ, 116, 68[Cid Fernandes et al.2010]cid10 Cid Fernandes, R., Stasińska G., Schlickmann M. S., Mateus A., Vale Asari N., Schoenell W., Sodré L., 2010, MNRAS, 403, 1687[Cid Fernandes et al.2011]cidf Cid Fernandes, R., Stasińska G., Mateus A., Vale Asari, 2011, MNRAS, 413, 1036[Ciesla et al.2014]ciesla Ciesla, L., Boquien, M., Boselli, A., Buat, V., Cortese, L., Bendo, G. J., Heinis, S., Galametz, M., Eales, S., Smith, M. W. L., Baes, M., Bianchi, S., de Looze, I., di Serego Alighieri, S., Galliano, F., et al.
2014, A&A, 565, 128[Chemin et al.2006]chemin06 Chemin, L., Balkowski, C., Cayatte, V., Carignan, C., Amram, P., Garrido, O., Hernandez, O., Marcelin, M., Adami, C., Boselli, A., Boulesteix, J., 2006, MNRAS, 812, 857.[Colina et al.2015]colina15 Colina L., Piqueras-Lopez J., Arribas S., R. Riffel, Rodriguez-Ardila, Pastoriza, M. G., Storchi-Bergmann T., Alonso-Herrero & Sales D., 2015, A&A, 578, 48.[Comerón et al.2010]comeron Comerón, S., Knapen, J. H., Beckman, J. E., Laurikainen, E., Salo, H., Martínez - Valpuesta, I., Buta, R. J. 2010 MNRAS, 402, 2462[Cortés, Kenney & Hardy2015]cortes Cortés, J. R., Kenney, J. D. P., Hardy, E., 2015, ApJ&SS, 216, 9[Couto et al.2013]couto2013 Couto, Guilherme S., Storchi-Bergmann T., Axon, David J., Robinson, A., Kharb, P., and Riffel, R. A., 2013, MNRAS, 435, 2982 [de Vaucouleurs et al.1980]buta de Vaucouleurs, G., Buta, R. J., AJ, 85, 637[de Vaucouleurs et al.1991]devaucouleurs91 de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G. Jr, Buta, R. J., Paturel, G., Fouque, P., 1991, Third Reference Catalogue of Bright Galaxies, Vols 1–3, Springer-Verlag, Berlin[Dicaire et al.2008]dicaire Dicaire, I., Carignan, C., Amram, P., Hernandez, O., Chemin, L., Daigle, O., de Denus-Baillargeon, M.-M., Balkowski, C., Boselli, A., Fathi, K., Kennicutt, R. C., 2008 MNRAS, 385, 553.[Dors et al.2008]dors08 Dors, O. L., Jr., Storchi-Bergmann, T., Riffel, R. A., Schimdt, Alex. A., 2008, A&A, 428, 59.[Elmegreen & Elmegreen1987]elmegreen87 Elmegreen D. M., Elmegreen B. G., 1987, ApJ, 314, 3[Elmegreen et al.1999]elmegreen99 Elmegreen D. M., Chromey F. R., Bissell B. A., Corrado K., 1999, AJ, 118, 2618[Emsellem et al.2003]emsellem Emsellem, E., Goudfrooij, P., & Ferruit, P. 2003, MNRAS, 345, 1297[Emsellem et al.2006]emsellem2006 Emsellem, E., Fathi, K., Wozniak, H., Ferruit, P., Mundell, C. G., Schinnerer, E. 2006, MNRAS, 365, 367[Emsellem et al.2015]emsellem15 Emsellem, E., Renaud, F., Bournaud, F., Elmegreen, B., Combes, F., Gabor, J. M., 2015, MNRAS, 446, 2468[Englmaier & Shlosman2004]engl2004 Englmaier, P., & Shlosman, I. 2004, ApJ, 617, L115[Elmegreen et al.2002]elmegreen Elmegreen , D. M. and Elmegreen, B. G. and Eberwein, K. S., 2002 ApJ, 564, 243[Erwin & Sparke1999]erwin Erwin, P., Sparke L. S., 1999, ASP Conf. Ser. Vol. 182. Galaxy Dinamics - ARutgers Symposium. Astron. Soc. Pac., San Francisco, p. 243[Erwin & Sparke2002]erwin2002 Erwin, P., Sparke L. S., 2002, AJ, 124, 65[Erwin & Sparke2003a]erwin03 Erwin, P., Sparke L. S., 2003, ApJS, 146, 299[Erwin & Sparke2003b]erwin03b Erwin, P., Beltrán, J. C. V., Graham, A. W. and Beckman, J. E., 2003, ApJ, 597, 947[Fathi et al.2006]fathi06 Fathi, K., Storchi-Bergmann, T., Riffel, R. A., Winge, C., Axon, D. J., Robinson, A., Capetti, A., & Marconi, A., 2006, ApJ, 641, L25.[Feltre, Chalrlot & Gutkin2016]feltre16 Feltre, A., Charlot, S., Gutkin, J., 2016, 456, 3354. [Gonzalez-Martin et al.2009]gonzalez-Martin09 Gonzalez-Martin, O., Masegosa, J., Marquez, I., Guainazzi, M., Jimenez-Bailon, E. 2009, A&A, 506, 1107.[Helfer et al.2003]helfer03 Helfer, T. T., Thornley, M. D., Regan, M. W., Wong, T., Sheth, K., Vogel, S. N., Blitz, L., Bock, D. C.-J.,2003, ApJS, 145, 259.[Ho et al.1997]ho97 Ho, L. C., Filippenko, A. V., Sargent, W. L. 1997, ApJ&SS, 112, 315.[Ho1999]ho99 Ho, L. C. 1999, ApJ, 516, 672.[Ho et al.2000]ho00 Ho, L., Rudnick, G., Rix, H-W, Shields, J. C., McIntosh, D. H., Filippenko, A. V., Sargent, W. L. W., Eracleous, M.,2000, ApJ, 541, 1.[Ho & Ulvestad2001]ho01 Ho, L. 
C., Ulvestad, J.S., 2001, ApJS, 133, 77.[Hook et al.2004]hook04 Hook, I., Jorgensen, I., Allington-Smith, J. R., Davies, R. L., Metcalfe, N., Murowinski, R. G., Crampton, D., 2004, PASP, 116, 425[Kauffmann et al.2003]kauffmann03 Kauffmann, G., Heckman, T. M., Tremonti, C., Brinchmann, J., Charlot, S., White, S. D. M., Ridgway, S. E., Brinkmann, J., Fukugita, M., Hall, P. B., Ivezi, Å., Richards, G. T., Schneider, D. P., 2003, MNRAS, 346, 1055[Kewley, Heisler & Dopita 2001]kewley01 Kewley, L. J., Heisler, C. A., Dopita, M. A., 2001, ApJS, 132, 37.[Kewley et al. 2006]kewley06Kewley, L. J., Groves, B., Kauffmann, G., Heckman, T., 2006, MNRAS, 372, 961[Knapen et al.2002]knapen02 Knapen, J. H., Pérez-Ramírez, D., Laine, S., 2002, MNRAS 337, 808. [Knapen2005]knapen Knapen, J. H. 2005, ApJ&SS 295, 85.[Koopmann, Kenney & Young2001]koopmann01 Koopmann, R. A., Kenney, J. D. P., Young, J., 2001, ApJS 135, 125.[Kraemer et al.2011]kraemer Kraemer, S. B., Schmitt, H. R., Crenshaw, D. M., Meléndez, M., Turner, T. J., Guainazzi M., Mushotzky, R. F., 2011, ApJ 727, 130 [Laine et al.2003]laine Laine S., van der Marel R. P., Rossa J., Hibbard J. E., Mihos J. C., Böker T., Zabludoff A. I., 2003, AJ, 79, 745[Lena et al.2015]lena15 Lena, D. and Robinson, A.,Storchi-Bergmann, T.,Schnorr-Müller, A.,Seelig, T.,Riffel, R. A.,Nagar, N. M.,Couto, G. S. and Shadler, L.,2015, ApJ, 806, 84 [Lena et al.2016]lena16 Lena, D. and Robinson, A.,Storchi-Bergmann, T., Couto, G. S., Schnorr-Müller, A.,Riffel, R. A.,2016, MNRAS, 459, 4485.[Li et al.2011]li11 Li, J-T., Wang, Q. D., Li, Z., Chen, Y., 2011, ApJ, 737, 41.[Li, Shen & Kim.2015]li15 Li, Z. Shen, J., Kim, W-T., 2015, ApJ, 806, 150. [Liu2011]liu11 Liu, J., 2011, ApJS, 192, 10.[Maciejewski2002]maciejewski Maciejewski, W., Teuben, P. J., Sparke, L. S., & Stone, J. M. 2002, MNRAS, 329, 502.[Malkan, Gorjian & Tam1998]malkan98 Malkan, M. A., Gorjian, V. & Tam, R., 1998, ApJS, 117,25.[Markwardt et al.2009]mark09 Markwardt C. B., 2009, in Bohlender D. A., Durand D., Dowler P., eds, ASP Conf. Ser. Vol. 411, Astronomical Data Analysis Software and Systems XVIII. Astron. Soc. Pac., San Francisco, p. 251[Martinsson et al.2013a]martinsson13 Martinsson, T. P. K., Verheijen, M. A. W., Westfall, K. B., Bershady, M. A., Schechtman - Root, A., Andersen, D. R., Swaters, R. A., 2013, A&A, 557, A130[Martinsson et al.2013b]martinsson13b Martinsson, T. P. K., Verheijen, M. A. W., Westfall, K. B., Bershady, M. A., Schechtman - Root, A., Andersen, D. R., Swaters, R. A., 2013, A&A, 557, A131[Mazzalay et al.2013]mazzalay13 Mazzalay, X. et al., 2013, MNRAS, 428, 2389.[Mazzalay et al.2014]mazzalay14 Mazzalay, X. et al., 2014, MNRAS, 438, 2036.[Meléndez et al.2008]melendez Meléndez, M., Kraemer, S. B., Schimitt, H. R., Crenshaw, D. M., Deo, R., P., Mushotzky, R. F., Bruhweiler, F. C., 2008, ApJ, 689, 95.[Munoz Marin et al.2007]munoz-marin07 Munoz Marin, V. M. et al., 2007, AJ, 134, 648.[Onodera2004]onodera Onodera, S. and Koda, J. and Sofue, Y. and Kohno, K., 2004, PASJ, 56, 439[Osterbrock1989]oster89 Osterbrock D. E., 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei. University Science Books, Mill Valley, CA.[Paturel et al.2003]paturel03 Paturel, G., Petit, C., Prugniel, Ph., Theureau, G., Rousseau, J., Brouty, M., Dubois, P. & Cambrésy, L., 2003, A&A, 412, 45.[Pérez-Ramírez et al.2000]perez-ramirez00Pérez-Ramírez, D., Knapen, J. H., Peletier, R. F., Laine, S., Doyon, R., Nadeau, D., 2000, MNRAS, 317, 234.[Pogge & Martini2002]pogge Pogge R. 
W., Martini P., 2002, ApJ, 569, 624[Quillen et al.2001]quillen Quillen, A. C., Alonso-Herrero, A., Lee, A., Shaked, S., Rieke, M. J., Rieke, G. H. 2001, ApJ, 547, 129[Regan, et al.1999]regan99 Regan, M. W. et al., 1999, ApJ, 117, 2676. [Rembold et al.2016]rembold16 Rembold, S. et al., 2016, MNRAS, submitted.[Repetto et al.2016]repetto Repetto, P., Faúndez-Abans, M., Freitas-Lemes, P., Rodrigues, I., de Oliveira-Abans, M., 2016, MNRAS, 464, 293 [Reunanen, Kotilainen & Prieto2002]reunanen02 Reunanen, J., Kotilainen, J. K., & Prieto, M. A., 2002, MNRAS, 331, 154. [Riffel et al.2006b]riffel2006b Riffel, Rogemar A., Rodríguez-Ardila A., Pastoriza M. G., 2006b, A&A, 457, 61.[Riffel et al.2008]riffel2008 Riffel, Rogemar A., Storchi-Bergmann, T., Winge, C., McGregor, P. J., Beck, T., Schmitt, H. 2008, MNRAS, 385, 1129.[Riffel et al.2010]profit Riffel R. A., 2010, Ap&SS, 327, 239.[Riffel, Storchi-Bergmann & Nagar2010]mrk1066-exc Riffel, Rogemar A., Storchi-Bergmann, T. & Nagar, N. M., 2010a, MNRAS, 404, 166.[Riffel et al.2010]mrk1066_pop Riffel, Rogemar A. & Storchi-Bergmann, T., Riffel, R., & Pastoriza, M. G., 2010, ApJ, 713, 469.[Riffel & Storchi-Bergmann2011a]mrk1066c Riffel, Rogemar A. & Storchi-Bergmann, T., 2011, MNRAS, 411, 469.[Riffel, Storchi-Bergmann & Winge2013]mrk79 Riffel, R. A., Storchi-Bergmann, T., Winge, C., 2013, 430, 2249.[Riffel et al.2015]n5929 Riffel, Rogemar A. & Storchi-Bergmann, T. & Riffel, R., 2015, MNRAS, 451, 3587.[Riffel et al.2017]llp_stel Riffel, Rogemar A., Storchi-Bergmann, T., Riffel, R., Dahmer-Hahn, L. G., Diniz, M. R., Schönell, A. J., Dametto, N. Z., 2017, MNRAS, submitted.[Riffel et al.2011]mrk1157_pop Riffel, R., Riffel, Rogemar A., Ferrari, F., & Storchi-Bergmann, T., 2011, MNRAS, 416, 493. [Rodríguez-Ardila et al.2004]ardila04 Rodríguez-Ardila, A.,Pastoriza, M. G., Viegas, S., Sigut, T. A. A., & Pradhan, A. K., 2004,A&A, 425, 457.[Sakamoto et al.1999a]sakamoto Sakamoto K., Okumura S. K., Ishizuki S., Scoville N. Z., 1999, ApJ, 525, 691[Sanchez et al.2015]sanchez15 Sanchez, S. et. al. 2015, A&A, 574, A47.[Sarzi et al.2010]sarzi10 Sarzi, M., et. al. 2010, MNRAS, 402, 2187.[Sarzi et al.2001]sarzi Sarzi, M.,Sarzi, M., Rix, H.-W., Shields, J. C., Rudnick, G., Ho, L. C., McIntosh, D. H., Filippenko, A. V., & Sargent, W. L. W. 2001, ApJ, 550, 65[Schnorr–Müller et al.2011]allan11 Schnorr Müller A., Storchi-Bergmann T., Riffel R. A., Ferrari F., Steiner J. E., Axon D. J., Robinson A., 2011, MNRAS, 413, 149[Schnorr–Müller et al.2014a]allan14 Schnorr Müller A., Storchi-Bergmann T., Nagar, N. M., & Ferrari, F. 2014a, MNRAS, 438, 3322 [Schnorr–Müller et al.2014b]allan14b Schnorr Müller A., Storchi-Bergmann T., Nagar, N. M., & Robinson, A., Lena, D., Riffel, R. A., Couto, G. S., 2014b, MNRAS, 437, 1708.[Shlosman et al.1990]shlosman Shlosman, I., Begelman, M. C., & Frank, J. 1990, Nature, 345, 679[Shostak1987]shostak87 Shostak, G. S. 1987, A&A, 175, 4.[Simões Lopes et al.2007]slopes Simões Lopes, R. D., Storchi-Bergmann, T., de Fátima Saraiva, M. and Martini, P., 2007, ApJ, 655, 718[Storchi-Bergmann et al.2007]thaisa07 Storchi-Bergmann, T., Dors Jr., O.,Riffel,R. A., Fathi, K.,Axon, D. J., & Robinson, A., 2007, ApJ, x,x[Sutherland et al.1993]sutherland93 Sutherland, R. S., Bicknell, G. V., Dopita, M. A., 1993, ApJ,414, 510. [Trippe et al.2010]trippe Trippe, M. L., Crenshaw, D. M., Deo, R. P., Dietrich, M. Kraemer, S. B., Rafter, S. E., Turner, T. J., ApJ, 725, 1749[van der Kruit & Allen1978]van van der Kruit, P. C., & Allen, R. J. 
1978, ARA&A, 16, 103[Véron-Cetty & Veron2006]veron Véron-Cetty, M. P., & Véron, P., 2006, A&A, 455, 773.[Viegas & Contini1994]viegas94 Viegas, S., Contini, M., 1994, ApJ, 428, 113. [Young et al.1996]young96 Young, J. S., Allen, L., Kenney, J. D. P., Lesser, A., Rownd, B., 1996, AJ, 112, 1903.[Westfall et al.2011]westfall11 Westfall, K. B., Bershady, M. A., Verheijen, M. A. W., 2011, ApJS, 193, 21.

§ FLUX DISTRIBUTIONS AND KINEMATICS

Figures <ref> to <ref> show maps of the flux distributions, centroid velocity and velocity dispersion of the Hα, [S ii] λ6731 and [O i] λ6300 emission lines for NGC 3982, NGC 4501, NGC 2787 and NGC 4450.
http://arxiv.org/abs/1704.08274v1
{ "authors": [ "Carine Brum", "Rogemar A. Riffel", "Thaisa Storchi-Bergmann", "Andrew Robinson", "Allan Schnorr-Muller", "Davide Lena" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170426181300", "title": "Dusty spirals versus gas kinematics in the inner kiloparsec of Four Low-Luminosity Active Galactic Nuclei" }
Guangdong Province Key Laboratory of Popular High Performance Computers, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, PR China [email protected] Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland Guangdong Province Key Laboratory of Popular High Performance Computers, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, PR China [email protected] Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, PR China Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland Department of Radiation Oncology, Inselspital, Bern University Hospital and University of Bern, 3010 Bern, Switzerland Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland Guangdong Province Key Laboratory of Popular High Performance Computers, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, PR China

Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Well-established ranking algorithms (such as Google's popular PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. The recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of real network traffic, prediction of future links, and identification of highly-significant nodes.

Ranking in evolving complex networks

Ming-Yang Zhou

December 30, 2023

====================================
§ INTRODUCTION

In a world where we need to make many choices and our attention is inherently limited, we often rely on, or at least use the support of, automated scoring and ranking algorithms to orient ourselves in the copious amount of available information. One of the major goals of ranking algorithms is to filter <cit.> the large amounts of available data in order to retrieve <cit.> the most relevant information for our purposes. For example, quantitative metrics for assessing the relevance of websites are motivated by the impossibility, for an individual web user, of manually browsing large portions of the Web to find the information that he or she needs. The resulting rankings influence our choices of items to purchase <cit.>, the way we access scientific knowledge <cit.> and online content <cit.>. At a systemic level, ranking affects how success and excellence in human activity are recognized <cit.>, how funding is allocated in scientific research <cit.>, how to characterize and counteract the outbreak of an infectious disease <cit.>, and how undecided citizens vote in political elections <cit.>, for example. Given the strong influence of rankings on various aspects of our lives, achieving a deep understanding of how the respective ranking methods work, what their basic assumptions and limitations are, and designing improved methods is of critical importance. The disadvantages of rankings, such as the biases that they can induce, can otherwise outweigh the benefits that they bring. Achieving such understanding becomes more urgent as a huge and ever-growing amount of data is created and stored every day and, as a result, automated methods to extract relevant information are poised to further gain importance in the future. An important class of ranking algorithms is based on the network representation of the input data.
Complex networks have recently emerged as one of the leading frameworks to analyze a wide range of complex social, economic and information systems <cit.>. A network is composed of a set of nodes (also referred to as vertexes) and a set of relationships (edges, or links) between the nodes. A network representation of a given system greatly reduces the complexity of the original system and, if the representation is well-designed, it allows us to gain fundamental insights into the structure and the dynamics of the original system. Fruitful applications of the complex-networks approach include prediction of the spreading of infectious diseases <cit.>, characterization and prediction of the economic development of countries <cit.>, characterization and detection of early signs of financial crises <cit.>, and design of robust infrastructure and technological networks <cit.>, among many others.

Network-based ranking algorithms use network representations of the data to compute the inherent value, relevance, or importance of individual nodes in the system <cit.>. The idea of using the network structure to infer node status is long known in the social sciences <cit.>, where node-level ranking algorithms are typically referred to as centrality metrics <cit.>; in this review, we use the labels "ranking algorithms" and "centrality metrics" interchangeably. Today, research on ranking methods is being carried out by scholars from several fields (physicists, mathematicians, social scientists, economists, computer scientists). Network-based ranking algorithms have drawn conspicuous attention from the physics community due to the direct applicability of statistical-physics concepts such as random walks and diffusion <cit.>, as well as percolation <cit.>. Real-world applications of network-based ranking algorithms cover a vast spectrum of problems, including the design of search engines <cit.> and recommender systems <cit.>, evaluation of academic research <cit.>, and identification of influential spreaders <cit.>. A paradigmatic example of a network-based ranking algorithm is Google's popular PageRank algorithm. While originally devised to rank web pages <cit.>, it has since then found applications in a vast range of real systems <cit.>.

While there are excellent reviews and surveys that address network-based ranking algorithms <cit.> and random walks in complex networks <cit.>, the impact of network evolution on ranking algorithms is discussed there only marginally. This gap deserves to be filled because real networks usually evolve in time <cit.>, and temporal effects have been shown to strongly influence the properties and effectiveness of ranking algorithms <cit.>. The aim of this review is to fill this gap and thoroughly present the temporal aspects of network-based ranking algorithms in social, economic and information systems that evolve in time. The motivation for this work is multifold. First, we want to present in a cohesive fashion information that is scattered across the literature of diverse fields, providing scholars from diverse communities with a reference point on the temporal aspects of network-based ranking. Second, we want to highlight the essential role of the temporal dimension, whose omission can produce misleading results when it comes to evaluating the centrality of a node. Third, we want to highlight open challenges and suggest new research directions.
Particular attention will be devoted to network-based ranking algorithms that build on classical statistical-physics concepts such as random walks and diffusion. In this review, we will survey both static and time-aware network-based ranking algorithms. Static algorithms take into account only the network's adjacency matrix A (see paragraph <ref>) to compute node centrality, and they constitute the topic of Section <ref>. The dynamic nature of most real networks can cause static algorithms to fail in a number of real-world systems, as outlined in Section <ref>. Time-aware algorithms take as input the network's adjacency matrix at a given time t and/or the list of node and/or edge time-stamps. Time-aware algorithms for growing unweighted networks are presented in Section <ref>. When multiple interactions occur among the network's nodes, algorithms based on temporal-network representations of the data are often needed; this class of ranking algorithms is presented in Section <ref>, alongside a brief introduction to the temporal-network representation of time-stamped datasets. Network-based time-aware recommendation methods and their applications are covered in Section <ref>; the impact of recommender systems on network evolution is also discussed in that section. Due to fundamental differences between the datasets to which network centrality metrics are applied, no universal and all-encompassing ranking algorithm can exist. The best that we can do is to identify the specific ranking task (or tasks) that a metric is able to address. As it would be impossible to cover all the applications of network centrality metrics studied in the literature, we narrow our focus to applications where the temporal dimension plays an essential role (section <ref>). For example, we discuss how time-aware metrics can be used to identify significant nodes early (paragraph <ref>), how node importance can be used as a predictor of the future GDP of countries (paragraph <ref>), and how time-aware metrics outperform static ones in the classical link-prediction problem (paragraph <ref>). We provide a general discussion of the problem of ranking in Section <ref>. In this discussion, we focus on possible ways of validating the existing centrality metrics, and on the importance of detecting and suppressing the biases of the rankings obtained with various metrics. The basic notation used in this review is provided in paragraph <ref>. While this review covers a broad range of interwoven topics, its chapters are as self-contained as possible and can, in principle, be read independently. We discuss the underlying mathematical details only occasionally and instead refer to the corresponding references, where the interested reader can find more details. § STATIC CENTRALITY METRICS Centrality metrics aim to identify the most important, or central, nodes in the network. In this section, we review the centrality metrics that only take into account the topology of the network, encoded in the network's adjacency matrix A. These metrics neglect the temporal dimension and are thus referred to as static metrics in the following. Static centrality metrics have a long-standing tradition in social network analysis – suffice it to say that Katz centrality (paragraph <ref>) was introduced in the 50s <cit.>, betweenness centrality (paragraph <ref>) in the 70s <cit.>, and indegree (citation count, paragraph <ref>) has been used to rank scholarly publications since the 70s.
The crucial role played by the PageRank algorithm in the outstanding success of the web search engine Google was one of the main motivating factors for the new wave of interest in centrality metrics starting in the late 90s. Today, we are aware that static centrality metrics may exhibit severe shortcomings when applied to dynamically evolving systems – as will be examined in depth in the next sections – and, for this reason, they should never be applied without considering the specific properties of the system at hand. Static metrics are nevertheless important both as basic heuristic tools of network analysis, and as building blocks for time-aware metrics. §.§ Getting started: basic language of complex networks Since excellent books <cit.> and reviews <cit.> on complex networks and their applications have already been written, in the following we only introduce the network-analysis concepts and tools indispensable for the presented ranking methods. We represent a network (usually referred to as a graph in the mathematics literature) composed of N nodes and E edges by the symbol 𝒢(N,E). The network is completely described by its adjacency matrix A. In the case of an undirected network, the matrix element A_ij=1 if an edge connects nodes i and j, and zero otherwise; the degree k_i of node i is defined as the number of connections of node i. In the case of a directed network, A_ij=1 if a directed edge from node j to node i exists, and zero otherwise. The outdegree k^out_i (indegree k^in_i) of node i is defined as the number of outgoing (incoming) edges that connect node i to other nodes. We now introduce four structural notions that will be useful in the following: path, path length, shortest path, and geodesic distance between two nodes. The definitions provided below apply to undirected networks, but they can be straightforwardly generalized to directed networks (in that case, we speak of directed paths). A network path is defined as a sequence of nodes such that consecutive nodes in the path are connected by an edge. The length of a path is defined as the number of edges that compose the path. A shortest path between two nodes i and j is a path of the smallest possible length that starts at node i and ends at node j – note that multiple distinct shortest paths between two given nodes may exist. The (geodesic) distance between two nodes i and j is the length of a shortest path between i and j. We will use these basic definitions in paragraph <ref> to define the closeness and betweenness centralities. §.§ Degree and other local centrality metrics We start with centrality metrics that are local in nature. By local metrics, we mean that only the immediate neighborhood of a given node is taken into account for the calculation of the node's centrality score, as opposed to, for example, eigenvector-based metrics that consider all of the network's paths (paragraph <ref>) and metrics that consider all of the network's shortest paths that pass through a given node to determine its score (paragraph <ref>). §.§.§ Degree The simplest structural centrality metric is arguably degree. For an undirected network, the degree of a node i is defined as k_i=∑_jA_ij. That is, the degree of a node is given by the number of edges attached to it. Using degree as a centrality metric assumes that a node is important if it has received many connections.
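As a concrete illustration, consider the following minimal Python sketch (an illustrative example of ours, not taken from any of the cited works; the small adjacency matrix is hypothetical), which computes the degree sequence directly from Eq. (<ref>):

import numpy as np

# Hypothetical 4-node undirected network: A_ij = 1 if nodes i and j are connected.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

k = A.sum(axis=1)  # k_i = sum_j A_ij, the number of edges attached to node i
print(k)           # prints [2 3 2 1]

Under the directed-network convention adopted above (A_ij=1 for an edge from j to i), the row sums of A analogously yield the indegrees and the column sums the outdegrees.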
Even though degree neglects the nodes' positions in the network, its simple underlying assumption may be reasonable enough for some systems. Even though the importance of the neighbors of a given node is presumably relevant to determining its centrality <cit.>, and this aspect is ignored by degree, one can argue that node degree still captures some part of node importance. In particular, degree is highly correlated with more sophisticated centrality metrics in uncorrelated networks <cit.>, and it can perform better than more sophisticated network-based metrics in specific tasks such as identifying influential spreaders <cit.>. In the case of a directed network, one can define node degree either by counting the number of incoming edges or the number of outgoing edges – the outcomes of the two counting procedures are referred to as node indegree and outdegree, respectively. In most cases, an incoming edge can be interpreted as a recommendation, or a positive endorsement, received from the node from which the edge originates. While the assumption that edges represent positive endorsements is not always true[In a signed social network, for example, an edge between two individuals can be either positive (friendship) or negative (animosity). See <cit.>, among others, for generalizations of popular centrality metrics to signed networks.], it is reasonable enough for a broad range of real systems and, unless otherwise stated, we assume it throughout this review. For example, in networks of academic publications, a paper's incoming edges represent its received citations, and their number can be considered a rough proxy for the paper's impact[This assumption neglects the multi-faceted nature of the scientific citation process <cit.>, and the fact that citations can be both positive and negative – a citation can be considered "negative" if the citing paper points out flaws of the cited paper, for example <cit.>. We describe in the following sections some of the shortcomings of citation count and possible ways to counteract them.]. For this reason, indegree is often used as a centrality metric in directed networks: k^in_i=∑_jA_ij. Indegree centrality is especially relevant in the context of the quantitative evaluation of academic research, where the number of received citations is often used to gauge scientific impact (see <cit.> for a review of bibliometric indices based on citation count). §.§.§ H-index A limitation of degree centrality is that it only considers a node's number of received edges, regardless of the centrality of the neighbors. By contrast, in many networks it is plausible to assume that a node is important if it is connected to other important nodes in the network. This thesis is enforced by eigenvector-based centrality metrics (section <ref>), which take into account the whole network topology to determine a node's score. A simpler way to implement this idea is to consider the node's neighborhood in the node-score computation, without including the global network structure. The H-index is a centrality metric based on this idea. The H-index for unipartite networks <cit.> is based on the metric of the same name introduced by Hirsch for bipartite networks <cit.> to assess researchers' scientific output. Hirsch's index of a given researcher is defined as the maximum integer h such that at least h publications authored by that researcher received at least h citations. In a similar way, Lu et al.
<cit.> define the H-index of a given node in a monopartite network as the maximum integer h such that there exist at least h neighbors of that node, each with at least h neighbors. In this way, a node's centrality score is determined by the neighbors' degrees, without the need to inspect the whole network topology. The H-index brings some advantage with respect to degree in identifying the most influential spreaders in real networks <cit.>, yet the correlation between the two metrics is large for uncorrelated networks (i.e., networks with negligible degree-degree correlations) <cit.>. More details on the H-index and its relation with other centrality metrics can be found in paragraph <ref>. §.§.§ Other local centrality metrics Several other methods have been proposed in the literature to build local centrality metrics. The main motivation to prefer local metrics to “global” metrics (such as eigenvector-based metrics) is that local metrics are much less computationally expensive and can thus be evaluated in a relatively short time even for huge graphs. As paradigmatic examples, we mention here the local centrality metric introduced by Chen et al. <cit.>, which considers information up to distance four from a given node to compute the node's score, and the graph-expansion technique introduced by Chen et al. <cit.> to estimate a website's PageRank score (PageRank will be defined in paragraph <ref>). As this review is mostly concerned with time-aware methods, a more detailed description of static local metrics goes beyond its scope. The interested reader can refer to a recent review article by Lu et al. <cit.> for more information. §.§ Metrics based on shortest paths This paragraph presents two important metrics – closeness and betweenness centrality – that are based on the shortest paths in the network defined in paragraph <ref>. §.§.§ Closeness centrality The basic idea behind closeness centrality is that a node is central if it is “close” (in a network sense) to many other nodes. The first way to implement this idea is to define the closeness centrality <cit.> score of node i as the reciprocal of its average geodesic distance from the other nodes: c_i=(N-1)/∑_j≠ i d_ij, where d_ij denotes the geodesic distance between i and j. A potential trouble with Eq. (<ref>) is that if even a single node cannot be reached from node i, node i's score as determined by Eq. (<ref>) is identically zero. This makes Eq. (<ref>) useless for networks composed of more than one component. A common way to overcome this difficulty <cit.> is to define node i's centrality through the harmonic mean of its distances from the other nodes: c_i=(1/(N-1))∑_j≠ i 1/d_ij. With this definition, a given node i receives zero contribution from nodes that cannot be reached from i, whereas it receives a large contribution from close nodes – each of node i's neighbors, for example, contributes a term equal to one to the sum. The main assumption behind closeness centrality deserves some attention. If we are interested in node centrality as an indicator of the expected hitting time for a piece of information (or a disease) flowing through the network, closeness centrality implicitly assumes that information only flows through the shortest paths <cit.>.
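Before examining this assumption further, we note that the harmonic variant of closeness is straightforward to compute; the following Python sketch (illustrative only, with function and variable names of our own choosing) obtains the geodesic distances by breadth-first search and then applies Eq. (<ref>):

import numpy as np
from collections import deque

def harmonic_closeness(A):
    # c_i = (1/(N-1)) * sum over j != i of 1/d_ij; unreachable nodes contribute zero.
    N = len(A)
    c = np.zeros(N)
    for i in range(N):
        dist = {i: 0}
        queue = deque([i])
        while queue:  # breadth-first search computes geodesic distances from node i
            u = queue.popleft()
            for v in np.nonzero(A[u])[0]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        c[i] = sum(1.0 / d for j, d in dist.items() if j != i) / (N - 1)
    return c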
As pointed out by Borgatti <cit.>, this shortest-path assumption makes the metric useful only for assessing node importance in situations where whatever flows from a source node has prior global knowledge of the network and knows how to reach the target node through the shortest possible path – this could be the case for a parcel-delivery process or a passenger trip, for example. However, this assumption is unrealistic for real-world spreading processes, where information flows typically have no prior knowledge of the whole network topology (think of an epidemic disease) and can often reach target nodes through longer paths <cit.>. Empirical experiments reveal that closeness centrality typically (though not for all datasets) underperforms local metrics in identifying influential spreaders for SIS and SIR diffusion processes <cit.>. §.§.§ Betweenness centrality The main assumption of betweenness centrality is that a given node is central if many shortest paths pass through it. To define the betweenness centrality score, we denote by σ_st the number of shortest paths between nodes s and t, and by σ_st^(i) the number of these paths that pass through node i. The betweenness centrality score of node i is given by b_i=(1/((N-1)(N-2)))∑_s≠ t σ_st^(i)/σ_st. Similarly to closeness centrality, betweenness centrality implicitly assumes that only the shortest paths matter in the process of information transmission (or traffic) between two nodes. While a variant of betweenness centrality that accounts for all network paths has also been considered, it often gives results similar to those of the original betweenness centrality <cit.>, and no evidence has been found that these metrics accurately reproduce information flows in real networks <cit.>. In addition, betweenness centrality performs poorly with respect to other centrality metrics in identifying influential spreaders in numerical simulations with the standard SIR and SIS models <cit.>. Another drawback of closeness and betweenness centrality is their high computational complexity, mostly due to the calculation of the shortest paths between all pairs of nodes in the network. §.§ Coreness centrality and its relation with degree and H-index Coreness centrality (also known as k-shell centrality <cit.>) is based on the idea of decomposing the network <cit.> to distinguish between the nodes that form the core of the network and the peripheral nodes. The decomposition process is usually referred to as the k-shell decomposition <cit.>. The k-shell decomposition starts by removing all nodes with only one connection (together with their edges), until no more such nodes remain, and assigns them to the 1-shell. For each remaining node, the number of edges connecting it to the other remaining nodes is called its residual degree. After the 1-shell has been assigned, all the nodes with residual degree 2 are recursively removed and the 2-shell is created. This procedure continues as the residual degree increases until all nodes in the network have been assigned to one of the shells. The index c_i of the shell to which a given node i is assigned is referred to as the node's coreness (or k-shell) centrality. Coreness centrality can also be generalized to weighted networks <cit.>. There is an interesting mathematical connection between k-shell centrality, degree, and the H-index.
In paragraph <ref>, we introduced the H-index of a node i as the maximum value h such that there exist h neighbors of i, each of them having at least h neighbors. This definition can be formalized by introducing an operator ℋ which acts on a finite number of real numbers (x_1,x_2,...,x_n) by returning the maximum integer y=ℋ(x_1,x_2,...,x_n)>0 such that there exist at least y elements in {x_1,…,x_n}, each of which is larger than or equal to y <cit.>. Denoting by (k_j_1,k_j_2,...,k_j_k_i) the degrees of the neighbors of a given node i, the H-index of node i can be written in terms of ℋ as h_i=ℋ(k_j_1,k_j_2,...,k_j_k_i). Lu et al. <cit.> generalize the H-index by recursively defining the nth-order H-index of node i as h_i^(n)=ℋ(k_j_1^(n-1),k_j_2^(n-1),...,k_j_k_i^(n-1)), where h_i^(0)=k_i and h_i^(1) is the H-index of node i. For arbitrary undirected networks, it can be proven <cit.> that as n grows, h_i^(n) converges to the coreness centrality c_i: lim_n→∞h_i^(n)=c_i. The family of H-indexes defined by Eq. (<ref>) thus bridges from node degree (h_i^(0)=k_i) to coreness, that is, from a local centrality metric to a global one. In many real-world networks, the H-index outperforms both degree and coreness in identifying influential spreaders as defined by the classical SIR and SIS network spreading models. Recently, Pastor-Satorras and Castellano <cit.> performed an extensive analytic and numerical investigation of the mathematical properties of the H-index, finding analytically that the H-index is expected to be strongly correlated with degree in uncorrelated networks. The same strong correlation is found also in real networks, and Pastor-Satorras and Castellano <cit.> point out that the H-index is a poor indicator of node spreading influence as compared to the non-backtracking centrality <cit.>. While the comparison of different metrics with respect to their ability to identify influential spreaders is not one of the main focuses of the present review, we refer the interested reader to <cit.> for recent reports on this important problem. §.§ Eigenvector-based centrality metrics Differently from the local metrics presented in paragraph <ref> and the metrics based on shortest paths presented in paragraph <ref>, eigenvector-based metrics take into account all network paths to determine node importance. We refer to the metrics presented in this paragraph as eigenvector-based metrics because their respective score vectors can be interpreted as the principal eigenvector of a matrix which only depends on the network's adjacency matrix A. §.§.§ Eigenvector centrality Often referred to as Bonacich centrality <cit.>, eigenvector centrality assumes that a node is important if it is connected to other important nodes. To build the eigenvector centrality, we start from a uniform score value s_i^(0)=1 for all nodes i. At each subsequent iterative step, we update the scores according to the equation s_i^(n+1)=∑_jA_ij s_j^(n)=(A s^(n))_i. The vector of eigenvector centrality scores is defined as the fixed point of this equation. The iterations converge to a vector proportional to the principal eigenvector v_1 of the adjacency matrix A. To prove this, we first observe that for connected networks, v_1 is unique [For networks with more than one component, there can exist more than one eigenvector associated with the principal eigenvalue of A. However, it can be shown <cit.> that only one of them can have all components larger than zero, and thus it is the only one that can be obtained by iterating Eq.
(<ref>), since we assumed s_i^(0)=1 for all i.] due to the Perron-Frobenius theorem <cit.>. We then expand the initial condition s^(0) in terms of the eigenvectors {v_α} of A as s^(0)=∑_αc_αv_α. Denoting the eigenvalue associated with the eigenvector v_α as λ_α and assuming that the eigenvalues are ordered so that λ_α+1<λ_α, Eq. (<ref>) implies s^(n) = A^ns^(0) = c_1 λ_1^nv_1+∑_α>1c_αλ_α^nv_α. As n grows, the second term on the r.h.s. of the previous equation becomes exponentially small with respect to the first term, which results in s^(n) / λ_1^n→ c_1v_1. By iterating Eq. (<ref>) and continually normalizing the score vector, the scores thus converge to the adjacency matrix's principal eigenvector v_1. Consequently, the vector of eigenvector centrality scores satisfies the self-consistent equation s_i=λ_1^-1∑_jA_ij s_j, which can be rewritten in matrix form as s=λ_1^-1A s. The assumptions of eigenvector centrality seem plausible – high-centrality nodes give a high contribution to the centrality of the nodes they connect to. However, the metric has important shortcomings when we try to apply it to directed networks. If a node has received no incoming links, it has zero score according to Eq. (<ref>) and consequently gives zero contribution to the scores of the nodes it connects to. This makes the metric particularly useless for directed acyclic graphs (such as the network of scientific papers), where all nodes are assigned zero score. The Katz centrality metric, which we introduce below, solves this issue by adding a constant term to the r.h.s. of Eq. (<ref>), thereby ensuring that the scores of all nodes are positive. §.§.§ Katz centrality The Katz centrality metric <cit.> builds on the same premise as eigenvector centrality – a node is important if it is connected to other important nodes – but, differently from eigenvector centrality, it assigns a certain minimum score to each node. Different variants of the Katz centrality have been proposed in the literature <cit.> – we refer the reader interested in the subtleties associated with the different possible definitions to the original papers, and we present here the variant proposed in <cit.>. The vector of Katz-centrality (referred to as alpha-centrality in <cit.>) scores is defined as <cit.> s=α A s+β e, where e is the N-dimensional vector whose elements are all equal to one. The solution s of this equation exists only if α<1/λ_1, where λ_1 is the principal eigenvalue of A; we therefore restrict the following analysis to this range of α values. The solution reads s=β (1_N-αA)^-1 e, where 1_N denotes the N× N identity matrix. This solution can be written in the form of a geometric series s=β ∑_k=0^∞α^k A^k e. Thus the score s_i of node i can be expressed as s_i =β (1+α ∑_j A_ij+α^2∑_j,lA_ij A_jl+α^3 ∑_j,l,mA_ij A_jl A_lm+𝒪(α^4)). Eq. (<ref>) (or, equivalently, (<ref>)) shows that the Katz centrality score of a node is determined by all the network paths that pass through that node. Note indeed that ∑_jA_ij, ∑_j,lA_ij A_jl and ∑_j,l,mA_ij A_jl A_lm represent the paths of length one, two and three, respectively, that have node i as their last node. The parameter α determines how the contributions of various paths attenuate with path length. For small values of α, long paths are strongly penalized and the node score is mostly determined by paths of length one (i.e., by node degree). By contrast, for large values of α (but still α<λ_1^-1, otherwise the series in Eq.
(<ref>) does not converge), long paths give a substantial contribution to the node score. Known in social science since the 50s <cit.>, Katz centrality has been used in a number of applications – for example, a variant of the Katz centrality has recently been shown to substantially outperform other static centrality metrics in predicting neuronal activity <cit.>. While it overcomes the drawbacks of eigenvector centrality and produces a meaningful ranking also when applied to directed acyclic graphs or disconnected networks, the Katz centrality may still provide unsatisfactory results in networks where the outdegree distribution is heterogeneous. A node is then able to manipulate the other nodes' importance by simply creating many edges towards a selected group of target nodes, thus improving their scores. Google's PageRank centrality overcomes this limitation by giving less weight to the edges created by nodes with many outgoing connections. Before introducing PageRank centrality, we briefly discuss a simple variant of Katz centrality that has been applied to networks of professional sports teams and players. §.§.§ Win-lose scoring systems for ranking in sport Network-based win-lose scoring systems aim to rank competitors (for simplicity, we refer to players in this paragraph) in sports based on one-versus-one matches (like tennis, football, baseball). In terms of complex networks, each player i is represented by a node, and the weight of the directed edge j→ i represents the number of wins of player j against player i. The main idea of a network-based win-lose scoring scheme is that a player is strong if he or she is able to defeat other strong players (i.e., players that have defeated many opponents <cit.>). This idea led Park and Newman <cit.> to define the vector of win scores w and the vector of loss scores l through a variant of the Katz centrality equation: w =(1-α A)^-1 k^out, l =(1-α A)^-1 k^in, where α∈(0,1) is a parameter of the method. The vector s of player scores is defined as the win-loss differential s=w-l. To understand the meaning of the win score, we use again the geometric series of matrices and obtain w_i=∑_jA_ji+α ∑_j,kA_kj A_ji+α^2 ∑_j,k,lA_kj A_jl A_li +𝒪(α^3). From this equation, we see that the first contribution to w_i is simply the total number of wins of player i. The successive terms represent “indirect wins” achieved by player i. For example, the 𝒪(α) term represents the total number of wins of the players beaten by player i. An analogous reasoning applies to the loss score l. Based on assumptions similar to those of the win-lose score, the PageRank centrality metric has been applied to rank sport competitors by Radicchi <cit.>, leading to the Tennis Prestige Score [<http://tennisprestige.soic.indiana.edu/>]. A time-dependent generalization of the win-lose score will be presented in paragraph <ref>. §.§.§ PageRank The PageRank centrality metric was introduced by Brin and Page <cit.> with the aim of ranking web pages in the Web graph, and there is general agreement on the essential role played by this metric in the outstanding success of Google's Web search engine <cit.>. For a directed network, the vector s of PageRank scores is defined by the equation s=α P s+(1-α) v, where P_ij=A_ij/k^out_j is the network's transition matrix, v is called the teleportation vector, and α is called the teleportation parameter.
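In practice, Eq. (<ref>) is typically solved by simple fixed-point iteration. The following Python sketch (a minimal illustration of ours, assuming the convention that A_ij=1 denotes an edge from j to i, uniform teleportation v=e/N, and the k^out=1 treatment of dangling nodes discussed below) implements this procedure:

import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    # Iterates s = alpha * P s + (1 - alpha) * v until convergence.
    N = A.shape[0]
    kout = A.sum(axis=0).astype(float)  # outdegree of node j is the j-th column sum
    kout[kout == 0] = 1.0               # one possible treatment of dangling nodes
    P = A / kout                        # column-wise division: P_ij = A_ij / k_j^out
    v = np.ones(N) / N                  # uniform teleportation vector
    s = v.copy()
    while True:
        s_new = alpha * P.dot(s) + (1 - alpha) * v
        if np.abs(s_new - s).sum() < tol:
            return s_new
        s = s_new

Since α<1, the iteration is a contraction and its convergence is guaranteed.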
A uniform teleportation vector (v_i=1/N for all nodes i) is the original choice by Brin and Page, and it is arguably the most common choice for v, although the benefits and drawbacks of other teleportation vectors have also been explored in the literature <cit.>. This equation is conceptually similar to the equation that defines Katz centrality, with the important difference that the adjacency matrix A has been replaced by the transition matrix P. From Eq. (<ref>), it follows that the vector of PageRank scores can be interpreted as the leading eigenvector of the matrix G=α P+(1-α) ve^⊤, where e=(1,…,1)^⊤. G is often referred to as the Google matrix. We refer to the review article by Ermann et al. <cit.> for a detailed review of the mathematical properties of the spectrum of G. Beyond its interpretation as an eigenvalue problem, there exists also an illustrative physical interpretation of the PageRank equation. In fact, Eq. (<ref>) defines the stationary state of a stochastic process on the network where a random walker located at a certain node can make two moves: (1) with probability α, jump to another node by following a randomly chosen edge starting at the node where the walker is located; (2) with probability 1-α, “teleport” to a randomly chosen node. In other words, Eq. (<ref>) can be seen as the stationary equation of the process described by the following master equation: s^(n+1)=α P s^(n)+(1-α) v. The analogy of PageRank with physical diffusion on the network is probably one of the properties that have most stimulated physicists' interest in the algorithm. It is important to notice that in most real-world directed networks (such as the Web graph), there are nodes (referred to as dangling nodes <cit.>) without outgoing edges. These nodes with zero outdegree receive score from other pages but do not redistribute it to other nodes. In Equation (<ref>), the existence of dangling nodes makes the transition matrix ill-defined, as its elements corresponding to transitions from the dangling nodes involve a division by zero. There are several strategies for dealing with dangling nodes. For example, one can remove them from the system – this removal is unlikely to significantly affect the ranking of the remaining nodes as, by definition, they receive no score from the dangling nodes. One can also artificially set k^out=1 for the dangling nodes, which does not affect the score of the other nodes <cit.>. Another strategy is to replace the ill-defined elements of the transition matrix with uniform entries; in other words, the transition matrix is redefined as P_ij=A_ij/k^out_j if k_j^out>0, and P_ij=1/N if k_j^out=0. We direct the interested reader to the survey article by Berkhin <cit.> for a presentation of possible alternative strategies to account for the dangling nodes. Using the Perron-Frobenius theorem, one can show that the existence and uniqueness of the solution of Eq. (<ref>) is guaranteed if α∈[0,1) <cit.>. We assume α to be in the range (0,1) in the following. The solution of Eq. (<ref>) reads s=(1-α) (1-α P)^-1 v. Similarly to what we did for Eq. (<ref>), s can be expanded by using the geometric series to obtain s=(1-α) ∑_k=0^∞α^k P^k v. As was the case for Katz centrality, the PageRank score of a given node is determined by all the paths that pass through that node. The teleportation parameter α controls the exponential damping of longer paths.
For small α (but larger than zero), long paths give a negligible contribution and – if outdegree fluctuations are sufficiently small – the ranking by PageRank approximately reduces to the ranking by indegree <cit.>. When α is close to (but smaller than) one, long paths give a substantial contribution to node score. While there is no universal criterion for choosing α, a number of studies (see <cit.> for example) have pointed out that large values of α may lead to a ranking that is highly sensitive to small perturbations of the network's structure, which suggests <cit.> that values around 0.5 should be preferred to the original value set by Brin and Page (α=0.85). An instructive interpretation of α comes from viewing the PageRank algorithm as a stochastic process on the network (Eq. (<ref>)). At each iterative step, the random walker has to decide whether to follow a network edge or to teleport to a randomly chosen node. On average, the length of the network paths covered by the walker before teleporting to a randomly chosen node is given by <cit.> ⟨ l⟩_RW = (1-α) ∑_k=0^∞ k α^k = α/(1-α), which is exactly the ratio between the probability of following a link and the probability of random teleportation. For α=0.85, we obtain ⟨ l⟩_RW=5.67, which corresponds to following roughly six edges before teleporting to a random node. Some researchers ascribe PageRank's success to the fact that the algorithm provides a reasonable model of the real behavior of Web users, who surf the Web both by following the hyperlinks that they find in the webpages and – perhaps when they get bored or stuck – by restarting their search (teleportation). For α=0.5, we obtain ⟨ l⟩_RW=1. This choice of α may better reflect the behavior of researchers on the academic network of papers (one may feel compelled to check the references in a work of interest, but rarely the references within one of the referred papers). Chen et al. <cit.> use α=0.5 for a citation network, motivated by the finding that half of the entries in the reference list of a typical publication cite at least one other article in the same reference list, which might further indicate that researchers tend to cover paths up to length one when browsing the academic literature. §.§.§ PageRank variants PageRank is built on three basic ingredients: the transition matrix P, the teleportation parameter α, and the teleportation vector v. The majority of PageRank variants are still based on Eq. (<ref>) and modify one or more of these three elements. In this paragraph, we focus on PageRank variants that do not explicitly depend on edge time-stamps. We present three such variants; the reader interested in further variations is referred to the review article by Gleich <cit.>. Time-dependent variants of PageRank and Katz centrality will be the focus of Section <ref>. Reverse PageRank The reverse-PageRank <cit.> score vector for a network described by the adjacency matrix A is defined as the PageRank score vector for the network described by the transpose A^⊤ of A. In other words, reverse-PageRank scores flow in the opposite edge direction with respect to PageRank scores. Reverse PageRank has also been referred to as CheiRank in the literature <cit.>. In general, PageRank and reverse PageRank provide different information on a node's role in the network: a large reverse-PageRank value means that the node can reach many other nodes in the network.
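In terms of the power-iteration sketch given above, reverse PageRank requires no new machinery: under the same assumptions, it is simply the PageRank of the transposed adjacency matrix, e.g., s_reverse = pagerank(A.T).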
More details can be found in the review article by Ermann et al. <cit.>. LeaderRank In the LeaderRank algorithm <cit.>, a “ground node” is added to the original network and connected with each of the network's N nodes by an outgoing and an incoming link. A random walk is performed on the resulting network; the LeaderRank score of a given node is given by the fraction of time the random walker spends on that node. With respect to PageRank, LeaderRank keeps the idea of performing a random walk on the network's links, but it does not feature any teleportation mechanism and, as a consequence, it is parameter-free. By adding the ground node, the algorithm effectively enforces a degree-dependent teleportation probability[From a node with original degree k_i, the probability of following one of the actual links is k_i / (k_i + 1), as opposed to the probability 1 / (k_i + 1) of moving to the ground node and subsequently to one of the network's N nodes chosen at random.]. Lu et al. <cit.> analyzed online social network data to show that the ranking by LeaderRank is less sensitive than that by PageRank to perturbations of the network structure and to malicious manipulations of the system. Further variants based on degree-dependent weights have been used to detect influential and robust nodes <cit.>. Furthermore, the ground-node notion of LeaderRank has also been generalized to bipartite networks in network-based recommender systems <cit.>. PageRank with “smart” teleportation In the original PageRank formulation, the teleportation vector v represents the amount of score that is assigned to each node by default, independently of the network structure. The original PageRank algorithm implements the simplest possible teleportation vector, v=e/N, where e_i=1 for all components i. This is not the only possible choice, and one might instead assume that higher-degree nodes should be given a higher baseline score than obscure nodes. This thesis has been referred to as “smart” teleportation by Lambiotte and Rosvall <cit.>, and it can be implemented by setting v_i∝ k^in_i. The smart-teleportation PageRank score can still be expressed in terms of network paths through Eq. (<ref>); differently from PageRank, the zero-order contribution to a given node's smart-teleportation PageRank score is its indegree. The ranking by smart-teleportation PageRank turns out to be remarkably more stable than that by the original PageRank with respect to variations in the teleportation parameter α <cit.>. A detailed comparison of the two rankings, as well as a discussion of alternative teleportation strategies, can be found in <cit.>. Pseudo-PageRank It is worth mentioning that PageRank variants are sometimes built on a different definition of node score, based on a column sub-stochastic matrix[A matrix Q is column sub-stochastic if and only if ∑_iQ_ij≤ 1 for all columns j.] Q and a non-normalized additive vector f (f_i ≥ 0). The corresponding equation that defines the vector s of node scores is s=α Q s+f. In agreement with the review by Gleich <cit.>, we refer to the problem of solving Eq. (<ref>) as the pseudo-PageRank problem. One can prove that the pseudo-PageRank problem is equivalent to a PageRank problem (theorem 2.5 in <cit.>). More precisely, let y be the solution of a pseudo-PageRank system with parameter α, column sub-stochastic matrix Q, and additive vector f.
If we define v := f /(e^⊤f) and x := y /(e^⊤y), then x is the solution of a PageRank system with teleportation parameter α, stochastic matrix P = Q + vc^⊤, and teleportation vector v, where c^⊤ = e^⊤ - e^⊤Q is the correction vector needed to make Q column-stochastic. The (unique) solution y of Eq. (<ref>) thus inherits the properties of the corresponding PageRank solution x – the two solutions y and x only differ by a uniform normalization factor. Since the PageRank and pseudo-PageRank problems are equivalent, we use the two descriptions interchangeably in the following. §.§.§ HITS algorithm The HITS (Hyperlink-Induced Topic Search) algorithm <cit.> is a popular eigenvector-based ranking method originally aimed at ranking webpages in the WWW. In a unipartite directed network, the HITS algorithm assigns two scores to each node in a self-consistent fashion. The score h – referred to as the node hub-centrality score – is large for nodes that point to many authoritative nodes. The other score a – referred to as the node authority-centrality score – is large for nodes that are pointed to by many hubs. The corresponding equations for the node scores a and h read a =α A h, h =β A^⊤ a, where α and β are parameters of the method. The previous equations can be rewritten as A A^⊤ a =λ a, A^⊤ A h =λ h, where λ=(αβ)^-1. The resulting score vectors a and h are thus eigenvectors of the matrices A A^⊤ and A^⊤ A, respectively. The HITS algorithm has been used, for example, to rank publications in citation networks by the literature search engine CiteSeer <cit.>. In the context of citation networks, it is natural to identify topical reviews as hubs, since they contain many references to influential papers in the literature. A variant of the HITS algorithm where a constant additive term is added to both node scores is explored by Ng et al. <cit.> and results in a more stable ranking of the nodes with respect to small perturbations of the network structure. An extension of HITS to bipartite networks <cit.> is presented in paragraph <ref>. §.§ A case study: node centrality in the Zachary karate club network In this subsection, we illustrate the differences between the above-discussed node centrality metrics on the example of the small Zachary karate club network <cit.>. The Zachary karate club network captures the friendships between the members of a US karate club (see Figure <ref> for a visualization). Interestingly, the club split into two parts as a result of a conflict between its instructor (node 1) and its administrator (node 34); the network is thus often used as a test case for community-detection methods <cit.>. The rankings of the network's nodes produced by various node centrality metrics are shown in Figure <ref>. The first thing to note here is that nodes 1 and 34 are correctly identified as central to the network by all metrics. The k-shell metric produces integer values distributed over a limited range (in this network, the smallest and largest k-shell values are 1 and 4, respectively), which leads to many nodes obtaining the same rank (a highly degenerate ranking). While most nodes are ranked approximately the same by all metrics, there are also nodes for which the metrics produce widely disparate results. Node 8, for example, is ranked high by k-shell but low by betweenness. An inspection of the node's direct neighborhood reveals why this is the case: node 8 is connected to the high-degree nodes 1, 2, 4, and 14, which directly ensures its high k-shell value of 4.
At the same time, those neighboring nodes are themselves connected, which creates shortcuts circumventing node 8 and implies that node 8 lies on very few shortest paths. Node 10 has low degree and thus also a low PageRank score (among the evaluated metrics, these two have the highest Spearman's rank correlation). By contrast, its closeness is comparatively high, as the node is centrally located (e.g., the two central nodes 1 and 34 can be reached from node 10 in two steps and one step, respectively). Finally, while PageRank and eigenvector centrality are closely related and their values are quite closely correlated (Pearson's correlation 0.89), they still produce rather dissimilar rankings (their Spearman's correlation of 0.68 is the lowest among the evaluated metrics). §.§ Static ranking algorithms in bipartite networks There are many systems that are naturally represented by bipartite networks: users are connected with the content that they consumed online, customers are connected with the products that they purchased, scientists are connected with the papers that they authored, and many others. In a bipartite network, two groups of nodes are present (such as the customer and product nodes, for example) and links exist only between the two groups, not within them. The current state of a bipartite network can be captured by the network's adjacency matrix B, whose element B_iα is one if node i is connected to node α and zero otherwise. Note that, to highlight the difference between the two groups of nodes, we use Latin and Greek letters, respectively, to label them. Since a bipartite network's adjacency matrix is a different mathematical object from a monopartite network's adjacency matrix, we label them differently as B and A, respectively. While A is by definition a square matrix, B in general is not. Due to this difference, the adjacency matrix of a bipartite network is sometimes referred to as the incidence matrix in graph theory <cit.>. Since bipartite networks are composed of two kinds of nodes, one might be interested in scoring and ranking the two groups of nodes separately. Node-scoring metrics for bipartite networks thus usually give two vectors of scores, one for each kind of node, as output. In this paragraph, we present three ranking algorithms (co-HITS, the method of reflections, and the fitness-complexity algorithm) specifically aimed at bipartite networks, together with their possible interpretations in real socio-economic networks. Before presenting the metrics, we stress that neglecting the connections between nodes of the same kind can lead to a considerable loss of information. For example, many bipartite networks are embedded in a social network of the participating users. In websites like <Digg.com> and <Last.fm>, users can consume content and at the same time they can also select other users as their friends and potentially be influenced by their friends' choices <cit.>. Similarly, the scholarly network of papers and their authors is naturally influenced by personal acquaintances and professional relationships among scientists <cit.>. In addition, more than two layers are often necessary to properly account for different types of interactions between the nodes <cit.>.
We focus here on the settings where a bipartite network representation of the input data provides sufficiently insightful results, and refer to <cit.> for representative examples of ranking algorithms for multilayer networks. §.§.§ Co-HITS algorithm Similarly to the HITS algorithm for monopartite networks, the co-HITS algorithm <cit.> works with two kinds of scores. Unlike HITS, co-HITS assigns one kind of score to the nodes in one group and the other kind of score to the nodes in the other group; each node is thus given one score. Denoting the two groups of nodes as 1 and 2, respectively, and their respective score vectors as x={x_i} and y={y_α}, the general co-HITS equations take the form x_i= (1-λ_1)x_i^0 + λ_1 ∑_α w_iα^21 y_α, y_α = (1-λ_2)y_α^0 + λ_2 ∑_i w_α i^12 x_i. Here λ_1 and λ_2 are analogs of PageRank's teleportation parameter, and x_i^0 and y_α^0 provide baseline, in general node-specific, scores that are given to all nodes regardless of the network topology. The summation terms represent the redistribution of scores from group 2 to group 1 (through w_iα^21) and vice versa (through w_α i^12). The transition matrices w^21 and w^12 as well as the baseline score vectors x^0 and y^0 are column-normalized, which implies that the final scores can be obtained by iterating the above-specified equations without further normalization. Similarly to HITS, the equations for x_i and y_α can be combined to obtain a set of equations where only the node scores of one node group appear. Connections between the original iterative framework and various regularization schemes, where node scores are defined through optimization problems motivated by the co-HITS equations, are explored by Deng et al. <cit.>. One of the studied regularization frameworks is eventually found to yield the best performance in a query-suggestion task when applied to real data. §.§.§ Method of reflections The method of reflections (MR) was originally devised by Hidalgo and Hausmann to quantify the competitiveness of countries and the complexity of products based on the network of international exports <cit.>. Although the metric can be applied to any bipartite network, in order to present the algorithm, we use the original terminology where countries are connected with the products that they export. The method of reflections is an iterative algorithm and its equations read k_i^(n)=(1/k_i)∑_αB_iα k_α^(n-1), k_α^(n)=(1/k_α)∑_i B_iα k_i^(n-1), where k_i^(n) is the score of country i at iteration step n, and k_α^(n) is the score of product α at step n. Both scores are initialized with node degree (k_i^(0)=k_i and k_α^(0)=k_α). In the original method, a threshold value is set, and when the total change of the scores is smaller than this value, the iterations stop. The choice of the threshold is important because the scores converge to a trivial fixed point <cit.>. The threshold has to be large enough so that rounding errors do not exceed the differences among the scores, as discussed in <cit.>. An eigenvector-based definition was given later by Hausmann et al. <cit.>, which yields the same results as the original formulation and has the advantage of making the threshold choice unnecessary. Another drawback of the iterative procedure is that, while one might expect subsequent iterations to “refine” the information provided by this metric, Cristelli et al. <cit.> point out that the iterations of the metric shrink information instead of refining it.
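Despite these caveats, the iterative scheme itself is compact. The following Python sketch (illustrative only; it uses a fixed number of iterations in place of the convergence threshold discussed above and assumes that no node is isolated, so that no division by zero occurs) implements Eq. (<ref>):

import numpy as np

def method_of_reflections(B, n_iter=2):
    # B: binary country-product export matrix (countries on rows, products on columns).
    kc = B.sum(axis=1).astype(float)   # country degree (diversification), k_i
    kp = B.sum(axis=0).astype(float)   # product degree (ubiquity), k_alpha
    kc_n, kp_n = kc.copy(), kp.copy()  # scores initialized with node degrees
    for _ in range(n_iter):
        # both updates use the scores from the previous iteration step
        kc_n, kp_n = B.dot(kp_n) / kc, B.T.dot(kc_n) / kp
    return kc_n, kp_n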
Along similar lines, Mariani et al. <cit.> noticed that stopping the computation after two iterations maximizes the agreement between the ranking of countries by the MR score and their ranking by importance for the structural stability of the system. While the country scores produced by the MR have been shown to provide better predictions of the economic growth of countries than traditional economic indicators <cit.>, both the economic interpretation of the metric and its use as a predictive tool have raised some criticism <cit.>. The use of this metric as a predictor of GDP will be discussed in paragraph <ref>. §.§.§ Fitness-complexity metric Similarly to the method of reflections, the fitness-complexity (FC) metric aims to simultaneously measure the competitiveness of countries (referred to as country fitness) and the level of sophistication of products (referred to as product complexity) based on international trade data <cit.>. The basic thesis of the algorithm is that competitive countries tend to diversify their export basket, whereas sophisticated products tend to be exported only by the most diversified (and thus the most fit) countries. The rationale of this thesis is that the production of a given good requires a certain set of diverse technological capabilities. More complex products require more capabilities to be produced and, for this reason, they are less likely to be produced by developing countries whose productive system has a limited range of capabilities. This idea is described in more detail and supported by a toy model in <cit.>. The metric's assumptions can be represented by the following iterative set of equations: F_i^(n) = ∑_αB_iα Q_α^(n-1), Q_α^(n) = 1/(∑_i B_iα/F_i^(n-1)), where F_i and Q_α represent the fitness of country i and the complexity of product α, respectively. After each iterative step n, both sets of scores {F_i^(n)} and {Q_α^(n)} are further normalized by their respective average values. To understand the effects of the non-linearity on the product score, consider a product α_1 with two exporters i_1 and i_2 whose fitness values are 0.1 and 10, respectively. Before the scores are normalized, product α_1 achieves a score of 0.099, which is largely determined by the score of the least-fit country. By contrast, a product α_2 that is only exported by country i_2 achieves a score of 10, which is much higher than the score of product α_1. This simple example illustrates well the economic interpretation of the metric. First, if there is a low-score country that can export a given product, the complexity level of this product is likely to be low. By contrast, if only high-score countries are able to export the product, the product is presumably difficult to produce and should thus be assigned a high score. By replacing the 1/F_i terms in (<ref>) with 1/F_i^γ (γ>0), one can tune the role of the least-fit exporter in determining the product score. As shown in <cit.>, when the exponent γ increases, the ranking of the nodes better reproduces their structural importance, but at the same time it becomes more sensitive to noisy data. Variants of the algorithm have been proposed in order to penalize more heavily the products that are exported by low-fitness countries <cit.> and to improve the convergence properties of the algorithm <cit.>.
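To make the iterative scheme explicit, the following Python sketch (again illustrative only; it assumes that every product has at least one exporter, so that no division by zero occurs) implements Eq. (<ref>) together with the normalization described above:

import numpy as np

def fitness_complexity(B, n_iter=200):
    # B: binary country-product export matrix (countries on rows, products on columns).
    n_countries, n_products = B.shape
    F = np.ones(n_countries)  # country fitness, F_i^(0) = 1
    Q = np.ones(n_products)   # product complexity, Q_alpha^(0) = 1
    for _ in range(n_iter):
        F_new = B.dot(Q)                # F_i = sum_alpha B_i,alpha * Q_alpha
        Q_new = 1.0 / B.T.dot(1.0 / F)  # Q_alpha = 1 / sum_i (B_i,alpha / F_i)
        F = F_new / F_new.mean()        # normalize by the average value
        Q = Q_new / Q_new.mean()
    return F, Q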
Importantly, the metric has been shown <cit.> to outperform other existing centrality metrics (such as degree, the method of reflections, PageRank, and betweenness centrality) in ranking the nodes according to their importance for the structural stability of economic and ecological networks. In particular, in the case of ecological plant-pollinator networks <cit.>, the “fitness score” F of pollinators can be interpreted as their importance, and the “complexity score” Q of plants as their vulnerability. When applied to plant-pollinator networks, the fitness-complexity algorithm (referred to as MusRank by Dominguez et al. <cit.>) reflects the idea that important insects pollinate many plants, whereas vulnerable plants are pollinated only by the most diversified insects (“generalists” in the language of biology). §.§ Rating-based ranking algorithms on bipartite networks In the current computerized society, where individuals separated by hundreds or thousands of kilometers who have never met each other can easily engage in mutual interactions, reputation systems are crucial for creating and maintaining the level of trust among the participants that is necessary for the system to perform well <cit.>. For example, buyers and sellers in online auction sites are asked to evaluate each other's behavior, and the collected information is used to compute their trustworthiness scores. User feedback typically takes the form of ratings on a given rating scale, such as the common 1-5 star system where 1 star and 5 stars represent the worst and the best possible rating, respectively (this scale is employed by major e-commerce sites such as Amazon and Netflix, for example). Reputation systems have been shown to help buyers avoid fraud <cit.>, and sellers with a high reputation fare on average better than sellers with a low reputation <cit.>. Note that the previously discussed PageRank algorithm and the other centrality metrics discussed in Section <ref> also represent particular ways of assessing the reputation of nodes in a network. In that case, however, no explicit evaluations are required, because links between nodes of the network are interpreted as implicit endorsements of the target nodes by the source nodes. The most straightforward way to aggregate the ratings collected by a given object is to compute their arithmetic mean (this method is referred to as mean below). However, such direct averaging is rather sensitive to noisy information (provided by users who significantly differ from the prevailing opinion about the object) and to manipulation (ratings intended to distort the resulting ranking of objects) <cit.>. To enhance the robustness of the results, one usually introduces a reputation system where the reputation of users is determined along with the ranking of objects. Ratings by little-reputed users are then assumed to have a potentially adverse effect on the final results, and they are thus given correspondingly low weights, which helps to limit their impact on the system <cit.>. A particularly simple reputation system, iterative refinement (IR), was introduced by Laureti et al. <cit.>. In IR, a user's reputation score is inversely proportional to the difference between the ratings given by this user and the estimated quality values of the corresponding objects. The estimates of user reputation and object quality are iterated until a stable solution is found.
We denote the rating given by user i to object α as r_iα, and the number of ratings given by user i as k_i=∑_α B_iα, where B_iα are the elements of the bipartite network's adjacency matrix and, analogously, k_α=∑_i B_iα is the number of ratings received by object α. The above-mentioned arithmetic mean can thus be represented as Q_α= ∑_i r_iαB_iα / k_α, where the ratings of all users are assigned the same weight. The iterative refinement algorithm <cit.> assigns users weights w_i and estimates the quality of object α as Q_α= (∑_i w_i r_iα B_iα)/(∑_i w_iB_iα), where the weights are computed as w_i = ((1/k_i)∑_α B_iα(r_iα - Q_α)^2 + ε)^-β. Here β≥0 controls how strongly users are penalized for giving ratings that differ from the estimated object quality, and ε is a small parameter that prevents the user weight from diverging in the (unlikely) case when r_iα=Q_α for all objects evaluated by user i. Equations (<ref>) and (<ref>) represent an interconnected system of equations that can be solved, similarly to HITS and PageRank, by iteration (the initial user weights are usually assumed to be identical, w_i=1). When β=0, IR simplifies to the arithmetic mean. As noted by Yu et al. <cit.>, while β = 1/2 provides better numerical stability of the algorithm as well as translational and scale invariance, β = 1 is equivalent to a maximum-likelihood estimate of object quality assuming that the individual “rating errors” of a user are normally distributed with an unknown user-dependent variance. As shown by Medo and Wakeling <cit.>, the IR's efficacy is importantly limited by the fact that ratings in most real settings are confined to a small range of integer values. Yu et al. <cit.> generalize the IR by assigning a weight to each individual rating. There are two further variants of IR. The first variant is based on computing a user's weight as the correlation between the user's ratings and the quality estimates of the corresponding objects <cit.>. The second variant <cit.> is the same, but it suppresses the weight of users who have rated only a few items, and it further multiplies the estimated object quality by max_i∈𝒰_α w_i (i.e., the highest weight among the users who have rated a given object α). The goal of this modification is to suppress the objects that have been rated by only a few users (if an item is rated well by one or two users, it can appear at the top of the object ranking, which is in most circumstances not a desired outcome). § THE IMPACT OF NETWORK EVOLUTION ON STATIC CENTRALITY METRICS Many systems grow (or otherwise change) in time, and it is natural to expect that this growth leaves an imprint on the metrics that are computed on static snapshots of these systems. This imprint often takes the form of a time bias that can significantly influence the results obtained with a metric: nodes may score low or high largely due to the time at which they entered the system. For example, a paper published a few months ago is destined to rank badly by citation count because it has not yet had the time to show its potential by attracting a corresponding number of citations. In this section, we discuss a few examples of systems where various forms of time bias exist and, importantly, discuss how the bias can be dealt with or even entirely removed. §.§ The first-mover advantage in preferential attachment and its suppression Node indegree, introduced in paragraph <ref>, is the simplest node centrality metric in a directed network. It relies on the assumption that if a node has been of interest to many nodes that have linked to it, the node itself is important.
Node out-degree, by contrast, reflects the activity of a node and thus it cannot be generally interpreted as the node's importance. Nevertheless, node in-degree too becomes problematic as a measure of node importance when not all nodes are created equal. Perhaps the best example of this is a growing network where nodes appear gradually. Early nodes then enjoy a two-fold advantage over late nodes: (1) they have more time to attract incoming links, (2) in the early phase of the network's growth, nodes face less competition because there are fewer nodes to choose from. Note that any advantage given to a node is often further magnified by preferential attachment <cit.>, also dubbed cumulative advantage, which is in effect in many real systems <cit.>. We assume now the usual preferential attachment (PA) setting where at every time step, one node is introduced in the system and establishes directed links to a small number m of existing nodes. The probability of choosing node i with in-degree k_i is proportional to k_i + C. Nodes are labeled with their appearance time; node i has been introduced at time i. The continuum approximation (see <cit.> and Section VII.B in <cit.>) is now the simplest approach to study the evolution of the mean node in-degree k_i(n) [here n is the time counter, which equals the current number of nodes in the network]. The change of k_i(n) in time step n is equal to the probability of receiving a link at that step, implying

d k_i(n)/d n = m (k_i(n) + C) / ∑_j [k_j(n) + C],

where the multiplication by m is due to m links being introduced in each time step. Since each of those links increases the in-degree of one node by one, the denominator follows a deterministic trajectory and we can write ∑_j [k_j(n) + C] = mn + Cn. The resulting differential equation for k_i(n) can then be easily solved to show that the expected in-degree of node i after introducing n nodes, k_i(n), is proportional to C[(i/n)^-1/(1+C/m)-1]. Here t:=i/n can be viewed as the “relative birth time” of node i. Newman <cit.> used the master equation formalism <cit.> to find the same result as well as to compute higher moments of k_i(n), and the divergence of k_i(n)=:k(t) for t→0 (corresponding to early papers in the thermodynamic limit of system size) has been dubbed the “first-mover advantage”. The generally strong dependence of k(t) on t for any network size n implies that only the first handful of nodes have the chance to become exceedingly popular (see Figure <ref> for an illustration). To counter the strong advantage of early nodes in systems under the influence of the preferential attachment mechanism, Newman <cit.> suggests quantifying a node's performance by computing the number of standard deviations by which its in-degree exceeds the mean for the node's appearance time (a quantity of this kind is commonly referred to as the z-score; see paragraph <ref> for a general discussion of this approach). The z-score can thus be used to identify the nodes that “beat” PA and attract more links than one would expect based on their time of arrival. According to the intermediate results provided by Newman <cit.>, the score of a paper that appeared at time t=i/n and attracted k links reads

z(k, t) = (k - μ(t))/σ(t) = (k - C(t^-β-1)) / [C t^-2β(1-t^β)]^1/2,

where β = m / (m + C).
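To make the formula concrete, the following sketch (with illustrative parameter values and function names of our own choosing) simulates growth under preferential attachment and contrasts the ranking by raw in-degree with the ranking by the model-based z-score; under the model assumptions above, high z-scores are no longer monopolized by the earliest nodes.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pa(n=2000, m=3, C=1.0):
    """Grow a network where each new node links to m existing nodes chosen
    with probability proportional to (in-degree + C); returns in-degrees."""
    k = np.zeros(n)
    for new in range(1, n):
        p = (k[:new] + C) / (k[:new] + C).sum()
        targets = rng.choice(new, size=min(m, new), replace=False, p=p)
        k[targets] += 1
    return k

def z_score(k, t, m=3, C=1.0):
    """Model-based z-score for in-degree k and relative birth time t = i/n."""
    beta = m / (m + C)
    mu = C * (t ** (-beta) - 1.0)
    sigma = np.sqrt(C * t ** (-2.0 * beta) * (1.0 - t ** beta))
    return (k - mu) / sigma

k = simulate_pa()
t = (np.arange(len(k)) + 1.0) / (len(k) + 1.0)   # relative birth times in (0, 1)
print("mean birth time, top 20 by in-degree:", t[np.argsort(-k)[:20]].mean())
print("mean birth time, top 20 by z-score: ", t[np.argsort(-z_score(k, t))[:20]].mean())
```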
Parameters m and C are to be determined from the data (in the case of the APS citation data analyzed by Newman <cit.>, the best fit of the overall degree distribution is obtained with C=6.38 and m=22.8, leading to β=0.78; note that the value of C is substantially larger than the often-assumed C=1). While this is not clear from the published manuscript <cit.>, the preprint version available at <https://arxiv.org/abs/0809.0522> makes it clear that paper z-scores are actually obtained by evaluating μ(t) and σ(t) empirically by considering “a Gaussian-weighted window of width 100 papers around the paper of interest”. The use of empirically observed values of μ(t) and σ(t) rather than the values expected under the basic preferential attachment model is preferable due to the limitations of the model in describing the growth of real citation networks. In particular, the growth of a network under preferential attachment has been shown to depend heavily and permanently on the initial condition <cit.>, and the original preferential attachment model has been extended by introducing node fitness <cit.> and node aging in particular <cit.> as two essential additional driving forces that shape the network growth. Differently from Eq. (<ref>), which relies on rather simplistic model assumptions, the z-score built on empirical quantities is independent of the details of the network's evolution. There is just one parameter to determine: the size of the window used to compute μ(t) and σ(t). The resulting z-score was found to perform well in the sense of selecting nodes with arrival times rather uniformly distributed over the system's lifespan, and the existence of those papers is said to provide “a hopeful sign that we as scientists do pay at least some attention to good papers that come along later” <cit.>. The selected papers were revisited by Newman <cit.> five years later and were found to outperform randomly drawn control groups with the same prior citation counts. Since a very recent paper can score high on the basis of a single citation, a minimum citation count of five has been imposed in the analysis. An analogous approach is used in the following paragraph to correct the bias of PageRank. A general discussion of rescaling techniques for static centrality metrics is presented in Section <ref>.
§.§ PageRank's temporal bias and its suppression
As explained in Section <ref>, the PageRank score of a node can be interpreted as the probability that a random walker is found at the node during its hopping on the directed network. The network's evolution strongly influences the resulting score values. For example, a recent node that has had little time to attract incoming links is likely to achieve a low score that improves once the node is properly acknowledged by the system. The corresponding bias of PageRank against recent nodes has been documented in World Wide Web data, for example <cit.>. Such bias can potentially override the useful information generated by the algorithm and render the resulting scores useless or even misleading. Understanding PageRank's biases and the situations in which they appear is therefore essential for the use of the algorithm in practice. While PageRank's bias against recent nodes can be understood as a natural consequence of these nodes still lacking the links that they will have the chance to collect in the future, structural features of the network can further pronounce the bias.
The network of citations among scientific papers provides a particularly striking example because a paper can only cite papers that have been published in the past. All links in the citation network thus aim back in time. This particular network feature makes it hard for random walkers on the network to reach recent nodes (the only way this can happen is through PageRank's teleportation mechanism) and ultimately results in a strong bias towards old nodes that has been reported in the literature <cit.>. Mariani et al. <cit.> parametrized the space of network features by assuming that the decay of node relevance (which determines the rate at which a node receives new incoming links) and the decay of node activity (which determines the rate at which a node creates new outgoing links) occur at generally different time scales θ_R and θ_A, respectively. Note that this parametrization is built on the fitness model with aging that has been originally developed to model citation data <cit.>. The main finding by Mariani et al. <cit.> is that when the two time scales are of similar order, the average time span of the links that aim forward and backward in time is approximately the same, and the random walk that underlies the PageRank algorithm can thus uncover some useful information (see Figure <ref> for an illustration). When θ_R≫θ_A, the aforementioned bias towards old nodes emerges and PageRank captures the intrinsic node fitness worse than the simple benchmark in-degree count. PageRank is similarly outperformed by node in-degree when θ_R≪θ_A, for the very opposite reason: PageRank then favors recent nodes over the old ones. Having found that the scores produced by the PageRank algorithm may be biased by node age, the natural question that emerges is whether the bias can be somehow removed. While several modifications of the PageRank algorithm have been proposed to attenuate the advantage of old nodes (they will be the main subject of section <ref>), a simple method to effectively suppress the bias is to compare each node's score with the scores of nodes of similar age, in the same spirit as the z-score for citation count introduced in the previous paragraph. This has been addressed in <cit.>, where the authors study the directed network of citations among scientific papers – a system where the bias towards old nodes is particularly strong[In the context of the model discussed in the previous paragraph, θ_A = 0 in the citation network because a paper can establish links to other papers only at the moment of its appearance; it thus holds that θ_R≫θ_A and the bias towards old nodes is indeed expected.]. Mariani et al. <cit.> propose to compute the PageRank score of all papers and then rescale the scores by comparing the score of each paper with the scores of other papers published shortly before and shortly after. In agreement with the findings presented by Parolo et al. <cit.>, the window of papers included in the rescaling of a paper's score is best defined on the basis of the number of papers in the window (the authors use a window of 1000 papers). Denoting the PageRank score of paper i as p_i, and the mean and standard deviation of the PageRank scores in the window around paper i as μ_i(p) and σ_i(p), respectively, the proposed rescaled score of node i reads

R_i(p) = (p_i - μ_i(p)) / σ_i(p).

Note that unlike the input PageRank scores, the rescaled score can also be negative; this happens when a paper's PageRank score is lower than the average for the papers published in the corresponding window.
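As a concrete illustration, the following minimal sketch (our own implementation, not the authors' code) rescales a vector of PageRank scores stored in order of paper appearance; the handling of the first and last half-window is one of several reasonable choices.

```python
import numpy as np

def rescale(scores, window=1000):
    """Return R_i = (p_i - mu_i) / sigma_i, where mu_i and sigma_i are computed
    over a window of similar-age papers centered (where possible) on paper i."""
    n = len(scores)
    half = window // 2
    R = np.empty(n)
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        chunk = scores[lo:hi]
        R[i] = (scores[i] - chunk.mean()) / chunk.std()
    return R
```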
Quantities μ_i(p) and σ_i(p) depend on the choice of the window size; this dependence is, however, rather weak. Section <ref> provides more details on this new metric, while its application to the identification of milestone papers in scholarly citation data is presented in paragraph <ref>. In addition to these results and the results presented in <cit.>, we illustrate here the metric's ability to actually remove the time bias in the model data that were already used to produce Figure <ref>. The results are presented in Figure <ref>. Panel A shows that the average age of the top 100 nodes as ranked by R(p) is essentially independent of the model parameters, especially when compared with the broad range of τ_top 100(p) in Figure <ref>B. Panel B compares the correlation between the rescaled PageRank and node fitness with that between PageRank and node fitness. In the region θ_R≈θ_A, where we have seen before that PageRank is not biased toward nodes of a specific age, there is no advantage to gain by using rescaled PageRank. There are nevertheless extended regions (the top left corner and the right side of the studied parameter range) where the original PageRank is heavily biased and the rescaled PageRank is thus able to better reflect the intrinsic node fitness. One can conclude that the proposed rescaling procedure removes the time bias in model data and that this removal improves the ranking of nodes by their significance. The bias of the citation count and PageRank in real data, as well as its removal by rescaled PageRank, is exemplified in Figure <ref>. The benefits of removing the time bias in real data are presented in Section <ref>.
§.§ Illusion of influence in social systems
We close with an example of an edge ranking problem where the task is to rank connections in a social network by the level of social influence that they represent. In the modern information-driven era, the study of how information and opinions spread in society is as important as it ever was. The research of social influence is very active, with networks playing an important role <cit.>. A common approach to assess the strength of social influence in a social network where every user is connected with their friends is based on measuring the probability that user i collects item α if f_iα of i's friends have already collected it <cit.>. Here f_iα is usually called exposure, as it quantifies how much a user has been exposed to an item (the underlying assumption is that each friend who has collected item α can tell user i about it). Note that the described setting can be effectively represented as a multilayer network <cit.> where users form a (possibly directed) social network and, at the same time, participate in a bipartite user-item network which captures their past activities. However, the described measurement is influenced by the preferential attachment that is present in many social and information networks. Even when no social influence takes place and the social network thus has no influence on the items collected by individual users, the probability that a user has collected item α is k_α / U, where U is the total number of users. Denoting the number of friends of user i as f_i (this is equal to the degree of user i in the social network), there are on average f_i k_α / U friends who have collected item α. At the same time, if preferential attachment is in effect in the system, the probability that user i himself collects item α is also proportional to the item degree k_α.
Positive correlation between the likelihood of collecting an item and exposure is therefore bound to exist even when no real social influence takes place. The normalized exposure

n_iα = max_t<t_iα( f_iα(t)/(f_i k_α(t)) k_min )

was proposed by Vidmer et al. <cit.> to solve this problem. Here k_α(t) is the degree of item α at time t, t_iα is the time when user i has collected item α (if user i has not collected item α at all, the maximum is taken over all values of t), and k_min is the minimum required item degree (without this constraint, items with low degree could reach a high normalized exposure through the normalization with k_α(t); such estimates based on a few events are noisy and thus of little use). Social influence measurements based on exposure and normalized exposure are compared on both real and model data by Vidmer et al. <cit.>. When standard exposure is used, real data show social influence even when they are randomized – a clear indication that the bias introduced by the presence of preferential attachment is too strong to be ignored. By contrast, the positive correlation between item collection probability and normalized exposure disappears when the input data are randomized. In parallel, when no social influence is built into the model data, item collection probability is correctly independent of normalized exposure, whereas it grows markedly with standard exposure. By taking time into account and explicitly factoring out the influence of preferential attachment, normalized exposure avoids the problems of the original exposure metric. In the future, this new metric could be used to rank items in order of the likelihood of actually being consumed by a given user.
§ TIME-DEPENDENT NODE CENTRALITY METRICS
In the previous section, we have presented important shortcomings of static metrics when applied to evolving systems, and score-rescaling methods to suppress the resulting biases. However, a direct rescaling of the final scores is not the only way to deal with the shortcomings of static metrics. In fact, a substantial amount of research has been devoted to ranking algorithms that explicitly include time in their defining equation. These time-dependent algorithms mostly aim at suppressing the advantage of old nodes and thus improving the opportunity for recent nodes to acquire visibility. Assuming that the network is unweighted, time-dependent ranking algorithms take as input the adjacency matrix A(t) of the network at a given time t together with information on node age and/or edge time-stamps. They can be classified into three (not all-inclusive) categories: (1) node-based rescaled scores (paragraph <ref>); (2) metrics that contain an explicit penalization (often of an exponential or a power-law form) for older nodes or older edges (paragraph <ref>); (3) metrics based on models of network growth that assume the existence of a latent fitness parameter, which represents a proxy for the node's success in the system (paragraph <ref>). Time-dependent generalizations of the static reputation systems introduced in paragraph <ref> are also discussed in this section (paragraph <ref>).
§.§ Striving for time-balance: Node-based time-rescaled metrics
As we have discussed in the previous section, network centrality metrics can be heavily biased by node age when applied to evolving networks. In other words, a large part of the variation in node score is explained by node age, which may be an undesirable property of a centrality score if we are interested in untangling the intrinsic fitness, or quality, of a node.
In this paragraph, we present metrics that explicitly require that the node score is not biased by node age. These metrics simply take the original static centrality scores as input and rescale them by comparing each node's score only with the scores of nodes of similar age. Rescaling indegree In section <ref>, we have seen (Fig. <ref>) that in real information networks, node indegree can be biased by age due to the preferential attachment mechanism. This effect can be particularly strong for citation networks, as discussed in <cit.> and in section <ref>. In <cit.>, the bias of the paper citation count (node in-degree) in the scientific paper citation network is removed by evaluating the mean μ_i(c) and standard deviation σ_i(c) of the citation count for papers published at a similar time as paper i. The systematic dependence of c_i on paper appearance time is then removed by computing the rescaled citation count

R_i(c) = (c_i-μ_i(c)) / σ_i(c).

Note that R_i(c) represents the z-score of paper i within its averaging window. Values of R_i(c) larger or smaller than zero indicate whether the paper is over- or under-performing, respectively, with respect to papers of similar age. There exists no principled criterion to choose the papers “published at a similar time” over which μ_i(c) and σ_i(c) are computed. In the case of citation data, one can constrain this computation to a fixed number, Δ, of papers published just before and just after paper i <cit.>, or weight paper scores with a Gaussian function centered on the focal paper <cit.>. A statistical test in ref. <cit.> shows that in the APS data, differently from the ranking by citation count, the ranking by rescaled citation count R(c) is not biased by paper age. A different rescaling equation has been used in <cit.>, where the citation count of each scientific paper i is divided by the average citation count of papers published in the same year and belonging to the same scientific domain as paper i. The resulting rescaled score, called c_f in <cit.>, has been found to follow a universal lognormal distribution – independently of paper field and age – in the Web of Science citation data. This finding has been challenged by subsequent studies <cit.>, which leaves open the question whether an unbiased ranking of academic publications can be achieved through a simple rescaling equation. Since this review mostly focuses on temporal effects, we refer the interested reader to the review article by Waltman <cit.> for details on the different field-normalization procedures introduced in the bibliometrics literature. As there are various possibilities for rescaling indegree, one could also devise a reverse-engineering approach: instead of using a rescaling equation and verifying whether it leads to an unbiased ranking of the nodes, one can require that the scores of papers of different age follow the same distribution and then infer the correct rescaling equation <cit.>. However, this procedure has only been applied (see <cit.> for details) to suppress the bias by field for papers published in the same year, and it remains unexplored for the time-bias suppression problem. Rescaling PageRank score Differently from indegree's bias towards old nodes, PageRank can develop a temporal bias towards either old or recent nodes (see paragraph <ref> and <cit.>). As shown in <cit.> and illustrated here in Figure <ref>, the algorithm's temporal bias can be suppressed with the equation

R_i(p) = (p_i-μ_i(p)) / σ_i(p),

which is analogous to Eq.
(<ref>) above; by assuming that the nodes are labeled in order of decreasing age, μ_i and σ_i are computed over a “moving” temporal window [i-Δ/2, i+Δ/2] centered on paper i. Also in the APS data, differently from the ranking by PageRank, the ranking by rescaled PageRank R(p) is not biased by paper age <cit.>. The performance of rescaled PageRank and rescaled indegree in identifying groundbreaking papers will be discussed in detail and compared with the performance of other ranking methods in paragraph <ref>. Vaccario et al. <cit.> generalized this rescaling procedure based on the z-score to suppress both the bias by age and the bias by field of scientific papers' PageRank scores. We emphasize that, in principle, the score of any structural centrality metric can be rescaled with an equation akin to Eq. (<ref>). The only parameter of the rescaled score is the size Δ of the temporal window used for the calculation of μ_i and σ_i; Mariani et al. <cit.> point out that too large Δ values would result in a ranking biased towards old nodes in a similar way as the ranking by PageRank[The ranking by PageRank score and the ranking by R(p) are the same for Δ=N.], whereas too small Δ values would result in a ranking heavily dependent on statistical fluctuations – a node could score high just by happening to have only low-degree papers in its small temporal window. One should always avoid both extremes when using a rescaled metric; however, a statistically-grounded guideline for choosing Δ is still lacking.
§.§ Metrics with explicit penalization for older edges and/or nodes
We now consider variants of degree (paragraph <ref>) and of eigenvector-based centrality metrics (paragraphs <ref>-<ref>) where edges and/or nodes are weighted according to their age directly in the equation that defines the node score. Hence, differently from rescaled metrics, which normalize a posteriori the scores of static centrality metrics, the metrics presented in this paragraph directly include time information in the score calculation.
§.§.§ Time-weighted degree and its use in predicting future trends
Indegree (or degree) is the simplest method to rank nodes in a network. As we have seen in section <ref>, node degree is biased by age in growing networks that exhibit preferential attachment. As a consequence, degree can fail to identify the nodes that will become popular in the future <cit.>, since node preferences may shift over time <cit.> and node attractiveness may decay over time <cit.>. One can consider a variant of degree where received links are weighted by a function of the edge age Δ t_ij:=t-t_ij, where t and t_ij denote the time at which the scores are computed and the time at which the edge from j to i was created, respectively. The weighted-degree score s_i(t) of node i at time t can be defined as

s_i(t) = ∑_j A_ij(t) f(t-t_ij),

where f(Δ t) is a function of the edge age Δ t_ij = t-t_ij. Zeng et al. <cit.> set f(Δ t)=1-λ Θ(Δ t - T_r), where Θ(·) denotes the Heaviside function – Θ(x) is equal to one if x≥ 0, zero otherwise – and T_r is a parameter of the method. This choice implies that the resulting score s^rec_i(t;T_r) (referred to as the Popularity-Based Predictor, PBP, in <cit.>) is a convex combination of node degree k_i(t) and recent degree increase Δ k_i(t,T_r)=∑_j A_ij Θ(T_r-(t-t_ij)),

s^rec_i(t;T_r)=k_i(t)-λ k_i(t-T_r)=(1-λ) k_i(t)+λ Δ k_i(t,T_r).

The parameter T_r specifies the length of the “time window” over which the recent degree increase is computed.
Note that s^rec_i reduces to node degree or to the recent degree increase for λ=0 or λ=1, respectively. A closely similar exponential penalization is considered by Zhou et al. <cit.>, leading to

s_i^exp(t)=∑_j A_ij(t) e^-γ(t-t_ij),

which is referred to as the Temporal-Based Predictor in <cit.>; here γ>0 is a parameter of the method that plays a role analogous to that of T_r above. Results obtained with the time-weighted degree scores s^rec and s^exp on data from Netflix, Movielens and Facebook show that: (1) s^rec with λ close to one performs better than degree in identifying the most popular nodes in the future <cit.>; (2) this performance is further improved by s^exp (Fig. 3 in <cit.>). Metric s^exp (albeit with a different parameter calibration) is also used in <cit.>, where it is referred to as the Retained Adjacency Matrix (RAM). s^exp is shown in <cit.> to outperform indegree, PageRank and other static centrality metrics in ranking scientific papers according to their future citation count increase and in identifying the top-cited papers in the future. The performance of s^exp is comparable to that of a time-dependent variant of Katz centrality, called the Effective Contagion Matrix (ECM), which is discussed below (paragraph <ref>). Another type of weighted degree is the long-gap citation count <cit.>, which measures the number of edges received by a node when it is at least T_min years old. Long-gap citation count has been applied to the network of citations between American movies and has been shown to be a good predictor for the presence of movies in the list of milestone movies edited by the National Film Registry (see <cit.> for details).
§.§.§ Penalizing old edges: Effective Contagion Matrix
The Effective Contagion Matrix is a modification of the classical Katz centrality metric where the paths that connect a node with another node are weighted not only according to their length, but also according to the ages of their links <cit.>. For a tree-like network (like citation networks), one defines a modified adjacency matrix R(t,γ) (referred to as the retained adjacency matrix by Ghosh et al. <cit.>) whose elements are R_ij(t,γ)=A_ij(t) γ^Δ t_ij, where Δ t_ij is the age of the edge j→ i at the time t when the scores are computed. The aggregate score s_i of node i is defined as

s_i(t)=∑_k=0^∞ α^k ∑_j=1^N [R(t,γ)^k]_ij.

According to this definition, paths of length k thus have a weight which is determined not only by α^k (as in Katz centrality), but also by the ages of the edges that compose the path. By using the geometric series of matrices, and denoting by e the vector whose components are all equal to one, one can rewrite Eq. (<ref>) as

s(t)=∑_k=0^∞ α^k R(t,γ)^k e = (1-α R(t,γ))^-1 e,

which results in

s_i(t)=α ∑_j [R(t,γ)]_ij s_j(t)+1=α ∑_j A_ij(t) γ^Δ t_ij s_j(t)+1,

which immediately shows that the score of a node is mostly determined by the scores of the nodes that recently pointed to it. The ECM score has been found <cit.> to outperform other metrics (degree, weighted indegree, PageRank, age-based PageRank and CiteRank – see below for CiteRank's definition) in ranking scientific papers according to their future citation count increase and in identifying papers that will become popular in the future; the second best-performing metric is the weighted indegree as defined in Eq. (<ref>). Ghosh et al. <cit.> also note that the agreement between the ranking and the future citation count increase can be improved by using the various metrics as features in a support vector regression model.
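A short sketch may clarify how these time-weighted scores can be computed in practice. The edge-direction convention, the toy parameter values, and the helper names below are our own assumptions; the linear solve for the ECM score requires α to be small enough for the geometric series to converge.

```python
import numpy as np

def exp_weighted_degree(edges, t_now, gamma=0.5):
    """Time-weighted degree s^exp: each received link is discounted by
    exp(-gamma * edge age). `edges` holds (source, target, timestamp) triples
    with integer node labels; an edge (j, i, t) means j points to i."""
    n = 1 + max(max(i, j) for i, j, _ in edges)
    s = np.zeros(n)
    for _, target, t in edges:
        s[target] += np.exp(-gamma * (t_now - t))
    return s

def ecm_scores(edges, t_now, alpha=0.1, gamma=0.9):
    """Effective Contagion Matrix scores obtained by solving s = alpha*R s + e,
    with R_ij = A_ij * gamma**(age of the edge j -> i)."""
    n = 1 + max(max(i, j) for i, j, _ in edges)
    R = np.zeros((n, n))
    for source, target, t in edges:
        R[target, source] = gamma ** (t_now - t)
    return np.linalg.solve(np.eye(n) - alpha * R, np.ones(n))

edges = [(1, 0, 2008), (2, 0, 2015), (3, 2, 2016)]   # a toy citation list
print(exp_weighted_degree(edges, t_now=2017))
print(ecm_scores(edges, t_now=2017))
```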
The analysis in <cit.> does not include TimedPageRank (described below); in future research, it will be interesting to compare the predictive power of all the existing metrics, and to compare it with the predictive power of network growth models <cit.>.
§.§.§ Penalizing old edges: TimedPageRank
Similarly to the methods presented in the previous paragraph, one can introduce time-dependent weights of edges in PageRank's definition. A natural way to enforce this idea was proposed by Yu et al. <cit.> in the form[Note that since ∑_j A_ij f(Δ t_ij)/k_j^out≤ 1 upon this definition, the vector of scores s is not normalized to one. Eq. (<ref>) thus defines a pseudo-PageRank problem, which can be put in a direct correspondence with a PageRank problem as defined in paragraph <ref> (see paragraph <ref> and <cit.> for more details).]

s_i(t)=c ∑_j A_ij(t) f(Δ t_ij) s_j(t)/k^out_j + 1-c,

where Δ t_ij denotes the age of the link between nodes i and j. Yu et al. <cit.> use Eq. (<ref>) with f(Δ t)=0.5^Δ t, and define node i's TimedPageRank score as the product between its score s_i determined by Eq. (<ref>) and a factor a(t-t_i)∈[0.5,1] which decays with node age t-t_i. A similar idea of an explicit aging factor to penalize older nodes was also implemented by Baeza-Yates <cit.>. Yu et al. <cit.> used scientific publication citation data to show that the top 30 papers by TimedPageRank are more cited in the following years than the top papers by PageRank.
§.§.§ Focusing on a temporal window: T-Rank, SARA
T-Rank <cit.> is a variant of PageRank whose original aim is to rank websites. The algorithm assumes that we are mostly interested in the events that happened in a particular temporal window of interest [t_1,t_2]. For this reason, the algorithm favors the pages that received many incoming links and were often updated within the temporal window of interest. Time-stamps are thus weighted according to their temporal distance from [t_1,t_2], leading to a “freshness” function f(t) which is equal to one if the link's creation time lies within the range [t_1,t_2], and smaller than one otherwise. If a timestamp t does not belong to the temporal window, its freshness f(t) monotonically decreases with its distance from the temporal window of interest. The freshness values of the timestamps are then used to compute the freshness of nodes, of links, and of page updates, which are the elements needed for the computation of the T-Rank scores. As the temporal window of interest can be chosen at will, the algorithm is flexible and can provide interesting insights into the web users' reactions after an unexpected event, such as a terror attack <cit.>. A simpler idea is to use only the links that belong to a certain temporal window to compute the score of a node. This idea has been applied to the citation network of scientific authors to compute an author-level score for a certain time window using a weighted variant of PageRank called the Science Author Rank Algorithm (SARA) <cit.>. The main finding by Radicchi et al. <cit.> is that the authors who won a Nobel Prize are better identified by SARA than by indices based on indegree (citation count). The dynamic evolution of the ranking position of physics researchers by SARA can be explored in the interactive Web platform <http://www.physauthorsrank.org/authors/show>.
§.§.§ PageRank with time-dependent teleportation: CiteRank
In the previous paragraphs, we have presented variants of classical centrality metrics (degree, Katz centrality, and PageRank) where edges are penalized according to their age.
For the PageRank algorithm, an alternative way to suppress the advantage of older nodes is to introduce an explicit time-dependent penalization of older nodes in the teleportation term. This idea has led to the CiteRank algorithm, which was introduced by Walker et al. <cit.> to rank scientific publications. The vector of CiteRank scores s can be found as the stationary solution of the following set of recursive linear equations[This definition is slightly different from that provided in <cit.> – the scores resulting from the definition adopted here are normalized to one and differ from those obtained with the definition in <cit.> by a uniform normalization factor.]

s_i^(n+1)(t)=c ∑_j A_ji(t) s_j^(n)(t)/k^out_j + (1-c) v(t,t_i),

where, denoting by t_i the publication date of paper i and by t the time at which the scores are computed, we defined

v(t,t_i)=exp(-(t-t_i)/τ) / ∑_j=1^N exp(-(t-t_j)/τ).

To choose the values of c and τ, the authors compare papers' CiteRank scores with papers' future indegree increase Δ k^in, and find c=0.5, τ=2.6 years as the optimal parameters. With this choice of parameters, the age distribution of the papers' CiteRank scores is in agreement with the age distribution of the papers' future indegree increase. In particular, both distributions feature a two-step exponential decay (Fig. <ref>). A theoretical analysis based on CiteRank in <cit.> suggests that the two-step decay of Δ k^in can be explained by two distinct and co-existing citation mechanisms: researchers cite a paper because they found it either directly or by following the references of a more recent paper. This example shows that well-designed time-dependent metrics are not only useful tools to rank the nodes, but can also shed light on the behavior of the agents in the system.
§.§.§ Time-dependent reputation algorithms
A reputation system ranks individuals by their reputation, from the most to the least reputed one, based on their past interactions or evaluations. Many reputation systems weight recent evaluations more than old ones and thus produce a time-dependent ranking. For example, the generic reputation formula presented by Sabater and Sierra <cit.> assigns evaluation W_i∈[-1, 1] (here -1, 0, and +1 represent an absolutely negative, neutral, and absolutely positive evaluation, respectively) a weight proportional to f(t_i, t), where t_i is the evaluation time and t is the time when the reputation is computed. The weight function f(t_i, t) should be chosen to favor evaluations that were made shortly before t; a simple example of such a function is f(t_i, t) = t_i / t. Sabater and Sierra <cit.> proceed by analyzing an artificial scenario where a user first behaves reliably until reaching a high reputation value and then starts to commit fraud. Thanks to assigning a high weight to recent actions, their reputation system is able to quickly reflect the change in the user's behavior and correspondingly lower the user's reputation. Instead of weighting the evaluations on the basis of the time when they were made, it is also possible to order the evaluations by their time stamps t(W_1) < t(W_2) < … < t(W_N) and then assign weight λ^(N-i) to evaluation i. This scheme is referred to as forgetting, as it corresponds to gradually lowering the weight of evaluations as more recent evaluations arrive <cit.>. When λ=1, no forgetting is effectively employed. When λ=0, the most recent evaluation alone determines the result.
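The following toy example (our own construction) illustrates the forgetting scheme and reproduces the spirit of the scenario described above: with forgetting, a sudden switch to fraudulent behavior is reflected in the reputation almost immediately.

```python
def reputation(evaluations, lam=1.0):
    """Aggregate time-ordered evaluations W_1, ..., W_N (each in [-1, 1]),
    giving evaluation i the weight lam**(N - i); lam = 1 is the plain mean,
    while small lam lets the most recent evaluations dominate."""
    N = len(evaluations)
    weights = [lam ** (N - i) for i in range(1, N + 1)]
    return sum(w * e for w, e in zip(weights, evaluations)) / sum(weights)

history = [1.0] * 8 + [-1.0, -1.0]       # reliable behavior, then fraud
print(reputation(history, lam=1.0))      # 0.6: the plain mean reacts slowly
print(reputation(history, lam=0.5))      # about -0.5: recent fraud dominates
```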
Note that time information is also extensively used in otherwise time-unaware reputation systems to detect spammers and other anomalous and potentially malicious behavior <cit.>.
§.§ Model-based ranking of nodes
The ranking algorithms described up to now make assumptions that are regarded as plausible but are rarely tested on real systems. For instance, indegree assumes that a node is important, or central, if it is able to acquire many incoming links; PageRank assumes that a node is important if it is pointed to by other important nodes; rescaled PageRank assumes that a node is important if its PageRank score exceeds the PageRank scores of nodes of similar age. To evaluate the degree to which these node centrality definitions are reliable, one can a posteriori evaluate the performance of the ranking algorithms in identifying high-quality nodes in the network or in predicting the future evolution of the system (real-world examples will be discussed in section <ref>). On the other hand, if one has a mathematical model that describes well how the network evolves, then one can assess node importance by simply fitting the model to the given data. This section reviews this approach in detail. We focus on preferential-attachment models where each node is endowed with a fitness parameter <cit.>, which represents the perceived importance of the node by the other nodes in the network. Ranking based on the fitness model Bianconi and Barabasi <cit.> extended the original Barabasi-Albert model by assigning to each node an intrinsic quality parameter, called fitness. Node fitness represents the node's inherent ability to acquire new incoming links. In the model (hereafter the fitness model), the probability Π_i(t) that a new link will be attached to node i at time t is given by

Π_i(t)∼ k^in_i(t) η_i,

where η_i represents node i's fitness. This attachment rule leads to the following equation for the dependence of the expected node in-degree on time:

⟨ k^in_i(t)⟩ = m (t/t_i)^β(η_i),

where, if the support of the fitness distribution is bounded, β(η) is the solution of a self-consistent equation of the form β(η)=η/C[β(η)], and C[β] only depends on the chosen fitness distribution (see <cit.> for the details). Remarkably, the model is flexible and can reproduce several degree distributions by appropriately choosing the functional form of the fitness distribution <cit.>. Using Eq. (<ref>), we can estimate node fitness in real data. To do this, it is sufficient to follow the degree evolution k^in(t) of the nodes and properly fit k^in(t) to a power law to obtain the exponent β, which is proportional to node fitness. This procedure has been applied by Kong et al. <cit.> to data from the WWW, leading to two interesting conclusions: (1) websites' fitness distribution is narrow; (2) the fitness distribution does not depend on time and can be well approximated by an exponential distribution. Ranking based on the relevance model While the fitness model describes the growth of information networks better than the original Barabasi-Albert model <cit.>, it is still an incomplete description of the degree evolution of many evolving systems. The missing element is node aging <cit.>: the ideas contained in scientific papers are incorporated in subsequent papers, reviews and books, which results in a gradual decay of papers' attractiveness to new citations. The relevance model introduced by Medo et al.
<cit.> incorporates node aging by assuming

Π_i(t)∼ k^in_i(t) η_i f(t-t_i),

where f(t-t_i) is a function of node age which tends to zero <cit.> or to a small constant value <cit.> for large age. For an illustration of node relevance decay in real data from Movielens and Netflix, see Fig. <ref> and <cit.>. Assuming that the normalization factor of Π_i(t) converges to a finite value Ω^*, one can use Eq. (<ref>) to show that

⟨ k^in_i(t)⟩ ≃ exp[η_i ∫_0^t f(t') dt' / Ω^*].

The value of Ω^* depends on the distribution of node fitness and the aging function (see <cit.> for details). Eq. (<ref>) can be fitted to real citation data in different ways. Medo et al. <cit.> computed the relevance r_i(t) of node i at time t as the ratio between the fraction l_i(t) of links obtained by node i in a suitable time window and the fraction l_i^PA(t) expected from pure preferential attachment. That is, one writes r_i(t)=l_i(t)/l_i^PA(t), where l_i^PA(t)=k_i^(in)(t)/∑_j k^(in)_j(t). A fitness estimate for node i is then provided by the node's total relevance T_i=∑_t r_i(t); similarly to the fitness distribution found in the WWW <cit.>, the total relevance distribution has an exponential decay in the APS citation data <cit.>. Alternatively, one can estimate node fitness through standard maximum likelihood estimation (MLE) <cit.>. Wang et al. <cit.> used MLE to analyze the citation data from the APS and Web of Science, and found that the time-dependent factor f(t) follows a universal lognormal form

f(t)=1/(√(2π) σ_i t) exp[-(log(t)-μ_i)^2/(2 σ_i^2)],

where μ_i and σ_i are parameters specific to the temporal citation pattern of paper i. By fitting the relevance model with Eq. (<ref>) to the data, the authors estimate paper fitness and use this estimate to predict the future impact of papers (we refer to <cit.> and paragraph <ref> for more details). Medo <cit.> uses MLE to compare various network growth models with respect to their ability to fit data from the Econophysics forum run by Yi-Cheng Zhang [<www.unifr.ch/econophysics>]; the relevance model turns out to fit the data better than the other existing models. Burst behaviors based on the relevance model Motivated by the burst behavior reported <cit.> for Wikipedia-like systems, we now use the relevance model to illustrate the burst effect in bipartite networks, which is an important application for online e-commerce systems. Two representative video-rating datasets from MovieLens and Netflix, summarized in Table <ref>, are analyzed here. An online rating system can be represented by a bipartite network with temporal information, using essentially the same notation as in section 2.7: a user set U(t), a video set O(t), and a set L(t) of links between users and videos. The popularity k_α(t) of video α is defined as the number of ratings it has received by time t. A link l_iα (∈ L) in the bipartite network connecting user i and video α represents the rating of video α by user i. MovieLens is an online video recommendation site whose users rate videos; Netflix is a DVD rental service whose users can also rate videos. In both systems, ratings are integers between 1 and 5. Based on the rating histories, we can build user-video bipartite networks. The MovieLens dataset comprises 69,878 users, 10,681 videos, and 10,000,054 links collected from 1995 to 2009.
The Netflix dataset comprises 480,189 users, 17,770 videos, and 100,480,507 links collected from 1999 to 2005. We split each time-aggregated network into rating snapshots; the time unit is 102 days for MovieLens and one month for Netflix, which yields 50 and 74 snapshots, respectively. To quantitatively measure the difference between the actual and the expected popularity increase over time, <cit.> consider the ratio between the two, defined in analogy with the relevance <cit.>:

R_i(t+1)=Δ k_i(t+1)/⟨Δ k_i(t+1)⟩_PA=Δ k_i(t+1) L(t)/(k_i(t) Δ L(t+1)),

where ⟨Δ k_i(t+1)⟩_PA=Δ L(t+1) k_i(t)/L(t) is the popularity increase of video i at time t+1 expected under pure preferential attachment, Δ k_i(t+1) is the popularity actually gained by video i at time t+1, Δ L(t+1)=∑_j Δ k_j(t+1) is the total popularity increase of all videos at time t+1, and L(t)=∑_j k_j(t) is the total popularity of all videos up to time t. The interpretation of this expression is straightforward: if R_i(t)=0, video i gains no popularity at time t; if 0<R_i(t)<1, video i gains less popularity than expected; if R_i(t)≥1, video i gains at least as much popularity as expected; in particular, when R_i(t)≫1, video i gains far more popularity than expected at time t. Videos that appeared in MovieLens between 2000 and 2008 and in Netflix between 2000 and 2005 are selected. As Fig. <ref> shows, for almost all videos the relevance values burst shortly after the video appears on the website, which indicates an unexpected gain of popularity; the burst then decays very quickly, within a few time units. After the burst, the relevance decays very slowly or even settles at a constant value R=1, which by Eq. (<ref>) corresponds to the popularity increase expected from preferential attachment (⟨Δ k_i(t+1)⟩_PA=Δ L(t+1) k_i(t)/L(t)). These findings, based on the relevance model <cit.>, indicate that the popularity dynamics in online bipartite user-object systems are characterized by burst behaviors that far exceed the preferential popularity increase. Bursts typically emerge early in the life span of online videos, after which the dynamics revert to the classical preferential popularity increase mechanism. This interesting finding could inspire future research on ranking systems that takes this phenomenon into account. Ranking based on a model of PageRank score evolution In the previous paragraph, we discussed various procedures to estimate node fitness based on the same general idea: if we know the relation between node degree and node fitness, we can properly fit the relation to the data to infer the fitness values. In principle, the same idea can find application for network centrality metrics other than degree. For the PageRank score s, for example, one can assume a growth model of the form <cit.>

s_i(t)=exp[∫_0^t α_i(t') dt'],

where the behavior of the function α_i(t) has to be inferred from the data. Berberich et al. <cit.> assume that α_i(t) is independent of time within the system time span [t_min,t_max] and show that this assumption is reasonable in the academic citation network they analyzed. With this assumption, Eq. (<ref>) becomes

s_i(t)=s_i(t_0) exp[α_i(t-t_min)],

where α_i(t)=α_i if t∈[t_min,t_max]. A fit of Eq.
(<ref>) to the data allows us to estimate α_i, which is referred to as the BuzzRank score of node i. Eq. (<ref>) is conceptually similar to Eq. (<ref>) for the RM: the growth of a node's centrality score depends exponentially on a quantity – node fitness η in the case of the RM, node growth rate α in the case of BuzzRank – that can be interpreted as a proxy for the node's success in the system.
§ RANKING NODES IN TEMPORAL NETWORKS
In the previous sections, we have illustrated several real-world and model-based examples showing that the temporal ordering of interactions is a key element that cannot be neglected when analyzing an evolving network. For instance, we have seen that some networks exhibit a first-mover advantage (paragraph <ref>), that a diffusion process on a network can be strongly biased by the network's temporal linking patterns (paragraph <ref>), and that neglecting the ordering of interactions may lead to wrong conclusions when evaluating the social influence between two individuals (paragraph <ref>). Various approaches to solve the consequent drawbacks of static centrality metrics (such as indegree and PageRank) have been presented in section <ref>. In particular, we have introduced metrics (such as the rescaled metrics and CiteRank) that take as input not only the network's adjacency matrix A, but also the time stamps of the edges and/or the age of the nodes. These approaches take into account the temporal dimension of the data. They are useful for their ability to suppress the bias of static metrics (see section <ref>) and to highlight recent content in growing networks (see section <ref>). However, in the previous sections, we have focused on unweighted networks, where at most one interaction occurs between two nodes i and j. This assumption is limiting in many real scenarios: for example, individuals in social networks typically interact repeatedly (e.g. through emails, mobile phone calls, or text messages), and the number and order of interactions can bring important information on the strength of a social tie. In this section, we focus on systems where repeated node-node interactions can occur and the timing of interactions plays a critical role. To study this kind of system, we now adopt the temporal networks framework <cit.>. A temporal network 𝒢^(temp)=(N,E^(temp)) is completely specified by the number N of nodes in the system and the list E^(temp) of time-stamped edges (or contacts) (i,j,t) between the nodes[For some systems, each edge is also endowed with a temporal duration. For our purposes, assuming that there exists a basic time unit u in the system, edges endowed with a temporal duration can still be represented through time-stamped edges: we simply represent an edge that spans the temporal interval [t_1,t_2] as a sequence of (t_2-t_1)/u subsequent time-stamped edges.]. In the literature, temporal networks are also referred to as temporal graphs <cit.>, time-varying networks/graphs <cit.>, dynamic networks/graphs <cit.>, scheduled networks <cit.>, among others.
§.§ Getting started: how to represent a temporal network?
Recent studies <cit.> have pointed out that projecting a temporal network 𝒢^(temp) onto a static weighted (time-aggregated) network discards essential information and, as a result, often yields misleading results. To properly deal with the temporal nature of temporal networks, there are (at least) two possible strategies. The first is to revisit the way the network representation of the data is constructed.
While in principle several representations of temporal networks can be adopted (see <cit.> for an overview), a simple and useful representation is the higher-order network representation of the temporal network (also referred to as the memory network representation in the literature). Differently from the time-aggregated representation, which only preserves the number of contacts between pairs of nodes of a given temporal network, a higher-order network representation <cit.> transforms the edge list into a set of time-respecting paths (paragraph <ref>) on the network's nodes, and it preserves the statistics of time-respecting paths up to a certain topological length (details are provided below). Standard ranking algorithms (such as PageRank or closeness centrality) can thus be run on higher-order networks (paragraph <ref>). Higher-order networks are particularly powerful in highlighting the effects of memory on the speed of network diffusion processes (paragraph <ref>) and on the ranking of the nodes (paragraphs <ref> and <ref>). The second possible strategy is to divide the system's time span into temporal slices (or layers) of equal duration, and to consider a set {w(t)} of adjacency matrices, one for each layer – we refer to this representation as the multilayer representation of the original temporal network <cit.>. The original temporal network is then represented by a number of static networks where the usual network analysis methods can be employed. Note that the information encoded in the multilayer representation is exactly the same as that present in the original temporal network only if the temporal span of each layer is equal to the basic time unit of the original dataset. Choosing a suitable time span for the temporal layers is a non-trivial and dataset-dependent problem. In this review, we do not address this issue; the reader can find interesting insights into this problem in <cit.>, among others. In the following, when presenting methods based on the multilayer representation of temporal networks, we simply assume that each layer's duration has been chosen appropriately and, as a result, that we have a meaningful partition of the original data into temporal layers. Diffusion-based ranking methods based on this construction are described in paragraph <ref>. We also present generalizations to temporal networks of centrality metrics based on shortest paths (paragraph <ref>).
§.§ Memory effects and time-preserving paths in temporal networks
Considering the order of events in a temporal network has been shown to be fundamental to properly describe how different kinds of flow processes (e.g., the flow of information or the flow of traveling passengers) unfold on a temporal network. In this section, we present a paradigmatic example provided by Rosvall et al. <cit.> that highlights the importance of memory effects in a temporal network. The example is followed by a general definition of time-preserving paths for temporal networks, which constitute a building block for the causality-preserving diffusive centrality metrics that are discussed in section <ref>. An example The example shown in Fig. <ref> is taken from <cit.> and well exemplifies the role of memory in temporal networks. The figure considers two datasets: (1) the network of cities connected by airplane passengers' itineraries and (2) the JSTOR [<www.jstor.org>] network of scientific journals connected by citations between the journals' articles. We consider first the network of cities.
The Markovian assumption <cit.> is that a flow's destination depends only on the current location, and not on its more distant history. According to this assumption, if we consider the passengers who started from Seattle and passed through Chicago, for example, the fraction of them coming back is only 14% (Fig. <ref>a). One might conjecture that this is a poor description of real passenger traffic, as travelers are usually based in a city and are likely to come back to the same city at the end of their itineraries. To capture this effect, one needs to consider a “second-order” description of the flow. This simply means that instead of counting how many passengers move from Chicago to Seattle, we count how many move to Seattle, given that they started in a certain city. We observe that the fraction of passengers who travel from Chicago to Seattle, given that they reached Chicago from Seattle, is 83%, which is much larger than what was expected under the Markovian assumption (14%). Similar considerations can be made for the other cities (Figs. <ref>a-b). To capture memory effects in the citation network of scientific journals, Rosvall et al. <cit.> analyze the statistics of triples of journals j_1→ j_2→ j_3 resulting from triples of articles a_1→ a_2→ a_3 such that a_1 cites a_2, a_2 cites a_3, and paper a_i is published in journal j_i. Based on this construction, Rosvall et al. <cit.> show that memory effects heavily impact the flows of citations between scientific journals (Figs. <ref>c-d). We can conclude that including memory is critical for the prediction of flows in the real world (see <cit.> and paragraph <ref>). Time-respecting paths The example above shows that when building a network representation from a certain dataset, memory effects need to be taken into account to properly describe the system. To account for memory, in the example of traveling passengers, we are no longer interested in how many passengers travel between two locations, but in how many passengers follow a certain (time-ordered) itinerary across different locations. To formalize this paradigm shift, let us introduce a general formalism to describe time-preserving (or time-respecting, or time-ordered) paths <cit.>. Given a static network and three nodes a, b, c, the existence of the links (a,b) and (b,c) directly implies that a is connected to c through at least one path a→ b→ c (transitivity property). The same does not hold in a temporal network: for a node a to be connected with node c through a path a→ b→ c, the link (a,b,t_1) must happen before the link (b,c,t_2), i.e., t_1≤ t_2. Time-aggregated networks thus overestimate the number of paths between pairs of nodes – Lentz et al. <cit.> present an elegant framework to quantify this property. The first general definition of a time-respecting path of length n is the following: (i_1,i_2,t_1), (i_2,i_3,t_2), …, (i_n, i_n+1,t_n) is a time-respecting path if and only if t_1< t_2 < …<t_n <cit.>. In many practical applications, we are only interested in dynamical processes that occur at timescales much smaller than the total time over which we observe the network. For this reason, the existence of a time-respecting path {(i_1,i_2,t_1), (i_2,i_3,t_2)} with a long waiting time between t_1 and t_2 may not be informative at all about the influence of node i_1 on i_3. A practical solution is to set a threshold δ for the maximum acceptable inter-event time <cit.>.
Accordingly, one says that {(i_1,i_2,t_1), (i_2,i_3,t_2), …, (i_n, i_n+1,t_n)} is a time-respecting path (with limited waiting time δ) if and only if 0<t_k+1- t_k≤δ for all k=1,…,n-1. The suitability of a particular choice of δ depends on the system being examined. While too small δ values can result in a too sparse network from which it is hard to extract useful information, too large δ values can include in the network paths that are not consistent with the real flows of information. In practice, values as different as six seconds (used for pairwise interactions in an ant colony <cit.>) and one hour (used for e-mail exchanges between employees in a company <cit.>) can be used for δ, depending on the context.
§.§ Ranking based on higher-order network representations
In this paragraph, we use the notion of a time-preserving path presented in the previous paragraph to introduce centrality metrics built on higher-order representations of temporal networks. Before introducing the higher-order representations (paragraph <ref>), let us briefly revisit the Markovian assumption. Most of the methods presented in this paragraph are implemented in the Python library <pyTempNets> [available at <https://github.com/IngoScholtes/pyTempNets>].
§.§.§ Revisiting the Markovian assumption
When studying a diffusion process on a (weighted) time-aggregated network, standard network analysis implicitly assumes the Markovian property: the next move of a random walker does not depend on the walker's previous steps, but only on the node i where it is currently located and on the weights of the links attached to node i. Despite being widely used in network centrality metrics, community detection algorithms and spreading models, this assumption can be unrealistic for a wide range of real systems (see the two examples provided in Fig. <ref>). Let us revisit the Markovian assumption in general terms. In the most general formulation of a walk on a time-aggregated network, the probability P^(n+1)_i that a random walker is located at node i at (diffusion-)time n+1 depends on the whole past. Using the same notation as <cit.>, if we denote by X_n the random variable that marks the position of the walker at time n, we write

P_i^(n+1)=P( X_n+1=i | X_n=i_n, X_n-1=i_n-1, …, X_1=i_1, X_0=i_0),

where X_0=i_0 is the initial position of the walker and i_1,…,i_n are all its subsequent positions. The Markovian assumption reads

P_i^(n+1)=P(X_n+1=i|X_n=i_n)=P(X_1=i|X_0=i_n),

which implies that the probability of finding a random walker at node i at time n+1 is given by

P^(n+1)_i=∑_j P(j→ i) P^(n)_j=∑_j P_ij P^(n)_j,

where P_ij:=P(j→ i)= P(X_n+1=i|X_n=j)=W_ij/∑_k W_kj are the elements of the (column-stochastic) transition matrix P of the network; here W_ij is the observed number of times a passenger traveled from node j to node i[Note that we are also assuming that the random walk process is stationary, i.e., the transition probabilities do not depend on the diffusion time n. This assumption still holds for the random walk process based on higher-order network representations presented below (paragraphs <ref>-<ref>), whereas it does not hold for random walks based on multilayer representations of the underlying time-stamped data (paragraph <ref>).]. As we have seen in section <ref>, PageRank and its variants build on the random walk defined by Eq. (<ref>) and add an additional element, the teleportation term.
While the time-dependent PageRank variants defined in section <ref> are based on a different transition matrix than the original PageRank algorithm, they also implicitly assume the Markovian property. We shall now discuss how to relax the Markovian assumption and construct higher-order Markov models in which the previous trajectory of the walker also influences where the walker will jump next.

§.§.§ A second-order Markov model

Following <cit.>, we explain in this section how to build a second-order Markov process on a given network. Consider a random walker located at a certain node j. According to the second-order Markov model, the probability that the walker will jump from node j to node i at time n+1 depends also on the node k where the walker was located at the previous step of the dynamics. In other words, the move from node j to node i actually depends on the edge (k→ j) through which the walker arrived. It follows that the second-order Markov model can be represented as a first-order Markov model on the edges of the network: the random walker hops not between nodes, but between edges. The probability P((j→ i), n+1 | (k→ j), n) that the walker performs a jump from node j to node i at time n+1 after having performed a jump from node k to node j at time n can be written as

P((j→ i), n+1 | (k→ j), n)=P((k→ j)→(j→ i))= W((k→ j)→(j→ i))/∑_l W((k→ j)→(j→ l)),

where W((k→ j)→(j→ i)) denotes the observed number of occurrences of the directed time-respecting path (k→ j)→(j→ i) in the data – with the notation introduced in paragraph <ref>, (k→ j)→(j→ i) denotes here any length-two path (k,j,t_1)→ (j,i,t_2) such that 0<t_2-t_1≤δ.

Eq. (<ref>) allows us to study a non-Markovian diffusion process that preserves the observed frequencies of time-respecting paths of length two. This non-Markovian diffusion process can be interpreted as a Markovian diffusion process on the second-order representation of the temporal network (also referred to as the second-order network or memory network), whose nodes (also referred to as memory nodes or second-order nodes) are the edges of the original network. For this reason, one can use standard static network tools to simulate diffusion and compute the corresponding node centrality (see below) on second- (and higher-) order networks. Importantly, in agreement with Eq. (<ref>), the edge weights in the second-order network are the relative frequencies of the time-respecting paths of length two <cit.>. The probability of finding the random walker at memory node (j→ i) at time n+1 is thus given by

P^(n+1)(j→ i)=∑_k P((k→ j)→(j→ i)) P^(n)(k→ j).

The probability P_i^(n+1) of finding the random walker at node i at time n+1 is thus

P_i^(n+1)=∑_j P^(n+1)(j→ i)=∑_jk P((k→ j)→(j→ i)) P^(n)(k→ j).

The random walk based on the second-order model presented in this paragraph is much more predictable than a random walk based on a network without memory, in the sense that the next step of the walker, given its current position, has much smaller uncertainty as measured by conditional entropy (this holds for all the temporal-network datasets studied by <cit.>). An attentive reader may have noticed that while constructing the higher-order temporal network has the advantage of allowing us to use standard network techniques, it comes at the price of increased computational complexity. Indeed, in order to describe a non-Markovian process with a Markovian process, we had to increase the dimension of the system: the second-order network comprises E>N memory nodes, where E is the number of edges that have been active at least once during the whole observation period.
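The construction of the second-order model can be sketched in a few lines: we count the time-respecting paths of length two (with maximum waiting time δ, as defined in the previous paragraph) and normalize the counts per memory node, as prescribed by Eq. (<ref>). The function name and the toy itinerary dataset below are our own illustrative choices.

from collections import defaultdict

def second_order_model(events, delta):
    """Second-order transition probabilities P((k->j) -> (j->i)) of Eq. (ref),
    estimated from time-stamped edges (source, target, time): every pair of
    events (k, j, t1), (j, i, t2) with 0 < t2 - t1 <= delta is a time-respecting
    path of length two, and counts are normalized per memory node (k -> j)."""
    continuations = defaultdict(list)      # edges grouped by their source node
    for u, v, t in events:
        continuations[u].append((v, t))

    counts = defaultdict(float)
    for k, j, t1 in events:
        for i, t2 in continuations[j]:
            if 0 < t2 - t1 <= delta:
                counts[((k, j), (j, i))] += 1

    outflow = defaultdict(float)
    for (kj, _), w in counts.items():
        outflow[kj] += w
    return {(kj, ji): w / outflow[kj] for (kj, ji), w in counts.items()}

# Toy itineraries: after Seattle -> Chicago, passengers either return or go on.
events = [("SEA", "CHI", 1), ("CHI", "SEA", 2),
          ("SEA", "CHI", 3), ("CHI", "NYC", 4)]
print(second_order_model(events, delta=1))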
The problem of increased dimensionality thus requires more attention for dense networks than for sparse ones.

§.§.§ Second-order PageRank

In this paragraph, we briefly discuss how the PageRank algorithm can be generalized to higher-order networks. The PageRank score s(j→ i) of a memory node (j→ i) can be defined as the corresponding component of the stationary state of the following process <cit.>

s((j→ i), n+1)=c ∑_k s((k→ j), n) P((k→ j)→(j→ i))+ (1-c) ∑_l W_il/∑_lm W_lm,

where the term multiplied by 1-c is the component i of a “smart” teleportation vector (see paragraph <ref>). The second-order PageRank score of node i, s_i, is defined as

s_i=∑_j s(j→ i).

The effects of second-order flow constraints on the ranking of scientific journals by PageRank will be presented in paragraph <ref>.

§.§.§ The effects of memory on diffusion speed

From a physicist's point of view, it is interesting to study how the temporal order of interactions affects the speed of diffusion on the network. Recent literature has used standard models of spreading processes <cit.>, simulated random walks <cit.>, and the spectral properties of the graph Laplacian <cit.> to show that taking the timing of interactions into account can slow down spreading. However, diffusion can also speed up when memory effects are included. A general analytic framework introduced by Scholtes et al. <cit.> predicts whether including second-order memory effects will slow down or speed up diffusion. The main idea is to exploit the property that the convergence time of a random walk process is related to the second largest eigenvalue λ_2 of its transition matrix. First, one constructs the transition matrix T^(2) associated with the second-order Markov model of the network. It can be proven <cit.> that for any stochastic matrix with eigenvalues 1=λ_1, λ_2, …, λ_m, and for any ϵ>0, the number of diffusion steps after which the distance between the vector p^(n) of visitation frequencies and the random-walk stationary vector p becomes smaller than ϵ is proportional to 1/log(λ_2). To establish whether second-order memory effects will slow down or speed up diffusion, one then constructs a randomized second-order transition matrix T̃^(2) in which the expected frequency of physical paths of length two is consistent with the time-aggregated network. The second largest eigenvalue of the resulting matrix is denoted as λ̃_2. The analytic prediction for the change of diffusion speed is then given by

S(T̃^(2)):=logλ̃_2/logλ_2.

Values of S(T̃^(2)) larger (smaller) than one predict a slow-down (speed-up) of diffusion. The resulting predictions accurately match the results of diffusion simulations on real data (see Fig. 2 in <cit.>); a numerical sketch of this estimator is given below.

§.§.§ Generalization to higher-order Markov models and prediction of real-world flows

In paragraph <ref>, we have considered random walks based on a second-order Markov model. It is natural to generalize this model to take into account higher-order dependencies. If we are interested, for example, in a third-order Markov model, we can simply build a higher-order network whose memory nodes are triples of physical nodes <cit.>. The memory node ijk would then represent a time-respecting path i→ j→ k. Xu et al. <cit.> consider a more general higher-order model where, after a maximum order of dependency is fixed, all orders of dependency can coexist (see Fig. <ref>C).
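Returning for a moment to the diffusion-speed estimator of the previous subsection, the following sketch computes S(T̃^(2)) from two stochastic matrices; it assumes that the empirical and randomized second-order transition matrices have already been built (for instance, by arranging the output of the sketch above into matrix form).

import numpy as np

def second_eigenvalue_magnitude(T):
    """Magnitude of the second largest eigenvalue of a stochastic matrix T."""
    return np.sort(np.abs(np.linalg.eigvals(T)))[-2]

def diffusion_speed_change(T2, T2_null):
    """S of Eq. (ref): log(lambda2_null) / log(lambda2); values above one
    predict a slow-down of diffusion, values below one a speed-up."""
    return (np.log(second_eigenvalue_magnitude(T2_null))
            / np.log(second_eigenvalue_magnitude(T2)))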
Xu et al. <cit.> use global shipping transportation data to show that by considering more than two steps of memory, the random walk becomes even more predictable, which results in an enhanced accuracy in the out-of-sample prediction of real-world flows (Fig. <ref>D). Interestingly, Xu et al. <cit.> also discuss the effects of including memory on the ranking of websites in the World Wide Web. Similarly to Eq. (<ref>), the authors define the higher-order PageRank score of a given node i as the sum of the PageRank scores of the higher-order nodes that represent node i. Fig. <ref> shows that despite being positively correlated, standard and higher-order PageRank bring substantially different information on node importance. Since higher-order PageRank describes user traffic better than standard PageRank, one could conjecture that it provides a better estimate of website relevance; however, further research is needed to test this hypothesis and to investigate possible limitations of the higher-order network approach.

The possibility of including all orders of dependency up to a certain order leads to a natural question: is there an optimal order to reliably describe a temporal-network dataset? Ideally, we would like our model to describe the system accurately enough with the smallest possible memory order; in other words, an ideal model would have good predictive power and, at the same time, low complexity. The problem of determining the optimal order is non-trivial and is addressed by Scholtes <cit.> with maximum-likelihood estimation techniques. The statistical framework introduced by Scholtes <cit.> allows us to determine the optimal order k_opt to describe a temporal network, thereby distinguishing between systems for which the traditional time-aggregated network representation is satisfactory (k_opt=1) and systems for which temporal effects play an essential role (k_opt>1).

§.§ Ranking based on a multilayer representation of temporal networks

The higher-order Markov models presented above are particularly convenient because: (1) they preserve the temporal order of interactions up to a certain order; (2) centrality scores on the respective networks can be computed by means of standard network techniques. However, one can instead consider the temporal network as a sequence of temporal snapshots, each of which is composed of the set of links that occurred within a temporal window of given length <cit.>. Within this framework, a temporal network composed of N nodes and of total temporal duration T is represented by L adjacency matrices {w(t)} (t=1,…, L) of dimension N× N, where L is the number of temporal snapshots the total observation time is divided into. The element w_ij(t) of the adjacency matrix for snapshot t represents the number of interactions between i and j within the time period [(t-1) T/L, t T/L). The set of adjacency matrices is typically referred to as an adjacency tensor <cit.>.

§.§.§ TempoRank

Building on the random walk process proposed by Starnini et al. <cit.>, Rocha and Masuda <cit.> design a random-walk-based ranking method for undirected networks, called TempoRank. The method assumes that at each diffusion time t, the random walker only “sees” the network edges active within the temporal layer t (w_ij(t)=1 for an edge (i,j) active at layer t), which means that there is an exact correspondence between the diffusion time and the real time – diffusion and network evolution unfold on the same timescale.
This is very different from diffusion-based static ranking algorithms, which implicitly assume that diffusion takes place on a much faster timescale than that on which the network changes. With respect to the random walk processes studied in <cit.>, TempoRank features a “sojourn” (temporary stay) probability according to which a random walker located at node i has the probability q^k_i(t) of remaining at node i, where k_i(t)=∑_j w_ij(t) is the number of edges of node i in layer t. The (row-stochastic) transition matrix P_ij(t) of the random walk at time t is given by

P_ij(t) = δ_ij, if k_i(t)=0;
P_ij(t) = q^k_i(t), if k_i(t)>0 and i=j;
P_ij(t) = (1-q^k_i(t)) w_ij(t)/k_i(t), if k_i(t)>0 and i≠ j.

It can be shown that a non-zero sojourn probability (q>0) guarantees the convergence of the random walk process provided that the time-aggregated network is connected; by contrast, if q=0, convergence to a stationary vector is not guaranteed even when the time-aggregated network is connected. The need for a non-zero sojourn probability is motivated in paragraph 2.3 of <cit.> and well illustrated by the following example. Consider a network of three nodes 1,2,3 whose time-aggregated representation is triangular (w_12=w_23=w_13=1). Say that only one edge is active at each temporal layer: w_12(1)=w_21(1)=w_23(2)=w_32(2)=w_13(3)=w_31(3)=1, while all the other w_ij(t) are equal to zero. For this simple temporal network, if q=0, a random walker starting at node 1 will always end at node 1 after three steps, which implies that the random walk is not mixing. This issue is especially troublesome for temporal networks with fine temporal resolution, for which there exist only a limited number of possible pathways for the random walker, and it is solved by imposing q>0. We remark that by defining the sojourn probability as q^k_i(t), we assume that the random decision whether the walker located at node i refrains from moving along a given edge (probability q per edge) is made independently for each edge.

It is important to study the asymptotic properties of the walk defined by Eq. (<ref>). However, since TempoRank's diffusion time is equal to the real system time t, it is not clear a priori how the random walk should behave after L diffusion steps, i.e., after the temporal end of the dataset is reached. To deal with this ambiguity, Rocha and Masuda <cit.> impose periodic boundary conditions on the transition matrix: P(t+L)=P(t). With this choice, when the last temporal layer of the dataset t=L is reached, the random walker continues with the oldest temporal layer t=1. The random walk process that defines the TempoRank score thus runs cyclically multiple times over the dataset. Below, we denote by m the number of cycles performed by the random walk on the dataset; we are interested in the properties of the random walk in the limit of large m. The one-cycle transition matrix P^(temp, 1c) is defined as

P^(temp, 1c) = ∏_t=1^L P(t)=P(1)…P(L).

The TempoRank vector after n time steps is thus given by the leading left eigenvector[We choose the left eigenvector here because the transition matrix defined by Eq. (<ref>) is row-stochastic.] s(n) of the matrix P^(temp)(n), where P^(temp)(n) is given by the (periodic) time-ordered product of n=m L+t transition matrices P(t). This eigenvector corresponds to the stationary density of the random walk after m cycles over the dataset and t additional temporal layers.
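A compact sketch of the whole construction follows (our own illustration; instead of an explicit eigenvector computation, the stationary density is approximated by repeatedly cycling the walker over the dataset, and the within-cycle average used at the end is formalized in the next paragraph). The triangle example is the one discussed above.

import numpy as np

def temporank(snapshots, q=0.5, n_cycles=200):
    """TempoRank sketch for an undirected temporal network given as a list of
    L symmetric 0/1 adjacency matrices w(t). Following Eq. (ref), a walker at
    node i stays put with probability q**k_i(t) and otherwise moves to one of
    its currently active neighbors chosen uniformly at random."""
    N = snapshots[0].shape[0]
    layers = []
    for w in snapshots:
        k = w.sum(axis=1)
        P = np.zeros((N, N))
        for i in range(N):
            if k[i] == 0:
                P[i, i] = 1.0                    # isolated node: walker stays
            else:
                P[i, i] = q ** k[i]              # sojourn probability
                P[i] += (1 - q ** k[i]) * w[i] / k[i]
        layers.append(P)

    p = np.full(N, 1.0 / N)
    for _ in range(n_cycles):                    # approximate the stationary density
        for P in layers:
            p = p @ P                            # left action: P is row-stochastic
    cycle = []
    for P in layers:                             # one more cycle, stored layer by layer
        p = p @ P
        cycle.append(p.copy())
    return np.mean(cycle, axis=0)                # within-cycle average

# Triangle example from the text: one edge per layer; q > 0 makes the walk mix.
w1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
w2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
w3 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
print(temporank([w1, w2, w3], q=0.5))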
Even when m is large, the vector s(n) fluctuates within a cycle due to the displacements of the walker along the temporal links of the network. The TempoRank vector of scores s is thus defined as the average, within one cycle, of the leading eigenvector of the matrix P^(temp)(n) for large m:

s=1/L ∑_t=1^L s(m L+t).

As shown in <cit.>, the TempoRank score is well approximated by a local time-aware variable, referred to as the in-strength approximator, yet how this result depends on the network's structural and temporal patterns remains unexplored. Importantly, TempoRank scores are only weakly correlated with the scores obtained with the time-aggregated network representation. This finding constitutes yet more evidence that node centrality computed on a temporal network representation can substantially differ from node centrality computed on the time-aggregated representation, in qualitative agreement with the findings obtained with higher-order Markov models <cit.>.

§.§.§ Coupling between temporal layers

Taylor et al. <cit.> generalize the multilayer approach to node centrality by including the coupling between consecutive temporal network layers. The aim of this coupling is to connect each node with itself between two consecutive layers and to control the magnitude of the variations of node centrality over time. Each node's centrality for a certain temporal layer is thus dependent not only on the centrality of its neighbors in that layer, but also on its centrality in the preceding and subsequent temporal layers. The method can in principle be applied to any structural centrality metric. A full description of the mathematics of the method involves tensorial calculations and can be found in <cit.>. The method has been applied to rank mathematics departments and Supreme Court decisions. In the case of mathematics departments, the regime of strong coupling between layers encodes the basic idea that the prestige of a mathematics department should not fluctuate widely over time. This method provides us with an analytic framework to identify not only the most central nodes in the network, but also the “top-movers”, i.e., the nodes whose centrality is rising most quickly.

§.§.§ Dynamic network-based scoring systems for ranking in sport

As shown by Motegi and Masuda <cit.>, both Park and Newman's win-lose score <cit.> and Radicchi's prestige score <cit.> can be generalized to include the temporal dimension. There are at least two main reasons to include time in sport ranking: (1) old performances are less indicative of a player's current strength <cit.>; (2) it is more rewarding to beat a certain player when he/she is at the peak of his/her performance than when he/she is a novice or shortly before retirement <cit.>. To account for the former effect, some time-aware variants of structural centrality metrics simply introduce a time-dependent penalization of older interactions <cit.>, in the same spirit as the time-dependent algorithms presented in paragraph <ref>. In the dynamic win-lose score introduced by Motegi and Masuda <cit.>, the latter effect is taken into account by assuming that the contribution to player i's score coming from a win against player j at time t is proportional to the score of player j at that time t.
The equations that define the vector w of win scores and the vector l of lose scores are provided in <cit.>. Assuming that we have a reasonable partition of time into temporal layers 1,…, t – Motegi and Masuda <cit.> set the temporal duration of each layer to one day – the vector of dynamic win scores w(t) at time t is defined as

w(t)=W(t) e,

where

W(t)=A(t) + e^-β ∑_m_t∈{0,1} α^m_t A(t-1) A(t)^m_t + e^-2β ∑_m_t, m_t-1∈{0,1} α^m_t+m_t-1 A(t-2) A(t-1)^m_t-1 A(t)^m_t + … + e^-β(t-1) ∑_m_2,…, m_t∈{0,1} α^∑_j=2^t m_j A(1) A(2)^m_2 … A(t)^m_t.

The vector of lose scores l(t) is defined analogously by replacing each adjacency matrix A(t) with its transpose A(t)^⊤. As in paragraph <ref>, the dynamic win-lose score s(t) is defined as the difference s(t):=w(t)-l(t).

We can understand the rationale behind this definition by considering the first terms of Eq. (<ref>). The contributions to i's score that come from wins at a past time t', with t'≤ t, are modulated by a factor exp[-β(t-t')], where β is a parameter of the method. Similarly to Park and Newman's static win-lose score, indirect wins also contribute to player i's score; differently from the static win-lose score, only indirect wins that happened after player i's respective direct win are included. With these assumptions, the contributions to player i's score at time t that come from the two past time layers t-1 and t-2 are

w_i(t)= ∑_j A_ij(t) + e^-β ∑_j A_ij(t-1) + α e^-β ∑_j,k A_ij(t-1) A_jk(t) + e^-2β ∑_j A_ij(t-2) + α e^-2β ∑_j,k A_ij(t-2) A_jk(t) + α e^-2β ∑_j,k A_ij(t-2) A_jk(t-1) + α^2 e^-2β ∑_j,k,l A_ij(t-2) A_jk(t-1) A_kl(t) + 𝒪(e^-3β),

where the 𝒪(e^-3β) terms correspond to wins that happened before time t-2. Motegi and Masuda <cit.> applied different player-level metrics to the network of tennis players, and found that the dynamic win-lose score has better predictive power than its static counterpart by Park and Newman.

§.§ Extending shortest-path-based metrics to temporal networks

Up to now, we have only discussed diffusion-based metrics for temporal networks. It is also possible to generalize shortest-path-based centrality metrics to temporal networks, as will be outlined in this paragraph.

§.§.§ Shortest and fastest time-respecting paths

It is natural to define the length of a static path as the number of edges traversed along the path. As we have seen in paragraph <ref>, the natural way to enforce the causality of interactions is to consider time-respecting paths instead of static paths. There are two possible ways to extend the notion of shortest paths to time-respecting paths. In analogy with the definition of length for static paths, one can simply define the length of a time-respecting path as the number of edges traversed along the path. This is called the topological length of the path {(i_1,i_2,t_1), (i_2,i_3,t_2), …, (i_n, i_n+1,t_n)}, which is simply equal to n <cit.>. Alternatively, one can define the temporal length of the path {(i_1,i_2,t_1), (i_2,i_3,t_2), …, (i_n, i_n+1,t_n)} as the time distance between the first and the last edge of the path, i.e., t_n-t_1+1 <cit.>. Hence, the two alternative generalizations of the notion of shortest path to temporal networks correspond to choosing the time-respecting paths with the smallest topological length <cit.> or those with the smallest temporal duration <cit.>; the former and the latter choice lead to the definition of shortest and fastest time-respecting paths, respectively.
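The following sketch (ours) computes single-source topological distances along time-respecting paths with strictly increasing time stamps; it is a dynamic program over the number of hops, exploiting the fact that an earlier arrival at a node can only help in extending a path. The fastest-path variant would additionally scan over the possible starting events; the toy event list is purely illustrative.

import math

def shortest_temporal_distances(events, source):
    """Hop-count (topological) distances from `source` along time-respecting
    paths with strictly increasing time stamps. earliest[v] holds the earliest
    arrival time at v achievable with at most h hops; an event (u, v, t) can
    extend a path arriving at u only if it departs strictly later."""
    nodes = {x for u, v, t in events for x in (u, v)}
    earliest = {v: math.inf for v in nodes}
    earliest[source] = -math.inf       # a path may start at any time
    distance = {source: 0}
    for h in range(1, len(nodes)):
        previous = dict(earliest)
        for u, v, t in events:
            if previous[u] < t and t < earliest[v]:
                earliest[v] = t
                distance.setdefault(v, h)
        if earliest == previous:       # no improvement: all distances final
            break
    return distance                    # missing nodes are temporally unreachable

# (a,b,1) followed by (b,c,2) connects a to c; (b,c,2) cannot follow (a,b,4).
print(shortest_temporal_distances([("a", "b", 1), ("b", "c", 2), ("a", "b", 4)], "a"))
# -> {'a': 0, 'b': 1, 'c': 2}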
Further details and subtleties associated with the definition of a distance in temporal networks can be found in paragraph 4.3 of <cit.>.

§.§.§ Temporal betweenness centrality

The static betweenness score of a given node is defined as the fraction of geodesic paths that pass through the node (see section <ref>). It is natural to extend this definition to temporal networks by considering the shortest <cit.> or fastest <cit.> time-respecting paths that pass through the node. Denoting by σ^(i)(j,k) the number of shortest (or fastest) time-respecting paths between j and k that pass through node i, we can define the temporal betweenness centrality B^(temp)_i of node i as <cit.>

B^(temp)_i=∑_j≠ i ≠ k σ^(i)(j,k).

Temporal betweenness centrality can also be normalized by the total number of shortest paths that do not have i as one of their two endpoint nodes <cit.>. A different definition of temporal betweenness centrality, based on fastest time-respecting paths, has been introduced by Tang et al. <cit.>.

§.§.§ Temporal closeness centrality

In section <ref>, we defined the static closeness score of a given node as the inverse of the mean geodesic distance of the node from the other nodes in the network. In analogy with static closeness, one can define the temporal closeness score of a given node as the inverse of the temporal distance (see section <ref>) of the node from the other nodes in the network. Let us denote the temporal distance between two nodes i and j as d^(temp)_ij – here the temporal distance d^(temp)_ij is a placeholder variable that represents either the topological length of the shortest paths[In this case, the use of the adjective "temporal" might feel inappropriate since we are using it to label a topology-related quantity. However, the adjective "temporal" reflects the fact that we are only considering time-respecting paths.] or the duration of the fastest paths between i and j. The temporal closeness score C^(temp)_i of node i then reads <cit.>

C^(temp)_i=(N-1)/∑_j≠ i d^(temp)_ij.

This definition suffers from the same problem encountered for Eq. (<ref>): if two nodes are not connected by any temporal path within the observation period, their temporal distance is infinite, which results in a zero closeness score. In analogy with Eq. (<ref>), to overcome this problem, the closeness score can be redefined in terms of the harmonic average of the temporal distances between the nodes <cit.>

C^(temp)_i=1/(N-1) ∑_j≠ i 1/d^(temp)_ij.

In general, the correlation between temporal and time-aggregated closeness centrality is larger than the correlation between temporal and time-aggregated betweenness centrality <cit.>. This is arguably due to the fact that node closeness depends only on path lengths, whereas betweenness centrality also takes into account the actual structure of the paths, which is deeply affected by the temporal ordering of interactions <cit.>. A different definition of temporal closeness, based on the average temporal duration of fastest time-respecting paths, is provided by Tang et al. <cit.>. Computing the centrality metrics defined by Eqs. (<ref>) and (<ref>) preserves the causal order of interactions, but requires a detailed analysis of all the time-respecting paths in the network. On the other hand, centrality metrics based on higher-order aggregated networks (section <ref>) can be computed through static methods, which are computationally faster. A natural question arises: can we use metrics computed on higher-order time-aggregated networks to approximate temporal centrality metrics?
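Before addressing this question, we note that the harmonic variant of temporal closeness above is straightforward to implement once temporal distances are available (for instance, from the sketch in the previous paragraph); unreachable pairs simply contribute zero.

import math

def temporal_closeness(dist, nodes):
    """Harmonic temporal closeness of Eq. (ref). `dist` maps ordered pairs
    (i, j) to the temporal distance from i to j; missing pairs are treated as
    unreachable and contribute zero to the sum."""
    closeness = {}
    for i in nodes:
        total = 0.0
        for j in nodes:
            if j == i:
                continue
            d = dist.get((i, j), math.inf)
            if 0 < d < math.inf:
                total += 1.0 / d
        closeness[i] = total / (len(nodes) - 1)
    return closeness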
This approximation question was first investigated by Scholtes et al. <cit.>, who show that static betweenness and closeness centralities computed on second-order time-aggregated networks tend to approximate the corresponding temporal metrics better than the respective static metrics computed on the first-order time-aggregated network. These findings indicate that taking into account the first few Markovian orders of memory could be sufficient to approximate the actual temporal centrality metrics reasonably well.

§ TEMPORAL ASPECTS OF RECOMMENDER SYSTEMS

While most of this review is concerned with constructing a general ranking of the nodes in a network, there are situations where such a general ranking is of limited applicability. This is particularly relevant in bipartite user-item networks, where users' personal tastes and preferences play an important role. For example, a reader is more likely to be interested in a new book by their favorite author than in a book by a recently announced laureate of the Nobel Prize in literature. For this reason, one seeks not to establish a general ranking of all items in the network but rather multiple personalized item rankings – ideally as many of them as there are different users in the system. This is precisely the goal of recommender systems, which aim to use data on users' past preferences to identify the items that are likely to be appreciated by a given individual user in the future <cit.>.

The common approach to recommendation is static in nature: the input data are considered without taking time into account. While this typically produces satisfactory results, there is now increasing evidence that the best results are obtained by considering the time information <cit.>. Winners of several recent high-profile machine-learning competitions, such as the Netflix Prize <cit.>, have emphasized the importance of taking the temporal dynamics of the input data into account <cit.>. In this section, we present the role played by time in recommendation.

§.§ Temporal dynamics in matrix factorization

Recommendation methods have traditionally been classified as memory based and model based <cit.>. Memory-based methods rely on selecting a set of similar users (“neighbors”) for every user and recommend items on the basis of what the neighbors have favored in the past. Model-based methods rely on formulating a statistical model of user behavior and then use this model to choose which items to recommend. However, this classification now seems outdated, as modern matrix factorization methods combine the two approaches: they are model based (Equations (<ref>) and (<ref>) below are examples of how the ratings can be modeled), yet they also have memory-based features as they sum over all memory-stored ratings <cit.>.

Motivated by its good performance in many practical contexts <cit.>, we describe here a recommendation method based on matrix factorization. The input data of this method take the form of ratings given by users to items; there are U users and I items in total. The method assumes that the rating r_iα of user i to item α is an outcome of a matching between the user's preferences and the item's properties. In particular, both users and items are assumed to be representable in a latent (i.e., hidden) space of dimension D; the vectors corresponding to user i and item α are p_i and q_α, respectively.
Assuming that the vectors for all users and items are known, a yet unexpressed rating is estimated as

r̂_iα = p_i · q_α.

In other words, we factorize the sparse rating matrix R into a product of two matrices – the U× D matrix P with user taste vectors and the I× D matrix Q with item property vectors – as R = PQ^⊤. This simple recipe can be further extended to include other factors such as, for example, which items have actually been rated by the users <cit.>. The increased model complexity is then justified by an improved prediction accuracy.

Of course, the vectors are initially unknown and need to be learned (estimated) from the available data. The learning procedure is formally represented as an optimization problem where the task is to minimize the difference between the estimated and the actually expressed ratings,

∑_r_iα∈ℛ (r_iα - p_i·q_α)^2,

where the sum is over the ratings r_iα in the set of all ratings ℛ. Since this is a high-dimensional optimization problem, it is useful to employ some regularization technique to prevent over-fitting. This is usually done by adding regularization terms that penalize user and item vectors with large elements,

∑_r_iα∈ℛ [(r_iα - p_i·q_α)^2 + λ(|p_i|^2+|q_α|^2)],

where the parameter λ is determined by cross-validation (i.e., by comparing the predictions with a hidden part of the data) <cit.>. While Eq. (<ref>) presents a seemingly difficult optimization problem with many parameters, simple optimization by gradient descent works well and quickly converges to a good solution. This optimization method can be implemented efficiently because the partial derivatives of Eq. (<ref>) with respect to the elements of the vectors p_i and q_α can be written down analytically (see <cit.> for a particular example).

When modeling ratings in a bipartite user-item system, it is useful to be more specific than the simple model provided by Eq. (<ref>). A common form <cit.> of the estimated rating is

r̂_iα = μ + d_i + d_α + p_i · q_α,

where μ is the overall average rating, and d_i and d_α are the average observed deviations of the ratings given by user i and given to item α, respectively. The interpretation of this “separation of effects” is straightforward: the rating of a strict user to a generally badly rated movie is likely to be low, unless the match between the user's taste and the movie's features turns out to be particularly favorable. A more detailed discussion of the baseline estimates can be found in <cit.>. It is useful to separate the baseline part of the prediction,

r̂_iα = μ + d_i + d_α,

for comparison in recommendation evaluation, which can tell us how much the present matrix factorization method improves over this elementary approach.

While generally well-performing, the above-described matrix factorization framework is static in nature: all ratings, regardless of their age, enter the estimation process with the same weight, and all parameters are assumed constant. In particular, even though the estimated parameter values can change when the input data change (because of, for example, accumulating more data), a given set of parameter values is assumed to model all available input data – the oldest and the most recent alike. An analysis of the data shows that such a stationarity assumption is generally not satisfied. In the classical Netflix Prize dataset, for example, two non-stationary features quickly become apparent. First, the average item rating increased suddenly by approximately 0.3 (on the 1–5 scale) in 2004. Second, item rating generally increases with item age.
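Before discussing these non-stationary effects, we note that the static model above can be trained in a few lines. The following stochastic-gradient-descent sketch (ours) uses analytic gradients of the regularized objective; the hyperparameter values and the toy data are purely illustrative.

import numpy as np

def factorize(ratings, n_users, n_items, D=10, lam=0.02, lr=0.02, epochs=500):
    """Stochastic gradient descent for the regularized objective of Eq. (ref).
    `ratings` is a list of (user, item, rating) triples; returns the latent
    matrices P (user tastes) and Q (item properties)."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, D))
    Q = 0.1 * rng.standard_normal((n_items, D))
    for _ in range(epochs):
        for u, a, r in ratings:
            p, q = P[u].copy(), Q[a].copy()
            err = r - p @ q                      # prediction error
            P[u] += lr * (err * q - lam * p)     # analytic gradient steps
            Q[a] += lr * (err * p - lam * q)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)]  # toy (user, item, rating) data
P, Q = factorize(ratings, n_users=2, n_items=2)
print(P[0] @ Q[0])   # reconstructed rating for the first pair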
Koren <cit.> connected the sudden 2004 shift with improvements to the functioning of Netflix's internal recommendation system and with the company's change of the verbal descriptions attached to different ratings (from “superb” to “loved it” for rating 5, for example). The latter effect is linked to young items being chosen at random in a higher proportion than old items, for which the users apparently have more information and which they decide to consume and rate only when the chances are good that they might like them.

To include time effects, the baseline prediction provided by Eq. (<ref>) can be readily generalized to the form

r̂_iα = μ + d_i(t) + d_α(t),

where the difference terms d_i(t) and d_α(t) are assumed to change with time. How to model the time effects that influence d_i(t) and d_α(t) is an open and context-dependent problem. In any given setting, there are multiple sources of variation with time that can be considered (such as the slowly changing appreciation of a movie, the rapidly changing mood of a user, and so forth). Which of them have a significant impact on the resulting performance is best decided by formulating a complex model and measuring the prediction precision as the components of this model are “turned on” one after another. The evolution of slowly changing features can be modeled by, for example, dividing the data into bins. If time t falls in bin B(t), we can write d_α(t) = d_α + d_α, B(t), where the overall difference d_α is complemented with the difference in a given bin. By contrast, the bins needed to model fast-changing features would have to be very short and would therefore suffer from poor statistics; fast-changing features are thus better modeled with smooth functions. For example, the gradual shift in a user's bias can be modeled by writing d_i(t) = d_i + δ_i sign(t - t_i) |t - t_i|^β, where t_i is the average time stamp of i's ratings. As before, the parameter β is to be set by cross-validation. For further examples of which temporal effects to model and how, see <cit.>.

We can now finally come back to the matrix factorization part and introduce time effects there. Of the two sets of computed vectors – user preferences p_i and item features q_α – it is reasonable to assume that only the user preferences change with time. To keep things simple, we follow here the same approach as above for the user differences d_i(t). Element k of i's preference vector thus becomes

p_ik(t) = p_ik + δ_ik sign(t - t_i) |t - t_i|^β.

The complete prediction thus reads

r̂_iα(t) = μ + d_i(t) + d_α(t) + p_i(t) · q_α.

Note that there are many variants of matrix factorization and also many ways to incorporate time in them. For the sake of simplicity, we have omitted here the day-specific effect, which models the fact that a user's mood on any single day can be anything from pessimistic through neutral to optimistic, and the predictions should reflect this. When this effect is included in the model, the accuracy of the results can improve considerably (see <cit.> for details). The matrix model can be further modified to include, for example, the very fact of which items have been chosen and rated by the user. Koren <cit.> has shown that this modification also improves the prediction accuracy. To achieve further improvements, several well-performing methods, which can generally be very diverse and unrelated, can be combined into a single predictor <cit.>; in the literature, this is referred to as “blending” or “bagging”.
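For concreteness, the drifting user bias d_i(t) introduced above reads in code as follows; all parameter values are purely illustrative.

def user_bias(t, d_i, delta_i, t_i, beta):
    """Drifting user bias d_i(t) = d_i + delta_i * sign(t - t_i) * |t - t_i|**beta
    of Eq. (ref); t_i is the mean time stamp of user i's ratings and beta is
    set by cross-validation."""
    dt = t - t_i
    sign = (dt > 0) - (dt < 0)
    return d_i + delta_i * sign * abs(dt) ** beta

# A user whose ratings drift slowly upward around their mean rating day t_i = 100:
print(user_bias(t=150, d_i=-0.2, delta_i=0.01, t_i=100, beta=0.4))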
Notably, some form of such method combination has been employed by all the successful teams in the Netflix Prize <cit.>. This is a sign that no single method outperforms all the others; rather, each method captures some part of the true user behavior.

§.§ Temporal dynamics in network-based recommendation

A popular approach to recommendation is based on representing the input data with bipartite networks and studying various diffusive processes on them <cit.>. Given a set of ratings {r_iα} given by users to items (the same ratings that served as input for the matrix factorization methods in the previous section), all ratings above a chosen threshold are represented as links between the respective users and items. When computing the recommendation scores of items for a given user i in the original “probabilistic spreading” method <cit.>, one assigns one unit of “resource” to all items connected with the given user and zero resource to the others. The resource is then propagated to the user side and back in a random-walk-like process, and the resulting amounts of resource on the item side are interpreted as item recommendation scores; items with a high score are assumed to be likely to be appreciated by the given user i. The main difference from matrix factorization methods is that network-based methods assume binary user-item data as input[This makes the methods applicable also in systems where the users give no explicit ratings but only attach themselves to items that they have bought, watched, or otherwise interacted with.] and use physics-motivated processes on them.

Despite high activity in this field <cit.>, little attention has been devoted to the role that time plays in the recommendation process. While it has been noticed, for example, that the most successful network-based methods tend to recommend popular items <cit.>, this has not been explicitly connected with the fact that items need time to gain popularity, and that the recommended items are therefore likely to be old. The role of time in network-based recommendation has been directly addressed by Vidmer and Medo <cit.>. The main contribution of that paper is two-fold. First, it points out a deficiency of the usual evaluation procedure, which is based on removing links from the network at random and examining how well they are reproduced by a recommendation method. Random removal of links preferentially targets popular items – the evaluation procedure thus shares the bias that is innate to network-based recommendation methods, which then leads to an overestimation of the methods' performance. To make the evaluation fairer and also more relevant from the practical point of view, the latest links need to be removed instead. All the remaining (preceding) links are then used to reconstruct the “near future” of the network. Note that this is fundamentally different from the evaluation based on random link removal, where both past and future links are used to reconstruct the missing part.

As shown in <cit.>, the measured performance of recommendation methods indeed deteriorates once time-based evaluation is employed. The main reason why network-based methods fail in reproducing the most recent links is that while these methods generally favor popular items, the growth of real networks is typically determined not only by node popularity but also by node age, which helps recent nodes to gain popularity despite the presence of old and already popular nodes <cit.>.
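A minimal sketch of the probabilistic spreading process described at the beginning of this subsection is given below; the binary user-item matrix is a hypothetical toy example.

import numpy as np

def probs_scores(A, user):
    """Probabilistic spreading (ProbS) on a binary user-item matrix A
    (A[i, a] = 1 if user i is linked to item a). One unit of resource is
    placed on each item collected by `user`, spread to the user side (each
    item divides its resource equally among its users), and then spread back
    to the item side (each user divides equally among their items)."""
    item_degree = A.sum(axis=0).astype(float)
    user_degree = A.sum(axis=1).astype(float)
    f = A[user].astype(float)                     # initial resource on items
    to_users = A @ np.divide(f, item_degree,
                             out=np.zeros_like(f), where=item_degree > 0)
    back = np.divide(to_users, user_degree,
                     out=np.zeros_like(to_users), where=user_degree > 0)
    return A.T @ back                             # item recommendation scores

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(probs_scores(A, user=0))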
The omnipresent preferential attachment mechanism <cit.> is thus modulated by other influences – above all, node age and node fitness. The difference between the age of the items attached to the latest links and the age of the items scoring highly in a classical network-based recommendation method is illustrated in Figure <ref>.

This suggests that the performance of network-based methods could be improved by enhancing the standing of recent nodes in the recommendation lists. While there are multiple ways to achieve that, Vidmer and Medo <cit.> present a straightforward combination of a traditional method's recommendation scores with the item degree increase in a recent time period (due to the aging of nodes, high degree-increase values tend to be achieved by recent nodes; this is also illustrated in Figure <ref>). Denoting the recommendation score of item α for user i at time t as f_iα(t) and the degree increase of item α over the last τ time steps as Δ k_α(t, τ), the best-performing modified score was

f_iα(t)' = f_iα(t) Δ k_α(t,τ)/k_α(t).

Besides multiplying the original score by the recent degree increase, we divide here by the current item degree k_α(t), which aims to correct the original method's bias towards popular items (both baseline methods utilized in <cit.>, preferential spreading and similarity-preferential diffusion, have this bias). The above-described modification significantly improves the recommendation accuracy (as measured by, for example, recommendation recall) and, at the same time, tends to recommend less popular items than the original baseline methods (thus following the long-lasting pursuit of diversity-favoring recommendation methods <cit.>). The described modified method is very elementary; how best to include time in network-based recommendation methods is thus still an open question.

§.§ The impact of recommendation ranking on network growth

Once we succeed in finding recommendation methods with satisfactory accuracy, the natural question to ask is what would happen if the users actually followed the recommendations. In other words, once the input data are used to compute recommendations, what is the potential effect of the recommender system on the data? This can be answered by coupling a recommendation method with a simple agent-based model where a user can become active, choose an item (based on the received recommendations or at random), and thus establish a new link in the network, which affects the future recommendations for this user as well as, indirectly, for all the others.

As first shown by Zeng et al. <cit.>, iterating recommendation results tends to magnify the popularity differences among the items, with popular items benefiting most from recommendation. To be able to study the long-term effects of iterating recommendation, Zeng et al. <cit.> propose to simulate a closed system where new links are added and, to prevent the bipartite user-item network from eventually becoming complete, the oldest links are removed. The authors show that popularity-favoring recommendation methods generally lead to a pathological stationary state where only a handful of items occupy the attention of all users. This concentration of the users' attention is measured by the Gini index, which is the most commonly used measure of inequality <cit.>; its values of zero and one correspond to the perfectly equal setting (where, in our case, the degree of all items is the same) and the maximal inequality (where one item monopolizes all the links), respectively.
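The Gini coefficient of an item degree sequence can be computed directly from its definition via the sorted-values formula:

import numpy as np

def gini(values):
    """Gini coefficient of a degree sequence: 0 when all items have the same
    degree, approaching 1 when a single item monopolizes all links."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    index = np.arange(1, n + 1)
    return float(np.sum((2 * index - n - 1) * x) / (n * np.sum(x)))

print(gini([5, 5, 5, 5]))    # 0.0: perfectly equal item degrees
print(gini([0, 0, 0, 20]))   # 0.75: maximal inequality for n = 4, i.e., (n-1)/n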
Figure <ref> shows the stationary value of the item degree Gini coefficient that emerges by iterating the above-described network rewiring process. The authors use the popular ProbS-HeatS hybrid method <cit.>, which has one parameter, θ, that can be used to tune between accurate, popularity-favoring recommendations at θ=0 and less accurate, diversity-favoring recommendations at θ=1. As can be seen in the figure, θ values below approximately 0.6 produce recommendations of high precision, but the stationary state is very unequal. By contrast, θ close to one produces recommendations of low precision and a low Gini coefficient in the stationary state. Figure <ref> shows a hopeful intermediate regime where sacrificing a small fraction of the recommendation precision can lead to a stationary state with a substantially lower Gini coefficient. This kind of analysis is important as it questions an important limiting factor of information-filtering algorithms and suggests possible ways to overcome it.

§ APPLICATIONS OF RANKING TO EVOLVING SOCIAL, ECONOMIC AND INFORMATION NETWORKS

While complex (temporal) networks constitute a general mathematical framework to describe a wide class of real complex systems, intrinsic differences between the meaning of nodes and of their relationships across different datasets make it impossible to devise a unique, all-encompassing metric for node centrality. This has resulted in a plethora of static (section <ref>) and time-dependent (sections <ref> and <ref>) centrality metrics, each of them based on specific assumptions about what important nodes represent. We are left with a fundamental question: given a complex, possibly time-varying, network, which metric shall we use to quantify node importance in the system?

The answer certainly depends on the goals we aim to achieve with a given metric. On the other hand, for a specific goal, the relative performance of two metrics can be very different on different datasets. This makes it necessary to take a close look at the data, establish suitable and well-defined benchmarking criteria, and proceed to performance evaluation. The goal of this section is to present some case studies where this evaluation procedure can be carried out, and to show the benefits of including the temporal dimension in ranking algorithms and/or in the evaluation procedure.

In this section, we focus on three applications of node centrality metrics: the ranking of agents (papers, researchers, and journals) in academic networks (paragraph <ref>), the prediction of the economic development of countries (paragraph <ref>), and link prediction in online systems (paragraph <ref>).

§.§ Quantifying and predicting impact in academic networks

Scientific citation data are a popular object of network study for three main reasons. Firstly, accurate citation records are available for a period that spans more than a hundred years (the often-used APS data start in 1893). Secondly, all researchers have direct experience with citing and being cited, which makes it easier for them to understand and study this system. Thirdly, researchers are naturally keen on knowing how their work is perceived by others, for which the paper citation count is an obvious but rather simplistic measure. Over time, a whole discipline of scientometrics has developed whose goal is to study the ways of quantifying research impact at the level of individual papers and research journals, as well as of individual researchers and research institutions <cit.>.
We review here particularly those impact quantification aspects that are strongly influenced by the evolution (in this case largely growth) of the underlying scientific citation data.

§.§.§ Quantifying the significance of scientific papers

The most straightforward way to estimate the impact of a scientific paper is by computing its citation count (i.e., its in-degree in the citation network). One of the problems of the citation count is that it is very broadly distributed <cit.>, which makes it difficult to interpret and to use further to, for example, evaluate researchers. For example, if researcher A authored two papers with citation counts 1000 and 10, respectively, and researcher B authored two papers with citation counts 100 and 100, respectively, the average citation count of authored papers points clearly in favor of author A. However, it has long been known that the self-reinforcing preferential attachment process strongly contributes to the evolution of the paper citation count <cit.>, which suggests that one should approach high citation counts with some caution. Indeed, a recent study shows that the aggregation of logarithmic citation counts leads to a better identification of able authors than the aggregation of the original citation counts <cit.>. The limited reliability of a single highly cited paper was one of the reasons that motivated the introduction of the popular h-index, which aims to estimate the productivity and research impact of authors.

To estimate paper impact beyond the mere citation count is therefore an important challenge. The point that first begs for an improvement is that while the citation count weights all citations equally, common sense tells us that a citation from a highly influential paper should weigh more than a citation from an obscure paper. The same logic is at the basis of PageRank <cit.> (see paragraph <ref>), which has therefore been applied to the directed network of citations among scientific papers <cit.>. While PageRank can indeed identify some papers (referred to as “scientific gems” by Chen et al. <cit.>) whose PageRank score is high in comparison with their citation count <cit.>, the interpretation of the results is made difficult by PageRank's strong bias towards old papers.

The age bias of PageRank in growing networks is natural and well understood <cit.> (see paragraph <ref> for details). It can be countered either by introducing an explicit penalization of old nodes (see the CiteRank algorithm described in Section <ref>) or by a posteriori rescaling the resulting PageRank scores as proposed in <cit.> (see paragraph <ref> for a detailed description of the rescaling procedure). The latter approach has the advantage of not relying on any model or particular features of the data – it is thus widely applicable without the need for any alterations.

After demonstrating that the proposed rescaling procedure indeed removes the age bias of citation impact metrics, Mariani et al. <cit.> proceeded to validate the resulting paper rankings. To this end, they used a small group of 87 milestone papers that were selected in 2016 by the editors of the prime American Physical Society (APS) journal, Physical Review Letters, on the occasion of the journal's 50th anniversary [see the accompanying website <http://journals.aps.org/prl/50years/milestones>]. All these papers were published between 1958 (when Physical Review Letters was founded) and 2001.
The editors choosing the milestone letters thus acted with a hindsight of 15 years and more, which adds credibility to their choice. To compare the rankings of papers by various metrics, one can use the input data to produce the rankings and then assess the milestone papers' positions in each of them. Denoting the milestone papers by i=1,…,P and the ranking of milestone paper i by metric m=1,…,M as r_i^(m), one can for example compute the average ranking of all milestone papers for each metric, r^(m)_MS=1/P ∑_i r_i^(m). A low value of r^(m)_MS indicates that metric m assigns the milestone papers good (low) positions in the ranking of all papers. The results shown in Figure <ref>a indicate that the original PageRank, followed by its rescaled variant and the citation count, are the best metrics in this respect.

However, using the complete input data to compare the metrics inevitably provides a particular view: it only reflects the ranking of the milestone papers after a fixed and rather long period of time (recall that many milestone papers are now more than 30 years old). Furthermore, a metric with a strong bias towards old papers (such as PageRank) is given an undue advantage, which makes it difficult to compare its results with those of another metric with a different level of bias. To overcome these problems, Mariani et al. <cit.> provide a “dynamical” evaluation where the evolution of each milestone paper's ranking position is followed as a function of its age (time since publication). The simplest quantity to study in this way is the identification rate: the fraction of milestone papers that appear in the top 1% of the ranking as a function of the time since publication (one can of course choose a different top percentage). This dynamical approach not only provides us with more detailed information but also avoids being influenced by the age distribution of the milestone papers. The corresponding results are shown in Figure <ref>. There are two points to note. First, time-aware quantities, rescaled PageRank R(p) and CiteRank T, rank the milestone papers better than time-unaware quantities, citation count c and PageRank p, in the first 15 years after publication. Second, network-based quantities, p and R(p), outperform their counterparts based on simply counting the incoming citations, c and R(c), which indicates that taking into account the complete network structure of scientific citations is indeed beneficial for our ability to identify significant scientific contributions. As shown recently by Ren et al. <cit.>, it is not a given that PageRank performs better than the citation count: no performance improvement is found, for example, in the movie citation network constructed and analyzed by Wasserman et al. <cit.>. Ren et al. <cit.> argue that the main difference lies in the lack of degree-degree correlations in the movie citation network, which in turn undermines the basic assumptions of the PageRank algorithm. The interactive website <sciencenow.info> makes the results obtained with rescaled PageRank and other metrics on regularly updated APS data (as of the time of writing this review, the database includes all papers published until December 2016) publicly available. The very recent paper on the first empirical observation of gravitational waves, for example, reaches a particularly high rescaled PageRank value of 28.6, which is only matched by a handful of papers.[Recall that the R(p) value quantifies by how many standard deviations a paper outperforms other papers of similar age.]
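The identification rate underlying Figure <ref> is simple to compute once the milestone papers' ranks under a given metric at a given age are known; the ranks in the example below are hypothetical.

import numpy as np

def identification_rate(milestone_ranks, n_papers, top_fraction=0.01):
    """Fraction of milestone papers whose rank (1 = best) falls within the
    top `top_fraction` of all `n_papers` papers."""
    ranks = np.asarray(milestone_ranks)
    return float(np.mean(ranks <= top_fraction * n_papers))

# Hypothetical ranks of five milestone papers among 500,000 papers:
print(identification_rate([120, 4800, 51000, 300, 9], n_papers=500_000))
# top 1% = rank <= 5000 -> four of the five milestones identified -> 0.8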
Despite being very recent, the gravitational-wave paper is ranked 17th out of 593,443 papers, as compared to its much lower rankings by the citation count (rank 1,763) and PageRank (rank 12,277). Among the recent papers that score well by R(p) are seminal contributions to the study of graphene (The electronic properties of graphene from 2009) and topological insulators (two papers from 2010 and 2011); both topics have recently received the Nobel Prize in Physics. In summary, rescaled PageRank allows one to appraise the significance of published works much earlier than other methods. Thanks to its lack of age bias, the metric can also be used to select both old and recent seminal contributions to a given research field.

The rescaling procedure also has its pitfalls. First, it rewards papers with immediate impact and, conversely, penalizes papers that need a long time to prove their worth, such as the so-called sleeping beauties <cit.>. Second, the early evaluation of papers with rescaled quantities naturally results in an increased rate of false positives – papers that, due to a few quickly obtained citations, initially reach high values of R(p) and R(c) whose values substantially decrease later[A spectral-clustering method to classify papers according to their citation trajectories has recently been proposed by Colavizza and Franceschet <cit.>. According to their terminology, papers with a large short-term impact and a fast decay of attractiveness for new citations are referred to as sprinters, as opposed to marathoner papers that exhibit a slow decay.]. If we take into account that those early citations can also be “voices of disagreement”, it is clear that early rescaled results need to be used with caution. A detailed study of these issues remains a future challenge.

§.§.§ Predicting the future success of scientific papers

When, instead of paper significance, we are only interested in the eventual number of citations, we may follow the approach demonstrated by D. Wang et al. <cit.>, where the network growth model with preferential attachment and aging <cit.> was calibrated on citation data and the aging was found to decay log-normally with time. While the papers differ in the parameters of their log-normal aging decay, these differences do not influence the predicted long-term citation count, which is shown to depend only on the paper fitness f_i and overall system parameters. If the paper fitness is obtained from, for example, a short initial period of the citation count growth, the model can thus be used to predict the eventual citation count. As the authors show, papers with similar estimated fitness indeed achieve much more similar eventual citation counts than papers that had similar citation counts at the moment of estimation. While J. Wang et al. <cit.> point to a lack of prediction power of the method, D. Wang et al. <cit.> explain that this is a consequence of model over-fitting and discuss how it can be avoided.

A recent method proposed by Cao et al. <cit.> predicts the future citations of papers by building paper popularity profiles and finding the best-matching profile for any given paper; its results are shown to outperform those obtained by the approach of D. Wang et al. <cit.> (see Figure <ref>). Besides machine-learning approaches being traditionally strong when enough data are available, the method is not built on any specific model of paper popularity growth, as it directly uses empirical paper popularity profiles.
This can be an important advantage, given that the network growth model used in <cit.> certainly lacks some features of the real citation network. For example, Petersen et al. <cit.> considered the impact of author reputation on the citation dynamics and found that the author's reputation dominates the citation growth rate of little-cited papers. The crossover citation count between the reputation-dominated and the preferential-attachment-dominated dynamics is found to be 40 (see <cit.> for details). Golosovsky and Solomon <cit.> went one step further and proposed a thoroughly modified citation growth model built on a copying-redirection-triadic closure mechanism. The use of these models in the prediction of paper success has not been tested yet.

A comparison of a paper's in-degree with its appearance time, labeled as rescaled in-degree in paragraph <ref>, has been used by Newman <cit.> to find papers that are likely to prove highly successful in the future (see the detailed discussion in Section <ref>). Five years later <cit.>, the previously identified outstanding papers were found to significantly outperform a group of randomly chosen papers as well as a group of randomly chosen papers with the same number of citations. To specifically address the question of predicting the standing of papers in the citation network consisting solely of the citations made in the near future, Ghosh et al. <cit.> use a model of reference-list creation that is based on following references in existing papers whilst preferring references to recent papers (see paragraph <ref> for details).

An interesting departure from works that build on the citation network among papers is presented by Sarigol et al. <cit.>, who attempt to predict a paper's success solely on the basis of the position of the paper's authors in the co-authorship network. On a dataset of more than 100,000 computer science papers published between 1996 and 2007, they show that among the papers with an author who is simultaneously in the top 10% by betweenness centrality, degree centrality, k-core centrality, and eigenvector centrality, 36% are eventually among the top 10% most cited papers five years after publication. By combining the centrality of a paper's co-authors with a machine-learning approach, this fraction (which represents the prediction's precision) can be further increased to 60%, whilst the corresponding recall is 18% (i.e., out of the 10% top-cited papers, 18% are found by the given predictor).

§.§.§ Predicting the future success of researchers

Compared with predicting the future success of a scientific publication, predicting the future success of an individual researcher is much more consequential because it is crucial in distributing the limited research resources (research funds, permanent positions, etc.) among a comparatively large number of researchers. Will a researcher with a highly cited paper produce more papers like that or not? Early on, the predictive power of the h-index was studied by Hirsch himself in <cit.>, with the conclusion that this metric predicts a researcher's future success better than other simple metrics (total citation count, citations per paper, and total paper count).

It is probably naive to expect that a single metric, regardless of how well designed, can capture and predict success in such a complex endeavor as research. Acuna et al.
Acuna et al. <cit.> attempted to predict the future h-index of researchers by combining multiple features using linear regression. For example, the prediction formula for the next year's h-index of neuroscientists had the form h_+1 = 0.76 + 0.37√(n) + 0.97h - 0.07y + 0.02j + 0.03q, where n, h, y, j, and q are the number of papers, the current h-index, the number of years since the first paper, the number of distinct journals in which the author has published, and the number of articles in top journals, respectively. However, this work was later criticized by Penner et al. <cit.>. One of the discussed problems is that predicting the h-index is a relatively easy task, as this quantity changes slowly and can only grow; the baseline prediction of an unchanged h-index is thus likely to be rather accurate already. According to Penner et al., one should instead aim to predict the change of the author's h-index and evaluate the predictions accordingly; in this way, no excess confidence results from setting ourselves a simple task. Another problem of the original evaluation is that it considered all researchers together and thus masked the substantially larger errors made for young researchers who, at the same time, are the most likely subjects of such an evaluation, for instance by hiring committees. Young researchers who have recently published their first first-author paper are precisely the target of Zhang et al. <cit.>, who use a machine-learning approach to predict which young researchers will be most cited in the near future, and show that their method outperforms a number of benchmark methods.
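Since the fitted coefficients are reported above, the formula can be evaluated directly; the sketch below does just that (the example feature values are invented):

import math

def predicted_h_next_year(n_papers, h_index, years_since_first_paper,
                          n_distinct_journals, n_top_journal_articles):
    # next year's h-index from the linear regression of Acuna et al.,
    # with the coefficients quoted in the text (fitted for neuroscientists)
    return (0.76
            + 0.37 * math.sqrt(n_papers)
            + 0.97 * h_index
            - 0.07 * years_since_first_paper
            + 0.02 * n_distinct_journals
            + 0.03 * n_top_journal_articles)

# hypothetical early-career researcher
print(predicted_h_next_year(25, 8, 6, 10, 3))  # ~10.2

Note that the dominant term is 0.97h, which illustrates the criticism by Penner et al.: a near-unit coefficient on the current h-index means that most of the predicted value is simply the unchanged baseline.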
As recently shown by Sinatra et al. <cit.>, the most-cited paper of a researcher is equally likely to appear at the beginning, in the middle, or at the end of the researcher's career. In other words, researchers do not seem to become better, or worse, during their careers. Consequently, the authors develop a model which assigns each scientist an ability factor Q that multiplicatively raises (Q>1) or lowers (Q<1) the potential impact of their next work. The new Q metric is empirically shown to be stable over time and thus suitable for predicting the authors' future success, as documented by predicting the h-index. Note that Q is by definition an age-unbiased metric and thus allows one to compare scientists of different ages without any additional treatment (such as the one presented in Section <ref> for the PageRank metric). The literature on this topic is large, and we present here only a handful of relevant works. The interested reader is referred to a recent essay on data-driven predictions of science development <cit.> that summarizes the achieved progress, future opportunities, and potential impact.

§.§.§ Ranking scientific journals through non-Markovian PageRank

In section <ref>, we have described node centrality metrics for temporal networks, and we have shown that higher-order networks describe and predict actual flows in real networks better than memory-less networks (section <ref>). In this section, we focus on a particular application of memory-aware centrality metrics: the ranking of scientific journals <cit.>. Weighted variants of the PageRank algorithm <cit.> attempt to infer the importance of a journal based on the assumption that a journal is important if its articles are cited relatively many times by papers published in other important journals. However, such PageRank-based methods are implicitly built on the Markovian assumption, which may provide a poor description of real network traffic (see section <ref>). As pointed out by <cit.>, this is the case, for example, for multi-disciplinary journals like Science or PNAS, which receive citation flows from journals of various fields. If memory effects are neglected, multi-disciplinary journals redistribute the received citation flow toward journals of diverse fields. The flow then repeatedly crosses field boundaries, something a real scientific reader is arguably unlikely to do; for this reason, the memory-less algorithm is unreliable as a model of scholars' browsing behavior.

By contrast, in a second-order Markov model, if a multi-disciplinary journal J receives a certain flow from a journal which belongs to field F, then journal J tends to redistribute this flow to journals that belong to the same field F (see panels c-d of Fig. 4 in <cit.> for an example). This results in a sharp reduction of the flow that crosses disciplinary boundaries (referred to as “flow leak” in <cit.>) and in a clear increase of the within-community flow. As a result, when second-order effects are included, multidisciplinary journals (such as PNAS and Science) receive a smaller net flow, whereas specialized journals (such as Ecology and Econometrica) gain flow and thus improve their ranking (see <cit.> for the detailed results). This property also makes the ranking less subject to manipulation, which is a worrying problem in quantitative research evaluation <cit.>. As we have just seen, if a paper in a journal specialized in Ecology cites a multi-disciplinary journal, this credit is likely to be redistributed across journals of diverse fields in a first-order Markov model; under a memory-less ranking of journals, a specialized journal would thus be tempted to boost its score through editorial policies that discourage citations to multi-disciplinary journals. The second-order Markov model suppresses a large part of the flow leaks, which makes this kind of strategic citation manipulation less rewarding. The ranking of journals based on the second-order Markov model has also been shown to be more robust with respect to the selection of the journals included in the analysis <cit.>, an additional positive property of the second-order model.

In a scientific landscape where metrics of productivity and impact are proliferating <cit.>, assessing the biases and performance <cit.> of widely used metrics and devising improved ones is critical. For example, the widely used impact factor suffers from a number of worrying issues <cit.>. The results presented by Rosvall et al. <cit.> indicate that the inclusion of temporal effects holds promise to improve the current rankings of journals, and hopefully future research will continue exploring this direction.
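A minimal sketch of the second-order (memory) ranking idea follows: journal citation paths of length two are used to build transitions between “memory nodes” (ordered journal pairs), a stationary distribution is obtained by power iteration with uniform teleportation, and each journal's score aggregates the stationary weights of the memory nodes pointing at it. The toy path counts and the teleportation scheme are illustrative assumptions, not the exact algorithm of the cited works.

from collections import defaultdict

# hypothetical citation paths A -> B -> C (journal trigrams), with counts
paths = {("Ecology", "PNAS", "Ecology"): 40,
         ("Ecology", "PNAS", "Econometrica"): 2,
         ("Econometrica", "PNAS", "Econometrica"): 30,
         ("Econometrica", "PNAS", "Ecology"): 3,
         ("PNAS", "Ecology", "PNAS"): 10,
         ("PNAS", "Econometrica", "PNAS"): 8}

# transitions between memory nodes (i, j) -> (j, k)
out_weights = defaultdict(dict)
for (i, j, k), w in paths.items():
    out_weights[(i, j)][(j, k)] = out_weights[(i, j)].get((j, k), 0) + w

memory_nodes = set(out_weights)
for targets in out_weights.values():
    memory_nodes.update(targets)
memory_nodes = sorted(memory_nodes)

alpha = 0.85  # damping, as in ordinary PageRank
p = {m: 1.0 / len(memory_nodes) for m in memory_nodes}
for _ in range(100):  # power iteration
    nxt = {m: (1 - alpha) / len(memory_nodes) for m in memory_nodes}
    for m, prob in p.items():
        targets = out_weights.get(m)
        if not targets:  # dangling memory node: teleport uniformly
            for mm in memory_nodes:
                nxt[mm] += alpha * prob / len(memory_nodes)
            continue
        total = sum(targets.values())
        for mm, w in targets.items():
            nxt[mm] += alpha * prob * w / total
    p = nxt

# journal score: total stationary weight of memory nodes (i, j) ending in j
journal_score = defaultdict(float)
for (i, j), prob in p.items():
    journal_score[j] += prob
print(dict(journal_score))

Because the state remembers where the flow came from, flow arriving at the multi-disciplinary journal from Ecology is predominantly sent back to Ecology, which is exactly the suppression of “flow leaks” described above.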
§.§ Prediction of economic development of countries

Predicting the future economic development of countries is of crucial importance in economics. GDP per capita is the traditional monetary variable used by economists to assess a country's development <cit.>. Even though it does not account for critical elements such as income equality, environmental impact, or social costs <cit.>, GDP is traditionally regarded as a reasonable proxy for a country's economic wealth, and especially for that of its industry <cit.>. Twice a year, in April and in October, the International Monetary Fund (IMF) publishes projections of the future GDP growth of countries <cit.>; the precise procedure is not detailed, but it combines many factors, such as the oil price, food prices, and others.

Development predictions based on complex networks do not aim to beat the projections made by the IMF. They rather aim to extract valuable information from international trade data with a limited number of network-based metrics, thus attempting to isolate the key factors that drive economic development. The respective research area, known as Economic Complexity, has attracted considerable attention in recent years both from the economics <cit.> and the physics community <cit.>. Two metrics, the Method of Reflections <cit.> and the Fitness-Complexity metric <cit.>, have been designed to rank countries and the products that they export in the international trade network. The definitions of the two metrics (and of some of their variants <cit.>) can be found in section <ref> and section <ref>, respectively. Both algorithms use a complex network approach to represent the international trade data: one builds a bipartite trade network where countries are connected with the products that they export. The network is built using the concept of Revealed Comparative Advantage (RCA; we refer to <cit.> for details), which distinguishes between “significant” and “non-significant” links in the country-product export network. The resulting network is binary: a country is either considered to be an exporter of a product or it is not.
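The sketch below shows one common way to binarize an export matrix via RCA (the Balassa index); the threshold RCA ≥ 1 and the toy export volumes are assumptions made for illustration.

import numpy as np

# hypothetical export volumes: rows = countries, columns = products
exports = np.array([[120.0, 10.0, 0.0],
                    [ 30.0, 60.0, 5.0],
                    [  5.0,  5.0, 90.0]])

# Balassa index: (product p's share in the country's export basket)
# divided by (product p's share of total world exports)
country_share = exports / exports.sum(axis=1, keepdims=True)
world_share = exports.sum(axis=0) / exports.sum()
rca = country_share / world_share

# binary country-product network: country c "exports" product p if RCA >= 1
M = (rca >= 1.0).astype(int)
print(M)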
Once network-based metrics to rank countries and products in world trade have been defined, a natural question arises: can we use these metrics to predict the future development of countries? This question is not univocal: several prediction-related tasks can be designed, and the relative performance of the metrics can differ between tasks. We present two possible approaches below.

§.§.§ Prediction of GDP through linear regression

Hidalgo and Hausmann <cit.> have applied the standard linear-regression approach to the GDP prediction problem: one designs a linear model where the future GDP, log[GDP(t+Δ t)], depends linearly on the current GDP, log[GDP(t)], and on network-based metrics. The linear-regression equation used by <cit.> reads

log[GDP(t+Δ t)] - log[GDP(t)] = a + b_1 log[GDP(t)] + b_2 k_i^(n) + b_3 k_i^(n+1),

where k_i^(n) denotes the nth-order Method-of-Reflections score of country i (hereafter MR-score) as determined with Eq. (<ref>), while a, b_1, b_2, b_3 are coefficients to be determined by a fit to the data. The results of <cit.> show that the Method-of-Reflections score explains significantly more of the variance of economic growth than measures of governance and institutional quality, education system quality, and standard competitiveness indexes such as the World Economic Forum Global Competitiveness Index. We refer the reader to <cit.> for the detailed results, and to <http://atlas.media.mit.edu/en/> for a web platform with rankings by the MR-score and visualizations of the international trade data.
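A minimal sketch of such a growth regression with ordinary least squares follows; the panel data are synthetic, whereas in the cited work the regression is run on real GDP and MR-score data.

import numpy as np

rng = np.random.default_rng(0)
n_countries = 50

# synthetic panel: current log-GDP and two MR-score iterations per country
log_gdp_t = rng.normal(9.0, 1.0, n_countries)
k_n = rng.normal(0.0, 1.0, n_countries)                # k_i^(n)
k_n1 = 0.7 * k_n + rng.normal(0.0, 0.5, n_countries)   # k_i^(n+1)

# synthetic "true" growth: converging in GDP, increasing in complexity
growth = (0.5 - 0.03 * log_gdp_t + 0.10 * k_n + 0.05 * k_n1
          + rng.normal(0.0, 0.02, n_countries))

# design matrix [1, log GDP(t), k^(n), k^(n+1)] and least-squares fit
X = np.column_stack([np.ones(n_countries), log_gdp_t, k_n, k_n1])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
a, b1, b2, b3 = coef
print(f"a={a:.3f}, b1={b1:.3f}, b2={b2:.3f}, b3={b3:.3f}")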
§.§.§ Prediction of GDP through the method of analogues

The linear-regression approach of <cit.> has been criticized by Cristelli et al. <cit.>, who point out that it assumes that there is a uniform trend to uncover; in other words, it assumes that all countries respond to a variation in their score in the same way, i.e., with the same regression coefficients. Cristelli et al. <cit.> use the country fitness score defined by Eq. (<ref>) to show that, instead, the dynamics of countries in the Fitness-GDP plane is very heterogeneous, and that the future position of countries is predictable only in a limited region of the plane. This undermines the very basic assumption of the linear-regression technique <cit.> and calls for a different prediction tool.

Cristelli et al. <cit.> borrow a well-known method from the literature on dynamical systems <cit.>, the method of analogues, to study and predict the dynamics of countries in the Fitness-GDP plane. The basic idea of the method is that we can try to predict the future of a system by using our knowledge of the system's past history; its main goal is to reveal simple patterns and attempt prediction in systems whose laws of evolution are unknown. While <cit.> only considers the Fitness-GDP plane, with Fitness determined through Eq. (<ref>), we consider here the score-GDP plane for three different scores: fitness, degree, and the Method-of-Reflections (MR) score[To obtain the MR score, we perform two iterations of Eq. (<ref>). We have checked that the performance of the Method of Reflections does not improve when the number of iterations is increased.]. In the following, we analyze the international trade BACI dataset (1998-2014), composed of N=261 countries and M=1241 products[In international trade datasets, different classifications of products can be found. In the BACI dataset used here (documented by Gaulier and Zignago in CEPII Working Paper 2010-23), products are classified according to the Harmonized System 2007 scheme, which classifies the products hierarchically by assigning them a six-digit code. For the analysis presented in this paragraph, we aggregated the products to the fourth digit.]; from this dataset, we use the procedure introduced by Hidalgo and Hausmann <cit.> to construct, year by year, the country-product network that connects the countries with the products they export.

First insights into the heterogeneity of the country dynamics in the score-GDP plane can be gained through the following procedure (a minimal implementation is sketched below):

* Split the score-GDP plane (on logarithmic scales) into equally-sized boxes.
* For each country, record the box that it occupies in a given year as well as the box that it occupies ten years later.
* Denote by N^i_t the total number of countries in box i at year t, and by n^i_t the number of distinct boxes occupied ten years later by the countries originating from box i.
* For each box, compute the dispersion 𝒞^i_t = (n^i_t-1)/(N^i_t-1).

A box whose countries all end up in the same box after ten years has zero dispersion, whereas a box whose N^i_t countries end up in N^i_t different boxes has dispersion one. For a given box, a small value of 𝒞^i_t therefore means that the future evolution of the countries located in that box is more predictable than the evolution of countries located in boxes with large dispersion values.
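The following sketch implements the box-dispersion computation on synthetic trajectories (the box counts, ranges, and random-walk dynamics are arbitrary choices):

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n_countries, n_years = 100, 15
horizon, n_bins = 10, 8

# synthetic trajectories of (log score, log GDP per capita)
score = np.cumsum(rng.normal(0.0, 0.1, (n_countries, n_years)), axis=1)
gdp = 9.0 + np.cumsum(rng.normal(0.02, 0.1, (n_countries, n_years)), axis=1)

def box(x, y):
    # index of the equally-sized box containing the point (x, y)
    bx = np.digitize(x, np.linspace(score.min(), score.max(), n_bins))
    by = np.digitize(y, np.linspace(gdp.min(), gdp.max(), n_bins))
    return (int(bx), int(by))

members = defaultdict(list)  # (box, year) -> list of countries in it
for c in range(n_countries):
    for t in range(n_years - horizon):
        members[(box(score[c, t], gdp[c, t]), t)].append(c)

dispersion = {}
for (b, t), cs in members.items():
    if len(cs) < 2:
        continue  # dispersion (n-1)/(N-1) is undefined for a single country
    final_boxes = {box(score[c, t + horizon], gdp[c, t + horizon]) for c in cs}
    dispersion[(b, t)] = (len(final_boxes) - 1) / (len(cs) - 1)

print(f"mean dispersion over {len(dispersion)} boxes:",
      np.mean(list(dispersion.values())))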
In agreement with the results in <cit.>, we observe (Fig. <ref>) an extended “low-dispersion” (i.e., predictable) region in the Fitness-GDP plane, which corresponds essentially to high-fitness countries. This means that for high-fitness countries, fitness is a reliable variable to explain the future evolution of the system. By contrast, for low-fitness countries, fitness fails to capture the future dynamics of the system, and additional variables (e.g., institutional or education quality indicators) may be necessary to improve the prediction performance. Cristelli et al. <cit.> point out that this strongly heterogeneous dynamics goes undetected with standard forecasting tools such as linear regressions. Interestingly, an extended low-dispersion region is also found in the degree-GDP plane, whereas the low-dispersion region is comparatively small in the MR-score-GDP plane.

The complex information provided by the dispersion values in a two-dimensional plane can be reduced by computing a weighted mean of dispersion, which weights the contribution of each box by the number of events that fall into it. The obtained values of the weighted mean of dispersion are 0.41, 0.71, and 0.35 for degree, the Method of Reflections, and Fitness, respectively. These numbers suggest that while both degree and Fitness significantly outperform the predictive power of the Method of Reflections, Fitness is the best metric by some margin; the results are qualitatively the same when a different number of boxes is used. These findings indicate that fitness and degree can be used to make predictions in the score-GDP plane, whereas the Method of Reflections cannot be used to make reliable predictions with the method of analogues. The simple aggregation method used here to quantify the prediction performance of the metrics can be improved in the future to better understand the methods' strong and weak points. An illustrative video of the evolution of countries in the Fitness-GDP plane, available at <http://www.nature.com/news/physicists-make-weather-forecasts-for-economies-1.16963>, provides a general perspective on the two above-described economic-complexity approaches to GDP prediction.

While the dispersion map provides multiple insights into the prediction performance of the evaluated metrics, one may wish to summarize the results in an aggregate performance score that allows for a direct and simple comparison between the metrics. The qualitative description of the dynamics in the score-GDP plane may, however, depend on the chosen number of boxes as well as on the range of the axes. To take this possible effect into account, we introduce the following predictability score

𝒫^S = 2 𝒫 log(N_occ) / n,

where 𝒫 = ∑_i,t (1-𝒞^i_t) is summed over the boxes that contain at least five events (a country located in a box in a given year counts as one event), n is the total number of events in these boxes, N_occ is the number of such boxes, and 𝒞^i_t is the dispersion defined above (adapted from <cit.>). Since the complement of the dispersion enters the sum, a high value of 𝒫 indicates accurate predictions. The factor log(N_occ) heuristically rewards predictions that remain accurate when the plane is resolved into many occupied boxes: finer boxes generally lower the raw predictability, but accurate predictions of countries ending in small regions carry more information.
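Continuing the sketch above, the predictability score can be computed directly from the box occupancies and dispersions (the five-event threshold is the one stated in the text):

import math

# reuse `members` and `dispersion` from the previous sketch
occupied = [key for key in members if len(members[key]) >= 5]
n_events = sum(len(members[key]) for key in occupied)
P = sum(1.0 - dispersion[key] for key in occupied)
N_occ = len(occupied)

P_S = 2.0 * P * math.log(N_occ) / n_events
print(f"predictability score P^S = {P_S:.3f}")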
The results are shown in Fig. <ref> as a function of the number of occupied boxes N_occ. Confirming the intuition from Fig. <ref>, degree and fitness are the two best-performing metrics. The MR-score performs much worse, and its predictability score depends heavily on the choice of the axes' limits, as is evident from the large standard deviations of its curves. The predictive power of degree is slightly larger than that of the fitness score; however, it must be noted that degree provides little information on the importance of products in the network <cit.>, which makes network information indispensable to reliably quantify the score of products. An interesting study of the dynamics of products based on the method of analogues has been recently proposed by Angelini et al. <cit.>.

§.§ Link prediction with temporal information

Link prediction is an important problem in network analysis <cit.> which aims to identify potential missing links in a network. This is particularly useful in settings where obtaining information about the network is costly, such as in gene-interaction experiments in biology <cit.>; there it is useful to prioritize the experiments by first studying the node pairs that are the most likely to be actually connected by a link. The problem is also relevant in the study of social networks <cit.>, for example, where it can tell us about their future evolution. There is also an interesting intertwined setting where information diffuses over the social network and this diffusion can influence the process of social link creation <cit.>. A common approach to link prediction is based on choosing a suitable node similarity metric, computing the similarity for all yet unconnected node pairs, and finally assuming that links are most likely to appear among the nodes that are the most similar by the chosen metric. In evolving networks, node similarity should take into account not only the current network topology but also time information (such as the node and link creation times). The simple common-neighbor similarity of nodes i and j can be computed as s_ij=∑_α A_iα A_jα, where 𝖠 is the network's adjacency matrix. See <cit.> for more information on node similarity estimation and link prediction in general.

Usual node similarity metrics ignore the time information: old and new links are assumed to have the same importance, and the same holds for nodes. There are various ways in which time can be included in the computation of node similarity. The simple common-neighbors metric, for example, can be generalized by assuming that the weight of common neighbors decays exponentially with time, yielding s_ij(t)=∑_n∈Γ(i, t)∩Γ(j, t) e^-λ(t-t_in) e^-λ(t-t_jn), where Γ(i,t) is the set of neighbors of node i at time t, t_in is the time when the edge between i and n was established, and λ is a tunable decay parameter. Liu et al. <cit.> introduce time in the resource-allocation dynamics on bipartite networks (first described in <cit.>). Other originally static similarity metrics can be modified in a similar way <cit.>. Note that in constructing time-reflecting similarity metrics there remain a number of open problems: how to choose the time-decay parameters, whether edges with different timestamps play different roles in the dynamics, whether and how to include the users' potential preference for fresher or older objects, and so on <cit.>.

We now illustrate how time can be easily introduced in a recently proposed link-prediction method based on a perturbed matrix <cit.>; this time-aware generalization has been recently presented in <cit.>. The original method assumes an undirected network of N nodes. A randomly chosen fraction p^H of its links constitutes a perturbation set with the adjacency matrix ΔA, and the remaining links correspond to the adjacency matrix A^R; the complete network's adjacency matrix is A=A^R+ΔA. Since A^R is real and symmetric, it has a set of N orthonormal eigenvectors x_k corresponding to eigenvalues λ_k and can be written as A^R=∑_k=1^N λ_k x_k x_k^T. The complete adjacency matrix, A, has generally different (perturbed) eigenvalues and eigenvectors, which can be written as λ_k+Δλ_k and x_k+Δx_k, respectively. Assuming that the perturbation is small, one finds Δλ_k = (x_k^T ΔA x_k)/(x_k^T x_k), and A can be approximated as

Ã = ∑_k=1^N (λ_k + Δλ_k) x_k x_k^T,

where the eigenvectors are kept unperturbed. This matrix can be considered a linear approximation (extrapolation) of the network's adjacency matrix based on A^R.
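A minimal sketch of this structural perturbation method on a toy network (the random graph and the split fraction are illustrative):

import numpy as np

rng = np.random.default_rng(2)
N, p_edge, p_H = 30, 0.2, 0.1

# random undirected toy network
A = np.triu((rng.random((N, N)) < p_edge).astype(float), k=1)
A = A + A.T

# randomly move a fraction p_H of the links into the perturbation set
edges = np.argwhere(np.triu(A, k=1) > 0)
perturb = edges[rng.random(len(edges)) < p_H]
dA = np.zeros_like(A)
for i, j in perturb:
    dA[i, j] = dA[j, i] = 1.0
A_R = A - dA

# first-order perturbed reconstruction A_tilde
lam, X = np.linalg.eigh(A_R)                 # columns of X are orthonormal x_k
d_lam = np.einsum("ik,ij,jk->k", X, dA, X)   # x_k^T dA x_k for each k
A_tilde = (X * (lam + d_lam)) @ X.T

# score unobserved node pairs by A_tilde; in evaluation, one checks
# whether the removed links in dA receive high scores
scores = A_tilde - 1e9 * A_R                 # mask links already in A_R
i, j = np.triu_indices(N, k=1)
top = np.argsort(scores[i, j])[::-1][:5]
print("top predicted links:", list(zip(i[top], j[top])))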
As discussed in <cit.>, the elements of Ã can be considered as link score values in the link-prediction problem. However, the original method divides the training and probe sets randomly, regardless of time, which implies the unattainable scenario in which links that appeared later are used to predict links that appeared earlier. It is more appropriate to let the most recent links constitute the perturbation set (again, a fraction p^H of all links is chosen). To evaluate link prediction in a system where time plays a strong role, one must not choose the probe at random but instead divide the network by time, so that strictly future links are the ones to be predicted (see a similar discussion in Sec. <ref>). To reflect the temporal dynamics of the network's growth, and thus appropriately address the time-based probe division, Wang et al. introduce the “activity” of node i as s_i = k_i^H / k_i, where k_i^H and k_i are the degrees of node i in the perturbation set and in the complete data <cit.>, respectively. Node activity s_i aims to capture the trend of node i's popularity growth: whether it is on the rise or already slowing down. In addition, active nodes are more likely to be connected by a new link than less active ones. The elements of the eigenvector x_k can then be modified by including node activity as

x_k,i' = x_k,i (1 + α s_i),

where α is a tunable parameter (by setting α=0, node activity is ignored and the original eigenvectors are recovered). The modified eigenvectors can be plugged into (<ref>), yielding the time-aware perturbed matrix

Ã' = ∑_k=1^N (λ_k+Δλ_k) x_k' x_k'^T.

As before, the elements of Ã' can be interpreted as link-prediction scores for the respective node pairs.
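Continuing the sketch above, the time-aware variant only requires the node activities and the rescaled eigenvectors; in a real application the perturbation set would consist of the most recent links, and the value of α is a tunable assumption.

import numpy as np

# reuse A, A_R, dA, lam, d_lam, X from the previous sketch;
# in the time-aware variant, dA should contain the *most recent* links
alpha = 1.0
k_full = A.sum(axis=1)
k_H = dA.sum(axis=1)
activity = np.divide(k_H, k_full, out=np.zeros_like(k_H), where=k_full > 0)

X_act = X * (1.0 + alpha * activity)[:, None]    # x'_{k,i} = x_{k,i}(1 + alpha s_i)
A_tilde_act = (X_act * (lam + d_lam)) @ X_act.T  # time-aware perturbed matrix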
Wang et al. <cit.> evaluate the performance of several prediction methods on four distinct real datasets: (1) Hypertext, a network of face-to-face contacts of the attendees of the ACM Hypertext 2009 conference <cit.>; (2) Haggle, an undirected network representing contacts between people measured by carried wireless devices <cit.>; (3) Infec, a network describing the face-to-face behavior of people during the exhibition INFECTIOUS: Stay Away in 2009 <cit.>; and (4) UcSoci, a directed network of messages between the users of an online community of students from the University of California, Irvine <cit.>. Of the evaluated methods, four assign scores to links based on node similarity metrics (Common Neighbors, CN; Adamic-Adar, AA; Resource Allocation, RA; Katz similarity), one is based on random walks (Superposed Random Walk, SRW), one is the original Structural Perturbation Method (SPM), and the last is the above-described time-aware structural perturbation method (PBSPM). See the original reference <cit.> for details of the evaluation procedure. Table <ref> compares the methods' performance, revealing that taking the time evolution of node popularity into account dramatically improves the prediction accuracy. Table <ref> also reports the average activity of the endpoints of the predicted links (an endpoint shared by two predicted links is counted twice); since the training and probe sets are divided by time, wrongly predicted links never appear in the data and have no timestamps, so endpoint activity is used to characterize how active the node pairs favored by each method are. PBSPM achieves the highest values in all four networks, which means that it predicts links between active rather than inactive nodes, in line with the actual growth of the networks. This observation may also be relevant beyond link prediction, for example when planning new connections in growing infrastructure networks such as power grids or road networks. Apart from the methods discussed above, several further time-aware approaches to link prediction have been proposed recently, such as similarity fusion in multilayer networks <cit.> and the use of real-world information-flow features <cit.>.

§ PERSPECTIVES AND CONCLUSIONS

After reviewing the recent progress in the actively studied problem of node ranking in complex networks, the reader may arguably feel overwhelmed by the sheer number and variability of the available static and time-aware metrics. Indeed, the understanding of which metric to use in which situation is often lacking, and one relies on “proofs” of metric validity that are often anecdotal, based on a qualitative comparison between the rankings provided by different metrics <cit.>. This issue is quite worrying, as a wrongly used metric can have immediate real consequences, as is the case, for example, in the quantitative evaluation of scientific impact, where the used metrics affect funding decisions and academic careers. Waltman <cit.> points out that since the information that can be extracted from citation networks is inherently limited, researchers should avoid introducing new metrics unless they bring a clear added value to existing metrics.
We can only add that this recommendation applies equally to other fields where metrics tend to multiply. Coming back to the issue of metric validation, we believe that an extensive and thorough performance evaluation of (both static and time-aware) network ranking metrics is an important direction for future research. Of course, any performance evaluation depends on the task that the metric is meant to accomplish. We outline possible methods to validate centrality metrics in paragraph <ref>. Whichever validation procedure we choose for the metrics, we cannot neglect the possible biases of the metrics and their impact on the system's evolution. In this review, we focused on the age bias of static algorithms and possible ways to counteract it. However, the temporal dimension is not the only possible source of bias for ranking algorithms. We discuss other possible sources of information bias and their relevance in our highly connected world in paragraph <ref>.

§.§ Which metric to use? Benchmarking the ranking algorithms

Let us briefly discuss three basic ways to validate network-based ranking techniques.

Prediction of future edges. While prediction is vital in physics, where wrong predictions of real-world behavior can rule out competing theories, it has not yet gained a central role in the social and economic sciences <cit.>. Even though many metrics have been evaluated in recent years according to their predictive power, as outlined in several paragraphs of section <ref>, wide differences among the adopted evaluation procedures often make it impossible to assess the relative usefulness of the metrics. In agreement with <cit.>, we believe that an increased standardization of prediction evaluation would foster a better understanding of the predictive power of the different metrics. With the growing importance of data-driven prediction <cit.>, benchmarking ranking algorithms through their predictive power may deepen our understanding of which metrics better reproduce the actual behavior of the nodes and eventually lead to useful applications, perhaps in combination with data-mining techniques <cit.>.

Using external information to evaluate the metrics. External information can be beneficial both for comparing different rankings and for interpreting which properties of the system they actually reflect. For example, network centrality metrics can be evaluated according to their ability to identify expert-selected significant nodes. Benchmarks of this kind include (but are not limited to) the identification of expert-selected movies <cit.>, of awarded conference papers <cit.> or editor-selected milestone papers <cit.>, and of researchers awarded with international prizes <cit.>. The rationale behind this kind of benchmarking is that if a metric captures the importance of the nodes well, it should rank at the top the nodes whose outstanding importance is undeniable. Results on different datasets <cit.> show that there is no unique metric that outperforms the others in all the studied problems. An extensive and thorough investigation of which network properties make different metrics fail or succeed in a given task is a needed step for future research.
First steps have been taken in <cit.>, which introduces a time-aware null model to reveal the role of network structural correlations in determining the relative performance of local and eigenvector-based metrics in identifying significant nodes in citation networks. Expert judgment is not the only source of external information that can be used to validate the metrics. For example, Smith et al. <cit.> and Mao et al. <cit.> have applied network centrality metrics to mobile-phone communication networks and compared the region-level scores produced by the metrics with the income levels of the regions. The effectiveness of centrality metrics can also be validated against prior knowledge of the nodes' function in the network: for example, the scores produced by network centrality metrics have been compared with known neuronal activity in neural networks <cit.>. External information is beneficial not only for evaluating the metrics but also for moving toward a better understanding of what the metrics actually represent. For instance, the long-standing assumption that citation counts are a proxy for scientific impact has been recently investigated in a social experiment <cit.>. Radicchi et al. <cit.> asked 2000 researchers to perform pairwise comparisons between papers and then aggregated the results over the researchers, thereby quantifying the papers' perceived impact independently of their citation counts. The results of this experiment show that when authors are asked to judge their published papers, perceived impact and citation count correlate well, which indicates that the assumption that citation count represents paper impact is in agreement with expert judgment.

Using model data to evaluate the metrics. One can also use models of network growth or network diffusion to evaluate the ability of the metrics to rank the nodes according to their importance in the model. This strategy opens several possibilities. One can use models of network growth <cit.> in which the nodes are endowed with a fitness parameter, grow synthetic networks of a given size, and evaluate a posteriori the metrics' ability to uncover the nodes' latent fitness <cit.> (see also section <ref>). Another possibility is to use real data and test the metrics' ability to unearth the intrinsic spreading ability of the nodes according to some model of diffusion on the network. An extensive analysis of this kind has been carried out recently in the review article by Lü et al. <cit.> for static metrics and the classical SIR network diffusion model <cit.>: real data are used to compare different metrics with respect to their ability to reflect the nodes' spreading power as determined by classical network diffusion models. Analogous studies are still missing for temporal networks. For example, it would be interesting to compare different (static and time-aware) metrics with respect to their ability to identify influential spreaders in suitable temporal-network spreading models <cit.>, similarly to what has already been done for static metrics. The relative performance of the metrics in identifying influential spreaders in temporal networks could provide us with important insights into which metrics are consistently more useful than others in real-world applications.
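A minimal sketch of the SIR-based benchmarking just described: each node's spreading power is estimated by repeated simulation, and then correlated with a centrality score (here simply degree; the infection and recovery parameters and the toy network are arbitrary choices).

import random
import networkx as nx
from scipy.stats import kendalltau

rng = random.Random(3)

def sir_outbreak_size(G, seed, beta=0.1, mu=1.0):
    # discrete-time SIR; with mu = 1.0, nodes are infectious for one step
    infected, recovered = {seed}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and rng.random() < beta:
                    new_infected.add(v)
        newly_recovered = {u for u in infected if rng.random() < mu}
        recovered |= newly_recovered
        infected = (infected - newly_recovered) | new_infected
    return len(recovered)

G = nx.barabasi_albert_graph(300, 3, seed=3)
spread = {u: sum(sir_outbreak_size(G, u) for _ in range(20)) / 20 for u in G}
score = dict(G.degree())  # the centrality metric under evaluation

tau, _ = kendalltau([score[u] for u in G], [spread[u] for u in G])
print(f"Kendall tau between centrality and spreading power: {tau:.2f}")

Replacing the static degree by a time-aware score, and the static SIR dynamics by a temporal-network spreading model, would give exactly the kind of temporal benchmark whose absence is noted above.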
§.§ Systemic impact of ranking algorithms: avoiding information bias

Besides the metrics' performance, it is often of critical importance to detect the possible biases of the metrics and, wherever possible, suppress them. For example, in the context of the quantitative analysis of academic data, the bias of paper- and researcher-level metrics with respect to age and scientific field has long been known <cit.>, and many methods have been proposed in the scientometrics literature to counteract it (see <cit.>, among others). In this review, we have mostly focused on the bias of ranking by node age (section <ref>) and on possible methods to reduce or suppress it (section <ref>). However, other types of bias are present in large information systems. In particular, possible ranking biases that originate from social effects <cit.> are largely unexplored in the literature. Several shortcomings of automated ranking techniques can stem from social effects. For example, there is the risk that online users are only exposed to content that is similar to what they already know and that confirms their points of view, creating a “filter bubble” <cit.> of information. This is particularly dangerous for our society since online information is often not reliable: in a recent report, the World Economic Forum has indicated massive digital misinformation as one of the main risks for our interconnected society [see <http://reports.weforum.org/global-risks-2013/risk-case-1/digital-wildfires-in-a-hyperconnected-world/>]. This issue may be exacerbated by the fact that users more prone to consuming unreliable content tend to cluster together, creating highly polarized groups referred to as “echo chambers” <cit.>. Designing statistically grounded procedures to detect these biases, and effective methods to suppress them, thereby preserving information diversity and improving information quality, are vital topics for future research.

The impact of centrality metrics and ranking on the behavior of agents in socio-economic systems deserves attention as well. To mention just a few examples, online users are heavily influenced by recommender systems when choosing which items to purchase <cit.>, rankings can affect the decisions of undecided voters in political elections <cit.>, and the impact factor of scientific journals heavily affects scholars' research activity <cit.>. Careful consideration of possible future effects is thus needed before adopting a metric for some purpose. To properly assess the potential impact of ranking on a system's evolution, we would need an accurate model of the nodes' behavior. Homogeneous models of network growth assume that all nodes are driven by the same mechanism when choosing which node to connect to. Examples of homogeneous models include models based on the preferential attachment mechanism <cit.> and models where nodes tend to connect to the most central nodes and to delete their connections with the least central nodes <cit.>, among others. While such homogeneous models can be used to generate benchmark networks to test the performance of network-based algorithms <cit.>, they neglect the multi-faceted nature of node behavior in real systems and might provide an oversimplified description of the evolution of real systems. Models that account for heterogeneous node behavior have already been proposed <cit.>, yet this research direction is still at a preliminary stage and, for this reason, we still lack clear guidelines on how best to predict the impact of a given centrality metric on the properties of a given network.
§.§ Summary and conclusions

Ranking in complex networks plays a crucial role in many real-world applications, and we have seen many examples of such applications throughout this review (see also the review by Lü et al. <cit.>). These applications cover problems in a broad range of research areas, including economics <cit.>, neuroscience <cit.>, and the social sciences <cit.>, among others. From a methodological point of view, we have mostly focused on methods inspired by statistical physics, and we have studied three broad classes of node-level ranking methods: static algorithms (section <ref>), time-dependent algorithms based on a time-aggregated network representation of the data (section <ref>), and algorithms based on a temporal-network representation of the data (section <ref>). We have also discussed examples of edge-level ranking algorithms and their application to the problem of link prediction in online systems (paragraph <ref>).

In their recent review on community detection algorithms <cit.>, Fortunato and Hric write that “as long as there will be networks, there will be people looking for communities in them”. We might as well say that “as long as there will be networks, there will be people ranking their nodes”. The broad range of applications of network centrality metrics and the increasing availability of high-quality datasets suggest that research on ranking algorithms will not slow down in the forthcoming years. We expect scholars to become more and more sensitive to the problem of understanding and assessing the performance of the metrics for a given task. While we hope that the discussion provided in the previous two paragraphs will be useful as a guideline, it does not pretend to be complete, and several other dimensions can be added to the evaluation of the methods. We conclude by stressing the essential role of the temporal dimension in the design and evaluation of ranking algorithms for evolving networks. The main message of our work is that disregarding the temporal dimension can lead to sub-optimal or even misleading results. The number of methods presented in this review demonstrates that there are many ways to effectively include time in the computation of a ranking. We hope that this work will be useful for scholars from all research fields where networks and ranking are tools of primary importance.

§.§ Open problems

Several open problems emerge from the discussion above. First, while static metrics have been extensively analyzed under many different aspects, comparable analyses are still lacking for time-aware and temporal-network metrics; as discussed in the previous paragraphs, benchmarks based on temporal spreading models and on expert-judged significant nodes (as done for rescaled PageRank and milestone papers) are natural candidates for closing this gap. Second, the choice between rescaled metrics, time-dependent algorithms, and model-based ranking depends on the goal: if a ranking balanced with respect to node age is sought, rescaled metrics are a natural choice;
if the goal is instead to predict future trends, model-based ranking or time-dependent algorithms that penalize older nodes and interactions are likely to work better. Third, while recent research has explored how network growth mechanisms affect and possibly bias static metrics (though only in-degree and PageRank so far), this remains almost completely unexplored for time-dependent metrics; filling this gap is a much-needed step toward assessing the reliability of the centrality scores provided by temporal-network metrics. This is especially important because, unlike in physics, where experiments can rule out “unphysical” theories, we are dealing here with complex socio-economic systems: no universal metric is likely to exist, each metric has a limited domain of usefulness, and which metric best suits a given research goal needs to be checked system by system. The socio-economic nature of these systems also implies that metrics can affect the behavior of agents in consequential settings, from the impact factor in bibliometrics to economic-complexity insights used for policy making by some countries (e.g., Sweden); models of the interplay between metrics and network growth, discussed in the previous paragraph, are needed here as well.

Fourth, previous static methods generally did not differentiate between ranking in the past and ranking in the future. Time-aware ranking makes it possible to distinguish historically influential nodes or edges from those that will be influential in the future: historical ones could be utilized to address the cold-start problem in link prediction and recommender systems, whereas future ones help to predict the trends of network evolution and improve prediction accuracy. The differences between historical and future ranking still await extensive investigation. Finally, as more and more time-stamped datasets of different kinds, time spans, and edge types become available, an increasingly pressing problem is how to combine different ranking methods: some methods are good at ranking short-range edges, while others are good at ranking long-range edges. Traditional approaches mix two rankings by introducing a tunable parameter, such as s=α s_1+(1-α)s_2 or s=s_1^α s_2^1-α, where α balances the two ranking scores s_1 and s_2. However, given three or more ranking methods, merging them solely by tuning parameters becomes impractical; a better understanding of the characteristics of the individual methods is needed to combine them effectively.
§ ACKNOWLEDGEMENTS

We wish to thank Alexander Vidmer for providing us with the data and tools used for the analysis presented in paragraph 7.2, and for his early contribution to the text of that paragraph.
We would also like to thank all those researchers with whom we have had inspiring and motivating discussions about the topics presented in this review, in particular: Giulio Cimini, Matthieu Cristelli, Alberto Hernando de Castro, Flavio Iannelli, Francois Lafond, Luciano Pietronero, Zhuo-Ming Ren, Andrea Tacchella, Claudio J. Tessone, Giacomo Vaccario, Zong-Wen Wei, An Zeng. HL and MYZ acknowledge financial support from the National Natural Science Foundation of China (Grant No. 11547040), Guangdong Province Natural Science Foundation (Grant No. 2016A030310051), Shenzhen Fundamental Research Foundation (Grant No. JCYJ20150625101524056, JCYJ20160520162743717, JCYJ20150529164656096), Project SZU R/D Fund (Grant No. 2016047), CCF-Tencent (Grant No. AGR20160201), Natural Science Foundation of SZU (Grant No. 2016-24). MSM, MM, and YCZ acknowledge financial support from the EU FET-Open Grant No. 611272 (project Growthcom) and the Swiss National Science Foundation Grant No. 200020-143272.

§ REFERENCES

hanani2001information U. Hanani, B. Shapira, P. Shoval, Information filtering: Overview of issues, research and systems, User modeling and user-adapted interaction 11 (3) (2001) 203–259.baeza1999modern R. Baeza-Yates, B. Ribeiro-Neto, et al., Modern information retrieval, Vol. 463, ACM press New York, 1999.chen2004impact P.-Y. Chen, S.-y. Wu, J. Yoon, The impact of online recommendations and consumer feedback on sales, ICIS 2004 Proceedings (2004) 58.fleder2009blockbuster D. Fleder, K. Hosanagar, Blockbuster culture's next rise or fall: The impact of recommender systems on sales diversity, Management science 55 (5) (2009) 697–712.zeng2015modeling A. Zeng, C. H. Yeung, M. Medo, Y.-C. Zhang, Modeling mutual feedback between users and recommender systems, Journal of Statistical Mechanics: Theory and Experiment 2015 (7) (2015) P07020.feenberg2016s D. Feenberg, I. Ganguli, P. Gaule, J. Gruber, It's good to be first: Order bias in reading and citing nber working papers, Review of Economics and Statistics 99 (2016) 32–39.cho2004impact J. Cho, S. Roy, Impact of search engines on page popularity, in: Proceedings of the 13th International Conference on World Wide Web, ACM, 2004, pp. 20–29.fortunato2006topical S. Fortunato, A. Flammini, F. Menczer, A. Vespignani, Topical interests and the mitigation of search engine bias, Proceedings of the National Academy of Sciences 103 (34) (2006) 12684–12689.pan2007google B. Pan, H. Hembrooke, T. Joachims, L. Lorigo, G. Gay, L. Granka, In google we trust: Users' decisions on rank, position, and relevance, Journal of Computer-Mediated Communication 12 (3) (2007) 801–823.murphy2006primacy J. Murphy, C. Hofacker, R. Mizerski, Primacy and recency effects on clicking behavior, Journal of Computer-Mediated Communication 11 (2) (2006) 522–535.zhou2010impact R. Zhou, S. Khemmarat, L. Gao, The impact of YouTube recommendation system on video views, in: Proceedings of the 10th ACM SIGCOMM Conference on Internet measurement, ACM, 2010, pp. 404–410.hirsch2005index J. E. Hirsch, An index to quantify an individual's scientific research output, Proceedings of the National Academy of Sciences 102 (46) (2005) 16569–16572.radicchi2009diffusion F. Radicchi, S. Fortunato, B. Markines, A. Vespignani, Diffusion of scientific credits and the ranking of scientists, Physical Review E 80 (5) (2009) 056103.radicchi2011best F. Radicchi, Who is the best player ever?
a complex network analysis of the history of professional tennis, PLoS ONE 6 (2) (2011) e17249.spitz2014measuring A. Spitz, E.-Á. Horvát, Measuring long-term impact based on network centrality: Unraveling cinematic citations, PLoS ONE 9 (10) (2014) e108857.wasserman2015cross M. Wasserman, X. H. T. Zeng, L. A. N. Amaral, Cross-evaluation of metrics to estimate the significance of creative works, Proceedings of the National Academy of Sciences 112 (5) (2015) 1281–1286.waltman2016review L. Waltman, A review of the literature on citation impact indicators, Journal of Informetrics 10 (2) (2016) 365–391.wilsdon2016metric J. Wilsdon, The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management, SAGE, 2016.brockmann2013hidden D. Brockmann, D. Helbing, The hidden geometry of complex, network-driven contagion phenomena, Science 342 (6164) (2013) 1337–1342.iannelli2016effective F. Iannelli, A. Koher, D. Brockmann, P. Hoevel, I. M. Sokolov, Effective distances for epidemics spreading on complex networks, arXiv preprint arXiv:1608.06201.wang2016statistical Z. Wang, C. T. Bauch, S. Bhattacharyya, A. d’Onofrio, P. Manfredi, M. Perc, N. Perra, M. Salathé, D. Zhao, Statistical physics of vaccination, Physics Reports 664 (2016) 1–113.epstein2015search R. Epstein, R. E. Robertson, The search engine manipulation effect and its possible impact on the outcomes of elections, Proceedings of the National Academy of Sciences 112 (33) (2015) E4512–E4521.boccaletti2006complex S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, D.-U. Hwang, Complex networks: Structure and dynamics, Physics Reports 424 (4) (2006) 175–308.newman2010networks M. Newman, Networks: An Introduction, Oxford University Press, 2010.jackson2010social M. O. Jackson, Social and Economic Networks, Princeton University Press, 2010.barabasi2016network A.-L. Barabási, Network Science, Cambridge University Press, 2016.balcan2009seasonal D. Balcan, H. Hu, B. Goncalves, P. Bajardi, C. Poletto, J. J. Ramasco, D. Paolotti, N. Perra, M. Tizzoni, W. Van den Broeck, et al., Seasonal transmission potential and activity peaks of the new influenza a (h1n1): a monte carlo likelihood analysis based on human mobility, BMC Medicine 7 (1) (2009) 45.hidalgo2009building C. A. Hidalgo, R. Hausmann, The building blocks of economic complexity, Proceedings of the National Academy of Sciences 106 (26) (2009) 10570–10575.tacchella2012new A. Tacchella, M. Cristelli, G. Caldarelli, A. Gabrielli, L. Pietronero, A new metrics for countries' fitness and products' complexity, Scientific Reports 2 (2013) 723.cristelli2015heterogeneous M. Cristelli, A. Tacchella, L. Pietronero, The heterogeneous dynamics of economic complexity, PLoS ONE 10 (2) (2015) e0117174.battiston2012debtrank S. Battiston, M. Puliga, R. Kaushik, P. Tasca, G. Caldarelli, Debtrank: Too central to fail? financial networks, the fed and systemic risk, Scientific Reports 2 (2012) 541.kenett2012evolvement D. Y. Kenett, M. Raddant, T. Lux, E. Ben-Jacob, Evolvement of uniformity and volatility in the stressed global financial village, PONE 7 (2) (2012) e31144.battiston2016complexity S. Battiston, J. D. Farmer, A. Flache, D. Garlaschelli, A. G. Haldane, H. Heesterbeek, C. Hommes, C. Jaeger, R. May, M. Scheffer, Complexity theory and financial regulation, Science 351 (6275) (2016) 818–819.schneider2011mitigation C. M. Schneider, A. A. Moreira, J. S. Andrade, S. Havlin, H. J. Herrmann, Mitigation of malicious attacks on networks, PNAS 108 (10) (2011) 3838–3841.reis2014avoiding S. D. Reis, Y. 
Hu, A. Babino, J. S. Andrade Jr, S. Canals, M. Sigman, H. A. Makse, Avoiding catastrophic failure in correlated networks of networks, Nature Physics 10 (10) (2014) 762–767.duhan2009page N. Duhan, A. Sharma, K. K. Bhatia, Page ranking algorithms: A survey, in: Advance Computing Conference, 2009. IACC 2009. IEEE International, IEEE, 2009, pp. 1530–1537.medo2013network M. Medo, Network-based information filtering algorithms: Ranking and recommendation, in: Dynamics On and Of Complex Networks, Volume 2, Springer, 2013, pp. 315–334.katz1953new L. Katz, A new status index derived from sociometric analysis, Psychometrika 18 (1) (1953) 39–43.bonacich1987power P. Bonacich, Power and centrality: A family of measures, American Journal of Sociology (1987) 1170–1182.borgatti1995centrality S. P. Borgatti, Centrality and aids, Connections 18 (1)112–114.brin1998anatomy S. Brin, L. Page, The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems 30 (1) (1998) 107–117.rosvall2014memory M. Rosvall, A. V. Esquivel, A. Lancichinetti, J. D. West, R. Lambiotte, Memory in network flows and its effects on spreading dynamics and community detection, Nature Communications 5 (2014) 4630.rocha2014random L. E. Rocha, N. Masuda, Random walk centrality for temporal networks, New Journal of Physics 16 (6) (2014) 063023.masuda2016random N. Masuda, M. A. Porter, R. Lambiotte, Random walks and diffusion on networks, arXiv preprint arXiv:1612.03281.zhang2016dynamics Z.-K. Zhang, C. Liu, X.-X. Zhan, X. Lu, C.-X. Zhang, Y.-C. Zhang, Dynamics of information diffusion and its applications on complex networks, Physics Reports 651 (2016) 1–34.piraveenan2013percolation M. Piraveenan, M. Prokopenko, L. Hossain, Percolation centrality: Quantifying graph-theoretic impact of nodes during percolation in networks, PLoS ONE 8 (1) (2013) e53095.morone2015influence F. Morone, H. Makse, Influence maximization in complex networks through optimal percolation, Nature 524 (7563) (2015) 65–68.radicchi2016leveraging F. Radicchi, C. Castellano, Leveraging percolation theory to single out influential spreaders in networks, arXiv preprint arXiv:1605.07041.langville2011google A. N. Langville, C. D. Meyer, Google's PageRank and Beyond: The Science of Search Engine Rankings, Princeton University Press, 2006.zhang2010personalized Z.-K. Zhang, T. Zhou, Y.-C. Zhang, Personalized recommendation via integrated diffusion on user–item–tag tripartite graphs, Physica A 389 (1) (2010) 179–186.zhang2007heat Y.-C. Zhang, M. Blattner, Y.-K. Yu, Heat conduction process on community networks as a recommendation model, Physical Review Letters 99 (15) (2007) 154301.zhou2010solving T. Zhou, Z. Kuscsik, J.-G. Liu, M. Medo, J. R. Wakeling, Y.-C. Zhang, Solving the apparent diversity-accuracy dilemma of recommender systems, Proceedings of the National Academy of Sciences 107 (10) (2010) 4511–4515.chen2007finding P. Chen, H. Xie, S. Maslov, S. Redner, Finding scientific gems with Google's PageRank algorithm, Journal of Informetrics 1 (1) (2007) 8–15.ding2009pagerank Y. Ding, E. Yan, A. Frazho, J. Caverlee, PageRank for ranking authors in co-citation networks, Journal of the American Society for Information Science and Technology 60 (11) (2009) 2229–2243.ren2014iterative Z.-M. Ren, A. Zeng, D.-B. Chen, H. Liao, J.-G. Liu, Iterative resource allocation for ranking spreaders in complex networks, EPL (Europhysics Letters) 106 (4) (2014) 48005.pei2014searching S. Pei, L. Muchnik, J. S. Andrade Jr, Z. Zheng, H. A. 
Makse, Searching for superspreaders of information in real-world social media, Scientific Reports 4 (2014) 5547.franceschet2011pagerank M. Franceschet, PageRank: Standing on the shoulders of giants, Communications of the ACM 54 (6) (2011) 92–101.gleich2015pagerank D. F. Gleich, PageRank beyond the web, SIAM Review 57 (3) (2015) 321–363.ermann2015google L. Ermann, K. M. Frahm, D. L. Shepelyansky, Google matrix analysis of directed networks, Reviews of Modern Physics 87 (4) (2015) 1261.jiang2008self B. Jiang, S. Zhao, J. Yin, Self-organized natural roads for predicting traffic flow: A sensitivity study, Journal of Statistical Mechanics: Theory and Experiment 2008 (07) (2008) P07008.mao2015quantifying H. Mao, X. Shuai, Y.-Y. Ahn, J. Bollen, Quantifying socio-economic indicators in developing countries from mobile phone communication data: Applications to Côte d'Ivoire, EPJ Data Science 4 (1) (2015) 1–16.walker2007ranking D. Walker, H. Xie, K.-K. Yan, S. Maslov, Ranking scientific publications using a model of network traffic, Journal of Statistical Mechanics: Theory and Experiment 2007 (06) (2007) P06010.bollen2006journal J. Bollen, M. A. Rodriquez, H. Van de Sompel, Journal status, Scientometrics 69 (3) (2006) 669–687.jing2008visualrank Y. Jing, S. Baluja, VisualRank: Applying PageRank to large-scale image search, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11) (2008) 1877–1890.ivan2011web G. Iván, V. Grolmusz, When the web meets the cell: using personalized PageRank for analyzing protein interaction networks, Bioinformatics 27 (3) (2011) 405–407.lu2012recommender L. Lü, M. Medo, C. H. Yeung, Y.-C. Zhang, Z.-K. Zhang, T. Zhou, Recommender systems, Physics Reports 519 (1) (2012) 1–49.barabasi1999emergence A.-L. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (5439) (1999) 509–512.bianconi2001competition G. Bianconi, A.-L. Barabási, Competition and multiscaling in evolving networks, EPL (Europhysics Letters) 54 (4) (2001) 436.dorogovtsev2002evolution S. N. Dorogovtsev, J. F. Mendes, Evolution of networks, Advances in Physics 51 (4) (2002) 1079–1187.albert2002statistical R. Albert, A.-L. Barabási, Statistical mechanics of complex networks, Reviews of Modern Physics 74 (1) (2002) 47.medo2011temporal M. Medo, G. Cimini, S. Gualdi, Temporal effects in the growth of networks, Physical Review Letters 107 (23) (2011) 238701.papadopoulos2012popularity F. Papadopoulos, M. Kitsak, M. Á. Serrano, M. Boguná, D. Krioukov, Popularity versus similarity in growing networks, Nature 489 (7417) (2012) 537–540.wang2013quantifying D. Wang, C. Song, A.-L. Barabási, Quantifying long-term scientific impact, Science 342 (6154) (2013) 127–132.newman2009first M. Newman, The first-mover advantage in scientific publication, EPL (Europhysics Letters) 86 (6) (2009) 68001.perra2012random N. Perra, A. Baronchelli, D. Mocanu, B. Gonçalves, R. Pastor-Satorras, A. Vespignani, Random walks and search in time-varying networks, Physical Review Letters 109 (23) (2012) 238701.scholtes2014causality I. Scholtes, N. Wider, R. Pfitzner, A. Garas, C. J. Tessone, F. Schweitzer, Causality-driven slow-down and speed-up of diffusion in non-markovian temporal networks, Nature Communications 5.lambiotte2015effect R. Lambiotte, V. Salnikov, M. Rosvall, Effect of memory on the dynamics of random walks on networks, Journal of Complex Networks 3 (2) (2015) 177–188.delvenne2015diffusion J.-C. Delvenne, R. Lambiotte, L. E. 
Rocha, Diffusion on networked systems is a question of time or structure, Nature Communications 6.mariani2015ranking M. S. Mariani, M. Medo, Y.-C. Zhang, Ranking nodes in growing networks: When PageRank fails, Scientific Reports 5 (2015) 16181.vidmer2016role A. Vidmer, M. Medo, The essential role of time in network-based recommendation, EPL (Europhysics Letters) 116 (2016) 30007.freeman1977set L. C. Freeman, A set of measures of centrality based on betweenness, Sociometry (1977) 35–41.caldarelli2007scale G. Caldarelli, Scale-free networks: Complex webs in nature and technology, Oxford University Press, 2007.newman2003structure M. E. J. Newman, The structure and function of complex networks, SIAM Review 45 (2003) 167.fortunato2006approximating S. Fortunato, M. Boguñá, A. Flammini, F. Menczer, Approximating PageRank from in-degree, in: International Workshop on Algorithms and Models for the Web-Graph, Springer, 2006, pp. 59–71.pastor2016topological R. Pastor-Satorras, C. Castellano, Topological structure of the h-index in complex networks, arXiv preprint arXiv:1610.00569.klemm2012measure K. Klemm, M. Á. Serrano, V. M. Eguíluz, M. San Miguel, A measure of individual role in collective dynamics, Scientific Reports 2 (292).lu2016vital L. Lü, D. Chen, X.-L. Ren, Q.-M. Zhang, Y.-C. Zhang, T. Zhou, Vital nodes identification in complex networks, Physics Reports 650 (2016) 1–63.kunegis2009slashdot J. Kunegis, A. Lommatzsch, C. Bauckhage, The slashdot zoo: mining a social network with negative edges, in: Proceedings of the 18th International Conference on World Wide Web, ACM, 2009, pp. 741–750.bornmann2008citation L. Bornmann, H.-D. Daniel, What do citation counts measure? a review of studies on citing behavior, Journal of Documentation 64 (1) (2008) 45–80.catalini2015incidence C. Catalini, N. Lacetera, A. Oettl, The incidence and role of negative citations in science, PNAS 112 (45) (2015) 13823–13826.lu2016h L. Lü, T. Zhou, Q.-M. Zhang, H. E. Stanley, The H-index of a network node and its relation to degree and coreness, Nature Communications 7 (2016) 10168.chen2012identifying D. Chen, L. Lü, M.-S. Shang, Y.-C. Zhang, T. Zhou, Identifying influential nodes in complex networks, Physica A 391 (4) (2012) 1777–1787.chen2004local Y.-Y. Chen, Q. Gan, T. Suel, Local methods for estimating PageRank values, in: Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, ACM, 2004, pp. 381–389.sabidussi1966centrality G. Sabidussi, The centrality index of a graph, Psychometrika 31 (4) (1966) 581–603.rochat2009closeness Y. Rochat, Closeness centrality extended to unconnected graphs: The harmonic centrality index, in: ASNA, no. EPFL-CONF-200525, 2009.boldi2014axioms P. Boldi, S. Vigna, Axioms for centrality, Internet Mathematics 10 (3-4) (2014) 222–262.borgatti2005centrality S. P. Borgatti, Centrality and network flow, Social Networks 27 (1) (2005) 55–71.newman2005measure M. E. Newman, A measure of betweenness centrality based on random walks, Social Networks 27 (1) (2005) 39–54.NP6888 M. Kitsak, L. K. Gallos, S. Havlin, F. Liljeros, L. Muchnik, H. E. Stanley, H. A. Makse, Identification of influential spreaders in complex networks, Nature Physics 6 (11) (2010) 888–893.Carmi200701 S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, E. Shir, A model of internet topology using k-shell decomposition, Proceedings of the National Academy of Sciences 104 (27) (2007) 11150–11154.garas2012k A. Garas, F. Schweitzer, S. 
Havlin, A k-shell decomposition method for weighted networks, New Journal of Physics 14 (8) (2012) 083030.kawamoto2016localized T. Kawamoto, Localized eigenvectors of the non-backtracking matrix, Journal of Statistical Mechanics: Theory and Experiment 2016 (2) (2016) 023404.perron1907theorie O. Perron, Zur theorie der matrices, Mathematische Annalen 64 (2) (1907) 248–263.hubbell1965input C. H. Hubbell, An input-output approach to clique identification, Sociometry (1965) 377–399.bonacich2001eigenvector P. Bonacich, P. Lloyd, Eigenvector-like measures of centrality for asymmetric relations, Social Networks 23 (3) (2001) 191–201.fletcher2016structure J. M. Fletcher, T. Wennekers, From structure to activity: Using centrality measures to predict neuronal activity, International Journal of Neural Systems 0 (16) (2016) 1750013.park2005network J. Park, M. E. Newman, A network-based ranking system for us college football, Journal of Statistical Mechanics: Theory and Experiment 2005 (10) (2005) P10014.lambiotte2012ranking R. Lambiotte, M. Rosvall, Ranking and clustering of nodes in networks with smart teleportation, Physical Review E 85 (5) (2012) 056107.berkhin2005survey P. Berkhin, A survey on PageRank computing, Internet Mathematics 2 (1) (2005) 73–120.perra2008spectral N. Perra, S. Fortunato, Spectral centrality measures in complex networks, Physical Review E 78 (3) (2008) 036107.boldi2005pagerank P. Boldi, M. Santini, S. Vigna, PageRank as a function of the damping factor, in: Proceedings of the 14th International Conference on World Wide Web, ACM, 2005, pp. 557–566.bianchini2005inside M. Bianchini, M. Gori, F. Scarselli, Inside pagerank, ACM Transactions on Internet Technology 5 (1) (2005) 92–128.avrachenkov2008singular K. Avrachenkov, N. Litvak, K. S. Pham, A singular perturbation approach for choosing the PageRank damping factor, Internet Mathematics 5 (1-2) (2008) 47–69.fogaras2003start D. Fogaras, Where to start browsing the web?, in: International Workshop on Innovative Internet Community Systems, Springer, 2003, pp. 65–79.zhirov2010two A. Zhirov, O. Zhirov, D. Shepelyansky, Two-dimensional ranking of Wikipedia articles, The European Physical Journal B 77 (4) (2010) 523–531.lu2011leaders L. Lü, Y.-C. Zhang, C. H. Yeung, T. Zhou, Leaders in social networks, the Delicious case, PLoS ONE 6 (6) (2011) e21202.li2014identifying Q. Li, T. Zhou, L. Lü, D. Chen, Identifying influential spreaders by weighted LeaderRank, Physica A 404 (2014) 47–55.zhou2013power Y. Zhou, L. Lü, W. Liu, J. Zhang, The power of ground user in recommender systems, PLoS ONE 8 (8) (2013) e70094.kleinberg1999authoritative J. M. Kleinberg, Authoritative sources in a hyperlinked environment, Journal of the ACM 46 (5) (1999) 604–632.li2006citeseer H. Li, I. G. Councill, L. Bolelli, D. Zhou, Y. Song, W.-C. Lee, A. Sivasubramaniam, C. L. Giles, CiteSeer χ: a scalable autonomous scientific digital library, in: Proceedings of the 1st International Conference on Scalable Information Systems, ACM, 2006, p. 18.ng2001stable A. Y. Ng, A. X. Zheng, M. I. Jordan, Stable algorithms for link analysis, in: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2001, pp. 258–266.deng2009generalized H. Deng, M. R. Lyu, I. King, A generalized Co-HITS algorithm and its application to bipartite graphs, in: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2009, pp. 239–248.Zachary1977 W. W. 
Zachary, http://www.jstor.org/stable/3629752An information flow model for conflict and fission in small groups, Journal of Anthropological Research 33 (4) (1977) pp. 452–473. <http://www.jstor.org/stable/3629752>fortunato2010community S. Fortunato, Community detection in graphs, Physics Reports 486 (3) (2010) 75–174.fortunato2016community S. Fortunato, D. Hric, Community detection in networks: A user guide, Physics Reports 659 (2016) 1–44.vidmer2015unbiased A. Vidmer, M. Medo, Y.-C. Zhang, Unbiased metrics of friends' influence in multi-level networks, EPJ Data Science 4 (1) (2015) 1–13.sarigol2014predicting E. Sarigöl, R. Pfitzner, I. Scholtes, A. Garas, F. Schweitzer, Predicting scientific success based on coauthorship networks, EPJ Data Science 3 (1) (2014) 1.kivela2014multilayer M. Kivelä, A. Arenas, M. Barthelemy, J. P. Gleeson, Y. Moreno, M. A. Porter, Multilayer networks, Journal of Complex Networks 2 (3) (2014) 203–271.boccaletti2014structure S. Boccaletti, G. Bianconi, R. Criado, C. I. Del Genio, J. Gómez-Gardeñes, M. Romance, I. Sendiña-Nadal, Z. Wang, M. Zanin, The structure and dynamics of multilayer networks, Physics Reports 544 (1) (2014) 1–122.sole2014centrality A. Solé-Ribalta, M. De Domenico, S. Gómez, A. Arenas, Centrality rankings in multiplex networks, in: Proceedings of the 2014 ACM conference on Web Science, ACM, 2014, pp. 149–155.de2015ranking M. De Domenico, A. Solé-Ribalta, E. Omodei, S. Gómez, A. Arenas, Ranking in interconnected multilayer networks reveals versatile nodes, Nature Communications 6 (2015) 6868.taylor2015eigenvector D. Taylor, S. A. Myers, A. Clauset, M. A. Porter, P. J. Mucha, Eigenvector-based centrality measures for temporal networks, arXiv preprint arXiv:1507.01266.caldarelli2012network G. Caldarelli, M. Cristelli, A. Gabrielli, L. Pietronero, A. Scala, A. Tacchella, A network analysis of countries' export flows: Firm grounds for the building blocks of the economy, PLoS ONE 7 (10) (2012) e47278.cristelli2013measuring M. Cristelli, A. Gabrielli, A. Tacchella, G. Caldarelli, L. Pietronero, Measuring the intangibles: A metrics for the economic complexity of countries and products, PLoS ONE 8 (8) (2013) e70726.hausmann2014atlas R. Hausmann, C. A. Hidalgo, S. Bustos, M. Coscia, A. Simoes, M. A. Yildirim, The Alas of Economic Complexity: Mapping Paths to Prosperity, MIT Press, 2014.mariani2015measuring M. S. Mariani, A. Vidmer, M. Medo, Y.-C. Zhang, Measuring economic complexity of countries and products: Which metric to use?, The European Physical Journal B 88 (11) (2015) 1–9.wu2016mathematics R.-J. Wu, G.-Y. Shi, Y.-C. Zhang, M. S. Mariani, The mathematics of non-linear metrics for nested networks, Physica A 460 (2016) 254–269.stojkoski2016impact V. Stojkoski, Z. Utkovski, L. Kocarev, The impact of services on economic complexity: Service sophistication as route for economic growth, arXiv preprint arXiv:1604.06284.dominguez2015ranking V. Domínguez-García, M. A. Muñoz, Ranking species in mutualistic networks, Scientific Reports 5 (2015) 8182.resnick2000reputation P. Resnick, K. Kuwabara, R. Zeckhauser, E. Friedman, Reputation systems, Communications of the ACM 43 (12) (2000) 45–48.josang2007survey A. Jøsang, R. Ismail, C. Boyd, A survey of trust and reputation systems for online service provision, Decision support systems 43 (2) (2007) 618–644.pinyol2013computational I. Pinyol, J. Sabater-Mir, Computational trust and reputation models for open multi-agent systems: a review, Artificial Intelligence Review 40 (1) (2013) 1–25.gregg2006role D. G. 
Gregg, J. E. Scott, The role of reputation systems in reducing on-line auction fraud, International Journal of Electronic Commerce 10 (3) (2006) 95–120.mcdonald2002reputation C. G. McDonald, V. C. Slawson, Reputation in an Internet auction market, Economic inquiry 40 (4) (2002) 633–650.IEEEDM12422011 G. Wang, S. Xie, B. Liu, S. Y. Philip, Review graph based online store review spammer detection, in: IEEE 11th International Conference on Data Mining, IEEE, 2011, pp. 1242–1247.ACM6202009 F. Benevenuto, T. Rodrigues, V. Almeida, J. Almeida, M. Gonçalves, Detecting spammers and content promoters in online video social networks, in: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2009, pp. 620–627.masum2004manifesto H. Masum, Y.-C. Zhang, Manifesto for the reputation society, First Monday 9 (7).laureti2006information P. Laureti, L. Moret, Y.-C. Zhang, Y.-K. Yu, Information filtering via iterative refinement, EPL (Europhysics Letters) 75 (6) (2006) 1006.yu2006decoding Y.-K. Yu, Y.-C. Zhang, P. Laureti, L. Moret, Decoding information from noisy, redundant, and intentionally distorted sources, Physica A 371 (2) (2006) 732–744.medo2010effect M. Medo, J. R. Wakeling, The effect of discrete vs. continuous-valued ratings on reputation and ranking systems, EPL (Europhysics Letters) 91 (4) (2010) 48004.Yanbo Y. B. Zhou, T. Lei, T. Zhou, A robust ranking algorithm to spamming, EPL (Europhysics Letters) 94 (4) (2011) 48002.liao2014ranking H. Liao, A. Zeng, R. Xiao, Z.-M. Ren, D.-B. Chen, Y.-C. Zhang, Ranking reputation and quality in online rating systems, PLoS ONE 9 (5) (2014) e97146.jeong2003measuring H. Jeong, Z. Néda, A.-L. Barabási, Measuring preferential attachment in evolving networks, EPL (Europhysics Letters) 61 (4) (2003) 567.redner2005citation S. Redner, Citation statistics from 110 years of Physical Review, Physics Today 58 (2005) 49.krapivsky2001organization P. L. Krapivsky, S. Redner, Organization of growing random networks, Physical Review E 63 (6) (2001) 066123.berset2013effect Y. Berset, M. Medo, The effect of the initial network configuration on preferential attachment, The European Physical Journal B 86 (arXiv: 1305.0205) (2013) 260.fotouhi2013network B. Fotouhi, M. G. Rabbat, Network growth with arbitrary initial conditions: Degree dynamics for uniform and preferential attachment, Physical Review E 88 (6) (2013) 062801.caldarelli2002scale G. Caldarelli, A. Capocci, P. De Los Rios, M. A. Munoz, Scale-free networks from varying vertex intrinsic fitness, Physical Review Letters 89 (25) (2002) 258702.amaral2000classes L. A. N. Amaral, A. Scala, M. Barthelemy, H. E. Stanley, Classes of small-world networks, Proceedings of the National Academy of Sciences 97 (21) (2000) 11149–11152.adamic2000power L. A. Adamic, B. A. Huberman, Power-law distribution of the world wide web, Science 287 (5461) (2000) 2115–2115.hajra2005aging K. B. Hajra, P. Sen, Aging in citation networks, Physica A 346 (1) (2005) 44–48.newman2014prediction M. Newman, Prediction of highly cited papers, EPL (Europhysics Letters) 105 (2) (2014) 28002.baeza2004web R. Baeza-Yates, C. Castillo, F. Saint-Jean, Web dynamics, structure, and page quality, in: Web Dynamics, Springer, 2004, pp. 93–109.maslov2008promise S. Maslov, S. Redner, Promise and pitfalls of extending Google's PageRank algorithm to citation networks, The Journal of Neuroscience 28 (44) (2008) 11103–11105.mariani2016identification M. S. Mariani, M. Medo, Y.-C. 
Zhang, Identification of milestone papers through time-balanced network centrality, Journal of Informetrics 10 (4) (2016) 1207–1223.parolo2015attention P. D. B. Parolo, R. K. Pan, R. Ghosh, B. A. Huberman, K. Kaski, S. Fortunato, Attention decay in science, Journal of Informetrics 9 (4) (2015) 734–745.marsden1993network P. V. Marsden, N. E. Friedkin, Network studies of social influence, Sociological Methods & Research 22 (1) (1993) 127–151.kempe2003maximizing D. Kempe, J. Kleinberg, É. Tardos, Maximizing the spread of influence through a social network, in: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2003, pp. 137–146.tang2009social J. Tang, J. Sun, C. Wang, Z. Yang, Social influence analysis in large-scale networks, in: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2009, pp. 807–816.cha2010measuring M. Cha, H. Haddadi, F. Benevenuto, P. K. Gummadi, Measuring user influence in Twitter: The million follower fallacy, ICWSM 10 (10-17) (2010) 30.leskovec2007dynamics J. Leskovec, L. A. Adamic, B. A. Huberman, The dynamics of viral marketing, ACM Transactions on the Web 1 (1) (2007) 5.cha2009measurement M. Cha, A. Mislove, K. P. Gummadi, A measurement-driven analysis of information propagation in the Flickr social network, in: Proceedings of the 18th international conference on World wide web, ACM, 2009, pp. 721–730.steeg2011stops G. V. Steeg, R. Ghosh, K. Lerman, What stops social epidemics?, arXiv preprint arXiv:1102.1985.radicchi2008universality F. Radicchi, S. Fortunato, C. Castellano, Universality of citation distributions: Toward an objective measure of scientific impact, Proceedings of the National Academy of Sciences 105 (45) (2008) 17268–17272.radicchi2011rescaling F. Radicchi, C. Castellano, Rescaling citations of publications in physics, Physical Review E 83 (4) (2011) 046116.albarran2011skewness P. Albarrán, J. A. Crespo, I. Ortuño, J. Ruiz-Castillo, The skewness of science in 219 sub-fields and a number of aggregates, Scientometrics 88 (2) (2011) 385–397.waltman2012universality L. Waltman, N. J. van Eck, A. F. van Raan, Universality of citation distributions revisited, Journal of the American Society for Information Science and Technology 63 (1) (2012) 72–77.vaccario2017quantifying G. Vaccario, M. Medo, N. Wider, M. S. Mariani, Quantifying and suppressing ranking bias in a large citation network, arXiv:1703.08071.radicchi2012reverse F. Radicchi, C. Castellano, A reverse engineering approach to the suppression of citation biases reveals universal properties of citation distributions, PLoS ONE 7 (3) (2012) e33833.zeng2013trend A. Zeng, S. Gualdi, M. Medo, Y.-C. Zhang, Trend prediction in temporal bipartite networks: The case of Movielens, Netflix, and Digg, Advances in Complex Systems 16 (04n05) (2013) 1350024.koren2010collaborative Y. Koren, Collaborative filtering with temporal dynamics, Communications of the ACM 53 (4) (2010) 89–97.zhou2015temporal Y. Zhou, A. Zeng, W.-H. Wang, Temporal effects in trend prediction: Identifying the most popular nodes in the future, PLoS ONE 10 (3) (2015) e0120735.ghosh2011time R. Ghosh, T.-T. Kuo, C.-N. Hsu, S.-D. Lin, K. Lerman, Time-aware ranking in dynamic citation networks, in: IEEE 11th International Conference on Data Mining Workshops (ICDMW), IEEE, 2011, pp. 373–380.yu2005adding P. S. Yu, X. Li, B. 
Liu, Adding the temporal dimension to search a case study in publication search, in: Proceedings of the 2005 IEEE/WIC/ACM international conference on web intelligence, IEEE Computer Society, 2005, pp. 543–549.berberich2004t K. Berberich, M. Vazirgiannis, G. Weikum, T-rank: Time-aware authority ranking, in: Algorithms and Models for the Web-Graph, Springer, 2004, pp. 131–142.berberich2005time K. Berberich, M. Vazirgiannis, G. Weikum, Time-aware authority ranking, Internet Mathematics 2 (3) (2005) 301–332.sabater2001regret J. Sabater, C. Sierra, Regret: A reputation model for gregarious societies, in: Fourth Workshop on Deception Fraud and Trust in Agent Societies, Vol. 70, 2001, pp. 61–69.jsang2002beta A. Jø sang, R. Ismail, The beta reputation system, in: Proceedings of the 15th Bled Electronic Commerce Conference, Vol. 5, 2002, pp. 2502–2511.liu2010anomaly Y. Liu, Y. Sun, Anomaly detection in feedback-based reputation systems through temporal and correlation analysis, in: 2010 IEEE Second International Conference on Social Computing, IEEE, 2010, pp. 65–72.kong2008experience J. S. Kong, N. Sarshar, V. P. Roychowdhury, Experience versus talent shapes the structure of the Web, Proceedings of the National Academy of Sciences 105 (37) (2008) 13724–13729.ren2016characterizing Z.-M. Ren, Y.-Q. Shi, H. Liao, Characterizing popularity dynamics of online videos, Physica A 453 (2016) 236–241.dorogovtsev2000evolution S. N. Dorogovtsev, J. F. F. Mendes, Evolution of networks with aging of sites, Physical Review E 62 (2) (2000) 1842.medo2014statistical M. Medo, Statistical validation of high-dimensional models of growing networks, Physical Review E 89 (3) (2014) 032801.berberich2006buzzrank K. Berberich, S. Bedathur, M. Vazirgiannis, G. Weikum, BuzzRank... and the trend is your friend, in: Proceedings of the 15th International Conference on World Wide Web, ACM, 2006, pp. 937–938.holme2012temporal P. Holme, J. Saramäki, Temporal networks, Physics Reports 519 (3) (2012) 97–125.kempe2000connectivity D. Kempe, J. Kleinberg, A. Kumar, Connectivity and inference problems for temporal networks, in: Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, ACM, 2000, pp. 504–513.kostakos2009temporal V. Kostakos, Temporal graphs, Physica A 388 (6) (2009) 1007–1023.casteigts2012time A. Casteigts, P. Flocchini, W. Quattrociocchi, N. Santoro, Time-varying graphs and dynamic networks, International Journal of Parallel, Emergent and Distributed Systems 27 (5) (2012) 387–408.berman1996vulnerability K. A. Berman, Vulnerability of scheduled networks and a generalization of Menger's theorem, Networks 28 (3) (1996) 125–134.holme2015modern P. Holme, Modern temporal network theory: A colloquium, The European Physical Journal B 88 (9) (2015) 1–30.xu2016representing J. Xu, T. L. Wickramarathne, N. V. Chawla, Representing higher-order dependencies in networks, Science Advances 2 (5) (2016) e1600028.krings2012effects G. Krings, M. Karsai, S. Bernhardsson, V. D. Blondel, J. Saramäki, Effects of time window size and placement on the structure of an aggregated communication network, EPJ Data Science 1 (1) (2012) 4.sekara2016fundamental V. Sekara, A. Stopczynski, S. Lehmann, Fundamental structures of dynamic social networks, Proceedings of the National Academy of Sciences 113 (36) (2016) 9977–9982.moody2002importance J. Moody, The importance of relationship timing for diffusion, Social Forces 81 (1) (2002) 25–56.lentz2013unfolding H. H. Lentz, T. Selhorst, I. M. 
Sokolov, Unfolding accessibility provides a macroscopic approach to temporal networks, Physical Review Letters 110 (11) (2013) 118701.tang2010analysing J. Tang, M. Musolesi, C. Mascolo, V. Latora, V. Nicosia, Analysing information flows and key mediators through temporal centrality metrics, in: Proceedings of the 3rd Workshop on Social Network Systems, ACM, 2010, p. 3.nicosia2013graph V. Nicosia, J. Tang, C. Mascolo, M. Musolesi, G. Russo, V. Latora, Graph metrics for temporal networks, in: Temporal Networks, Springer, 2013, pp. 15–40.pan2011path R. K. Pan, J. Saramäki, Path lengths, correlations, and centrality in temporal networks, Physical Review E 84 (1) (2011) 016105.scholtes2016higher I. Scholtes, N. Wider, A. Garas, Higher-order aggregate networks in the analysis of temporal networks: Path structures and centralities, The European Physical Journal B 89 (3) (2016) 1–15.karsai2011small M. Karsai, M. Kivelä, R. K. Pan, K. Kaski, J. Kertész, A.-L. Barabási, J. Saramäki, Small but slow world: How network topology and burstiness slow down spreading, Physical Review E 83 (2) (2011) 025102.starnini2012random M. Starnini, A. Baronchelli, A. Barrat, R. Pastor-Satorras, Random walks on temporal networks, Physical Review E 85 (5) (2012) 056115.masuda2013temporal N. Masuda, K. Klemm, V. M. Eguíluz, Temporal networks: Slowing down diffusion by long lasting interactions, Physical Review Letters 111 (18) (2013) 188701.scholtes2017network I. Scholtes, When is a network a network? multi-order graphical model selection in pathways and temporal networks, arXiv preprint arXiv:1702.05499.motegi2012network S. Motegi, N. Masuda, A network-based dynamical ranking system for competitive sports, Scientific Reports 2 (2012) 904.junior2012time P. S. P. Júnior, M. A. Gonçalves, A. H. Laender, T. Salles, D. Figueiredo, Time-aware ranking in sport social networks, Journal of Information and Data Management 3 (3) (2012) 195.kim2012temporal H. Kim, R. Anderson, Temporal node centrality in complex networks, Physical Review E 85 (2) (2012) 026107.schafer2007collaborative J. B. Schafer, D. Frankowski, J. Herlocker, S. Sen, Collaborative filtering recommender systems, in: The Adaptive Web, Springer, 2007, pp. 291–324.koren2011advances Y. Koren, R. Bell, Advances in collaborative filtering, in: Recommender Systems Handbook, Springer, 2011, pp. 145–186.bell2007lessons R. M. Bell, Y. Koren, Lessons from the Netflix Prize challenge, ACM SIGKDD Explorations Newsletter 9 (2) (2007) 75–79.bennett2007netflix J. Bennett, S. Lanning, The netflix prize, in: Proceedings of KDD Cup and Workshop, Vol. 2007, 2007, p. 35.gama2014survey J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, A. Bouchachia, A survey on concept drift adaptation, ACM Computing Surveys 46 (4) (2014) 44.breese1998empirical J. S. Breese, D. Heckerman, C. Kadie, Empirical analysis of predictive algorithms for collaborative filtering, in: Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 1998, pp. 43–52.takacs2007major G. Takács, I. Pilászy, B. Németh, D. Tikk, Major components of the Gravity recommendation system, ACM SIGKDD Explorations Newsletter 9 (2) (2007) 80–83.koren2009matrix Y. Koren, R. Bell, C. Volinsky, et al., Matrix factorization techniques for recommender systems, Computer 42 (8) (2009) 30–37.koren2008factorization Y. 
Koren, Factorization meets the neighborhood: a multifaceted collaborative filtering model, in: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2008, pp. 426–434.picard1984cross R. R. Picard, R. D. Cook, Cross-validation of regression models, Journal of the American Statistical Association 79 (387) (1984) 575–583.arlot2010survey S. Arlot, A. Celisse, et al., A survey of cross-validation procedures for model selection, Statistics Surveys 4 (2010) 40–79.koren2010factor Y. Koren, Factor in the neighbors: Scalable and accurate collaborative filtering, ACM Transactions on Knowledge Discovery from Data 4 (1) (2010) 1.breiman1996bagging L. Breiman, Bagging predictors, Machine Learning 24 (2) (1996) 123–140.jahrer2010combining M. Jahrer, A. Töscher, R. Legenstein, Combining predictions for accurate recommender systems, in: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2010, pp. 693–702.zhou2007bipartite T. Zhou, J. Ren, M. Medo, Y.-C. Zhang, Bipartite network projection and personal recommendation, Physical Review E 76 (4) (2007) 046115.yu2016network F. Yu, A. Zeng, S. Gillard, M. Medo, Network-based recommendation algorithms: A review, Physica A 452 (2016) 192.liu2011information J.-G. Liu, T. Zhou, Q. Guo, Information filtering via biased heat conduction, Physical Review E 84 (2011) 037101.qiu2011item T. Qiu, G. Chen, Z.-K. Zhang, T. Zhou, An item-oriented recommendation algorithm on cold-start problem, EPL 95 (2011) 58003.liao2014network H. Liao, R. Xiao, G. Cimini, M. Medo, Network-driven reputation in online scientific communities, PLoS ONE 9 (12) (2014) e112022.ziegler2005improving C.-N. Ziegler, S. M. McNee, J. A. Konstan, G. Lausen, Improving recommendation lists through topic diversification, in: Proceedings of the 14th International Conference on World Wide Web, ACM, 2005, pp. 22–32.zhang2008avoiding M. Zhang, N. Hurley, Avoiding monotony: Improving the diversity of recommendation lists, in: Proceedings of the 2008 ACM Conference on Recommender systems, ACM, 2008, pp. 123–130.adomavicius2012improving G. Adomavicius, Y. Kwon, Improving aggregate recommendation diversity using ranking-based techniques, IEEE Transactions on Knowledge and Data Engineering 24 (2012) 896–911.zeng2012reinforcing A. Zeng, C. H. Yeung, M.-S. Shang, Y.-C. Zhang, The reinforcing influence of recommendations on global diversification, EPL (Europhysics Letters) 97 (1) (2012) 18005.ceriani2012origins L. Ceriani, P. Verme, The origins of the gini index: extracts from variabilità e mutabilità (1912) by corrado gini, The Journal of Economic Inequality 10 (3) (2012) 421–443.bar2008informetrics J. Bar-Ilan, Informetrics at the beginning of the 21st century – a review, Journal of Informetrics 2 (1) (2008) 1–52.mingers2015review J. Mingers, L. Leydesdorff, A review of theory and practice in scientometrics, European Journal of Operational Research 246 (1) (2015) 1–19.de1965networks D. J. de Solla Price, Networks of scientific papers, Science 149 (3683) (1965) 510–515.medo2016model M. Medo, G. Cimini, Model-based evaluation of scientific impact indicators, Physical Review E 94 (3) (2016) 032312.ren2017time Z.-M. Ren, M. S. Mariani, Y.-C. Zhang, M. Medo, A time-respecting null model to explore the properties of growing networks, arXiv:1703.07656.van2004sleeping A. F. Van Raan, Sleeping beauties in science, Scientometrics 59 (3) (2004) 467–472.ke2015defining Q. Ke, E. Ferrara, F. Radicchi, A. 
Flammini, Defining and identifying sleeping beauties in science, Proceedings of the National Academy of Sciences 112 (24) (2015) 7426–7431.colavizza2016clustering G. Colavizza, M. Franceschet, Clustering citation histories in the physical review, Journal of Informetrics 10 (4) (2016) 1037–1051.wang2014comment J. Wang, Y. Mei, D. Hicks, Comment on “Quantifying long-term scientific impact”, Science 345 (6193) (2014) 149–149.wang2014response D. Wang, C. Song, H.-W. Shen, A.-L. Barabási, Response to Comment on “Quantifying long-term scientific impact”, Science 345 (6193) (2014) 149–149.cao2016data X. Cao, Y. Chen, K. R. Liu, A data analytic approach to quantifying scientific impact, Journal of Informetrics 10 (2) (2016) 471–484.petersen2014reputation A. M. Petersen, S. Fortunato, R. K. Pan, K. Kaski, O. Penner, A. Rungi, M. Riccaboni, H. E. Stanley, F. Pammolli, Reputation and impact in academic careers, Proceedings of the National Academy of Sciences 111 (43) (2014) 15316–15321.golosovsky2017growing M. Golosovsky, S. Solomon, Growing complex network of citations of scientific papers: Modeling and measurements, Physical Review E 95 (2017) 012324.hirsch2007does J. E. Hirsch, Does the h index have predictive power?, Proceedings of the National Academy of Sciences 104 (49) (2007) 19193–19198.acuna2012future D. E. Acuna, S. Allesina, K. P. Kording, Future impact: Predicting scientific success, Nature 489 (7415) (2012) 201–202.penner2013predictability O. Penner, R. K. Pan, A. M. Petersen, K. Kaski, S. Fortunato, On the predictability of future impact in science, Scientific Reports 3 (2013) 3052.zhang2016identifying C. Zhang, C. Liu, L. Yu, Z.-K. Zhang, T. Zhou, Identifying the academic rising stars, arXiv preprint arXiv:1606.05752.sinatra2016quantifying R. Sinatra, D. Wang, P. Deville, C. Song, A.-L. Barabási, Quantifying the evolution of individual scientific impact, Science 354 (6312) (2016) aaf5239.clauset2017data A. Clauset, D. B. Larremore, R. Sinatra, Data-driven predictions in the science of science, Science 355 (6324) (2017) 477–480.bergstrom2008eigenfactor C. T. Bergstrom, J. D. West, M. A. Wiseman, The eigenfactor metrics, The Journal of Neuroscience 28 (45) (2008) 11433–11434.gonzalez2010new B. González-Pereira, V. P. Guerrero-Bote, F. Moya-Anegón, A new approach to the metric of journals' scientific prestige: The SJR indicator, Journal of Informetrics 4 (3) (2010) 379–391.falagas2008top M. E. Falagas, V. G. Alexiou, The top-ten in journal impact factor manipulation, Archivum Immunologiae et Therapiae Experimentalis 56 (4) (2008) 223.wallner2009ban C. Wallner, Ban impact factor manipulation, Science 323 (5913) (2009) 461.bohlin2015robustness L. Bohlin, A. Viamontes Esquivel, A. Lancichinetti, M. Rosvall, Robustness of journal rankings by network flows with different amounts of memory, Journal of the Association for Information Science and Technology 67 (10) (2015) 2527–2535.van2010metrics R. Van Noorden, Metrics: A profusion of measures, Nature 465 (7300) (2010) 864–866.amin2003impact M. Amin, M. A. Mabe, Impact factors: use and abuse, Medicina (Buenos Aires) 63 (4) (2003) 347–354.plos2006impact P. M. Editors, et al., The impact factor game, PLoS Med 3 (6) (2006) e291.adler2009citation R. Adler, J. Ewing, P. Taylor, et al., Citation statistics, Statistical Science 24 (1) (2009) 1.kuznets1934national S. Kuznets, National income, 1929-1932, in: National Income, 1929-1932, NBER, 1934, pp. 1–12.costanza2014development R. Costanza, I. Kubiszewski, E. Giovannini, H. Lovins, J. McGlade, K. 
Pickett, K. Ragnarsdóttir, D. Roberts, R. De Vogli, R. Wilkinson, Development: Time to leave gdp behind., Nature 505 (7483) (2014) 283–285.coyle2015gdp D. Coyle, GDP: A brief but affectionate history, Princeton University Press, 2015.IMF World economic outlook of the international monetary fund, <http://www.imf.org/external/pubs/ft/weo/2016/01/index.htm>, accessed: 27-06-2016.hausmann2011network R. Hausmann, C. A. Hidalgo, The network structure of economic output, Journal of Economic Growth 16 (4) (2011) 309–342.felipe2012product J. Felipe, U. Kumar, A. Abdon, M. Bacate, Product complexity and economic development, Structural Change and Economic Dynamics 23 (1) (2012) 36–68.lorenz1969atmospheric E. N. Lorenz, Atmospheric predictability as revealed by naturally occurring analogues, Journal of the Atmospheric Sciences 26 (4) (1969) 636–646.lorenz1969three E. N. Lorenz, Three approaches to atmospheric predictability, Bull. Amer. Meteor. Soc 50 (3454) (1969) 349.wolf1985determining A. Wolf, J. B. Swift, H. L. Swinney, J. A. Vastano, Determining lyapunov exponents from a time series, Physica D: Nonlinear Phenomena 16 (3) (1985) 285–317.cencini2010chaos M. Cencini, F. Cecconi, A. Vulpiani, Chaos: From simple models to complex systems 17.gaulier2010baci G. Gaulier, S. Zignago, http://www.cepii.fr/CEPII/fr/publications/wp/abstract.asp?NoDoc=2726Baci: International trade database at the product-level. the 1994-2007 version, Working Papers 2010-23, CEPII (October 2010). <http://www.cepii.fr/CEPII/fr/publications/wp/abstract.asp?NoDoc=2726>lu2011link L. Lü, T. Zhou, Link prediction in complex networks: A survey, Physica A 390 (6) (2011) 1150–1170.lu2015toward L. Lü, L. Pan, T. Zhou, Y.-C. Zhang, H. E. Stanley, Toward link predictability of complex networks, Proceedings of the National Academy of Sciences 112 (8) (2015) 2325–2330.hart2006complete G. T. Hart, A. K. Ramani, E. M. Marcotte, How complete are current yeast and human protein-interaction networks?, Genome Biology 7 (11) (2006) 1.barzel2013network B. Barzel, A.-L. Barabási, Network link prediction by global silencing of indirect correlations, Nature Biotechnology 31 (8) (2013) 720–725.menche2015uncovering J. Menche, A. Sharma, M. Kitsak, S. D. Ghiassian, M. Vidal, J. Loscalzo, A.-L. Barabási, Uncovering disease-disease relationships through the incomplete interactome, Science 347 (6224) (2015) 1257601.liben2007link D. Liben-Nowell, J. Kleinberg, The link-prediction problem for social networks, Journal of the American society for information Science and Technology 58 (7) (2007) 1019–1031.scott2012social J. Scott, Social Network Analysis, Sage, 2012.li2016exploiting D. Li, Y. Zhang, Z. Xu, D. Chu, S. Li, Exploiting information diffusion feature for link prediction in sina weibo, Scientific Reports 6 (2016) 20058.liu2009link J. Liu, G. Deng, Link prediction in a user-object network based on time-weighted resource allocation, Physica A 388 (17) (2009) 3643–3650.tylenda2009towards T. Tylenda, R. Angelova, S. Bedathur, Towards time-aware link prediction in evolving social networks, in: Proceedings of the 3rd Workshop on Social Network Mining and Analysis, ACM, 2009, p. 9.dhote2013survey Y. Dhote, N. Mishra, S. Sharma, Survey and analysis of temporal link prediction in online social networks, in: 2013 International Conference on Advances in Computing, Communications and Informatics, IEEE, 2013, pp. 1178–1183.munasinghe2011time L. Munasinghe, R. 
Ichise, Time aware index for link prediction in social networks, in: International Conference on Data Warehousing and Knowledge Discovery, Springer, 2011, pp. 342–353.wang2017link T. Wang, X.-S. He, M.-Y. Zhou, Z.-Q. Fu, Link prediction in evolving networks based on the popularity of nodes, arXiv preprint arXiv:1610.05347.isella2011s L. Isella, J. Stehlé, A. Barrat, C. Cattuto, J.-F. Pinton, W. Van den Broeck, What's in a crowd? analysis of face-to-face behavioral networks, Journal of Theoretical Biology 271 (1) (2011) 166–180.chaintreau2007impact A. Chaintreau, P. Hui, J. Crowcroft, C. Diot, R. Gass, J. Scott, Impact of human mobility on opportunistic forwarding algorithms, IEEE Transactions on Mobile Computing 6 (6) (2007) 606–620.opsahl2009clustering T. Opsahl, P. Panzarasa, Clustering in weighted networks, Social Networks 31 (2) (2009) 155–163.moradabadi2016link B. Moradabadi, M. R. Meybodi, Link prediction based on temporal similarity metrics using continuous action set learning automata, Physica A 460 (2016) 361–373.zweig2011good K. Zweig, Good versus optimal: Why network analytic methods need more systematic evaluation, Open Computer Science 1 (1) (2011) 137–153.hofman2017prediction J. M. Hofman, A. Sharma, D. J. Watts, Prediction and explanation in social systems, Science 355 (6324) (2017) 486–488.subrahmanian2017predicting V. Subrahmanian, S. Kumar, Predicting human behavior: The next frontiers, Science 355 (6324) (2017) 489–489.zanin2016combining M. Zanin, A. Papo, P. A. Sousa, E. Menasalvas, A. Nicchi, E. Kubik, S. Boccaletti, Combining complex networks and data mining: Why and how, Physics Reports 635 (2016) 1–44.sidiropoulos2006generalized A. Sidiropoulos, Y. Manolopoulos, Generalized comparison of graph-based ranking algorithms for publications and authors, Journal of Systems and Software 79 (12) (2006) 1679–1700.dunaiski2016evaluating M. Dunaiski, W. Visser, J. Geldenhuys, Evaluating paper and author ranking algorithms using impact and contribution awards, Journal of Informetrics 10 (2) (2016) 392–407.fiala2012time D. Fiala, Time-aware PageRank for bibliographic networks, Journal of Informetrics 6 (3) (2012) 370–388.smith2014poverty C. Smith-Clarke, A. Mashhadi, L. Capra, Poverty on the cheap: Estimating poverty maps using aggregated mobile communication networks, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, 2014, pp. 511–520.radicchi2016quantifying F. Radicchi, A. Weissman, J. Bollen, Quantifying perceived impact of scientific publications, arXiv preprint arXiv:1612.03962.van2013non P. Van Mieghem, R. Van de Bovenkamp, Non-markovian infection spread dramatically alters the susceptible-infected-susceptible epidemic threshold in networks, Physical Review Letters 110 (10) (2013) 108701.koher2016infections A. Koher, H. H. Lentz, P. Hövel, I. M. Sokolov, Infections on temporal networks—a matrix-based approach, PONE 11 (4) (2016) e0151209.schubert1986relative A. Schubert, T. Braun, Relative indicators and relational charts for comparative assessment of publication output and citation impact, Scientometrics 9 (5-6) (1986) 281–291.vinkler1986evaluation P. Vinkler, Evaluation of some methods for the relative assessment of scientific publications, Scientometrics 10 (3-4) (1986) 157–177.zhang2014comparison Z. Zhang, Y. Cheng, N. C. Liu, Comparison of the effect of mean-based method and z-score for field normalization of citations at the level of web of science subject categories, Scientometrics 101 (3) (2014) 1679–1693.scholtes2014social I. 
Scholtes, R. Pfitzner, F. Schweitzer, The social dimension of information ranking: A discussion of research challenges and approaches, in: Socioinformatics: The Social Impact of Interactions between Humans and IT, Springer, 2014, pp. 45–61.pariser2011filter E. Pariser, The filter bubble: What the Internet is hiding from you, Penguin UK, 2011.del2016echo M. Del Vicario, G. Vivaldo, A. Bessi, F. Zollo, A. Scala, G. Caldarelli, W. Quattrociocchi, Echo chambers: Emotional contagion and group polarization on facebook, Scientific Reports 6 (2016) 37825.del2016spreading M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H. E. Stanley, W. Quattrociocchi, The spreading of misinformation online, PNAS 113 (3) (2016) 554–559.piramuthu2012input S. Piramuthu, G. Kapoor, W. Zhou, S. Mauw, Input online review data and related bias in recommender systems, Decision Support Systems 53 (3) (2012) 418–424.ruusuvirta2009online O. Ruusuvirta, M. Rosema, Do online vote selectors influence electoral participation and the direction of the vote, in: ECPR General Conference, September, 2009, pp. 13–12.peoples2016twitter B. K. Peoples, S. R. Midway, D. Sackett, A. Lynch, P. B. Cooney, Twitter predicts citation rates of ecological research, PLoS ONE 11 (11) (2016) e0166570.fortunato2006scale S. Fortunato, A. Flammini, F. Menczer, Scale-free network growth by ranking, Physical Review Letters 96 (21) (2006) 218701.konig2011network M. D. König, C. J. Tessone, Network evolution based on centrality, Physical Review E 84 (5) (2011) 056108.konig2014nestedness M. D. König, C. J. Tessone, Y. Zenou, Nestedness in networks: A theoretical model and some applications, Theoretical Economics 9 (3) (2014) 695–752.sendina2016assortativity I. Sendiña-Nadal, M. M. Danziger, Z. Wang, S. Havlin, S. Boccaletti, Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks, Scientific reports 6 (2016) 21297.medo2016identification M. Medo, M. S. Mariani, A. Zeng, Y.-C. Zhang, Identification and modeling of discoverers in online social systems, Scientific Reports 6 (2016) 34218.tomasello2017data M. Tomasello, G. Vaccario, F. Schweitzer, Data-driven modeling of collaboration networks: A cross-domain analysis, arXiv:1704.01342.lohmann2010eigenvector G. Lohmann, D. S. Margulies, A. Horstmann, B. Pleger, J. Lepsien, D. Goldhahn, H. Schloegl, M. Stumvoll, A. Villringer, R. Turner, Eigenvector centrality mapping for analyzing connectivity patterns in fmri data of the human brain, PLoS ONE 5 (4) (2010) e10232.
Advective superdiffusion in superhydrophobic microchannels

Olga I. Vinogradova (corresponding author: [email protected])

A.N. Frumkin Institute of Physical Chemistry and Electrochemistry, Russian Academy of Science, 31 Leninsky Prospect, 119071 Moscow, Russia
Institute of Mechanics, M.V. Lomonosov Moscow State University, 119991 Moscow, Russia
Department of Physics, M.V. Lomonosov Moscow State University, 119991 Moscow, Russia
DWI - Leibniz Institute for Interactive Materials, Forckenbeckstr. 50, 52056 Aachen, Germany

Abstract. We consider pressure-driven flows in wide microchannels, and discuss how a transverse shear, generated by misaligned superhydrophobic walls, impacts the cross-sectional spreading of Brownian particles. We show that such a transverse shear can induce an advective superdiffusion, which strongly enhances the dispersion of particles compared to normal diffusion, and that maximal cross-sectional spreading corresponds to a crossover between its subballistic and superballistic regimes. This allows us to argue that advective superdiffusion can be used for boosting the dispersion of particles at smaller Peclet numbers compared to known concepts of passive microfluidic mixing. This implies that our superdiffusion scenario allows efficient mixing of much smaller particles, or the use of much thinner microchannels, than the methods currently being exploited.

PACS numbers: 83.50.Rp, 47.61.-k

§ INTRODUCTION

Superhydrophobic (SH) textures in the Cassie state, where the texture is filled with gas, have motivated numerous studies during the past decade <cit.>. Such surfaces are important due to their superlubricating potential <cit.>. The use of highly anisotropic SH textures with generally tensorial effective hydrodynamic slip, 𝐛_eff <cit.> (due to secondary flows transverse to the direction of the applied pressure gradient <cit.>), provides new possibilities for hydrodynamic flow manipulation <cit.>. Recent studies have employed transverse components of flow in SH channels to fractionate large non-Brownian microparticles <cit.> or to enhance their mixing <cit.>. However, we are unaware of any previous work that has addressed the issue of diffusive transport of tiny Brownian particles by generating transverse flows in SH devices.

Diffusive transport controls diverse situations in biology and chemistry <cit.>, and its understanding is very important in many areas, including nanoswimmer propulsion <cit.> and the interpretation of modern nanovelocimetry experiments <cit.>. Dispersion of tiny Brownian particles in the cross-section of a microchannel with smooth walls at low Reynolds number, Re, which is relevant to many applications, is difficult, since normal diffusion (characterized by the linear time dependence of the mean squared displacement, σ^2 ∝ t) is slow compared with the convection of particles along the microchannel. Our strategy here is to enhance such a dispersion by using advective diffusion, which can be induced by generating a transverse component of flow.
Transverse flow generated by herringbone patterns in the Wenzel state (when liquid follows the topological variations of the surface) has already been successfully used for passive chaotic mixing of particles in a microchannel of thickness H comparable to its width W, and at a very large Peclet number, Pe <cit.>. Here we suggest that a dramatic improvement of cross-sectional dispersion in a very wide channel, W ≫ H, and at much smaller Pe (which is equivalent to significantly reduced particle sizes or channel thickness) could be achieved by inducing a superdiffusion, i.e. a situation when σ^2 ∝ t^α with α > 1. Depending on the value of α one usually distinguishes between subballistic (1 < α < 2), ballistic (α = 2), and superballistic (α > 2) regimes of superdiffusion <cit.>. Superdiffusion in a flow field has been studied by several groups for various macroscopic systems. A subballistic regime has been reported for random velocity fields <cit.>, and superballistic dispersion has been predicted for turbulent <cit.> and linear shear <cit.> flows, and for solute transport in a heterogeneous medium <cit.>. Some efforts have also gone into investigating the role of confinement in the emergence of superdiffusion <cit.>. However, advective superdiffusion on microscales has never been predicted theoretically, nor has it been used for microfluidic applications.

The presence of an additional length scale H in the system implies that diffusive behavior in a confined complex flow should differ from that in bulk liquid or near a single interface. Could various superdiffusive regimes be induced in microchannels with realistic flow parameters? If induced, how would they differ from those in bulk systems? What are the possible implications for microfluidic mixing? These questions still remain open, and we are unaware of any previous attempts to answer them.

In this paper we present a general strategy for inducing an advective superdiffusion in microchannels of high aspect ratio, W/H ≫ 1, which can be used for boosting the dispersion of Brownian particles between streams of the main (forward) Poiseuille flow. To enhance the mixing (homogenization) of particles over the cross-section of the channel we use secondary (transverse) shear flows generated in microchannels decorated by crossed identical SH stripes <cit.>, as sketched in Fig. <ref>. We show that such a flow configuration allows one to induce various scenarios of superdiffusion, and argue that a crossover between subballistic and superballistic regimes provides a large transverse dispersion of particles, which would be impossible in standard microchannels with smooth homogeneous walls or in devices currently used as microfluidic mixers.

§ SCALING THEORY

We first present the scaling approach which we have developed to evaluate hydrodynamic dispersion in a Poiseuille flow with superimposed uniform transverse shear:

U_x = U_sx + U_0 (1 - 4(z/H)^2),
U_y = 2 U_sy z/H,

where U_0 = -H^2 ∇P/(8μ) is the maximal velocity of a flow generated by a pressure gradient ∇P with no-slip walls, μ is the dynamic viscosity, and U_sx and U_sy are the (positive definite) averaged forward and transverse slip velocities at the channel walls located at z = ±H/2. We note that the transverse shear rate is equal to 2U_sy/H.

Brownian particles are injected from a point source located at (x,y,z)=(0,0,0) and then advected by the flow satisfying Eq.(<ref>).
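As a concrete illustration, the averaged flow field of Eq.(<ref>) can be evaluated directly. Below is a minimal Python sketch; the parameter values are illustrative assumptions, not those of the simulations reported later:

import numpy as np

# Averaged flow of Eq.(1): forward Poiseuille profile with uniform forward
# and transverse slip. All numerical values here are illustrative only.
H = 1.0                # channel thickness; walls at z = +/- H/2
U0 = 1.0               # maximal Poiseuille velocity, U0 = -H^2 grad(P)/(8 mu)
Usx, Usy = 0.1, 0.1    # averaged forward and transverse slip velocities

def mean_velocity(z):
    """Return (U_x, U_y) of the averaged flow at height z."""
    Ux = Usx + U0 * (1.0 - 4.0 * (z / H) ** 2)
    Uy = 2.0 * Usy * z / H
    return np.array([Ux, Uy])

print(mean_velocity(0.0))        # midplane: (Usx + U0, 0)
print(mean_velocity(0.5 * H))    # upper wall: (Usx, Usy)

Note that at the walls, z = ±H/2, the sketch returns the slip velocities (U_sx, ±U_sy), consistent with the misaligned-stripe configuration sketched in Fig. <ref>.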
The particle flux across the channel walls is zero; we neglect particle inertia and focus on the diffusive regime. Since both U_x and U_y depend only on z, the particle distribution in the z-direction will be governed by normal Brownian diffusion with zero average displacement. For an unbounded space we would have ⟨z⟩ = 0 and σ_z^2 = ⟨(z - ⟨z⟩)^2⟩ = 2Dt, where ⟨.⟩ denotes averaging over the ensemble of particles and D is the diffusion coefficient. In our case diffusion is constrained by the channel walls, so that some time later the particles become uniformly distributed between them:

σ_z^2 = 2Dt, t ≪ t_d,
σ_z^2 = H^2/12, t ≫ t_d.

Here we have defined the diffusion time scale, t_d = H^2/(2D), as the typical time for a single particle to cross the channel in the z-direction.

Particle dispersion in the y-direction reflects an interplay between diffusion and the transverse shear rate. Depending on t, different scenarios of particle spreading may occur. In the short-time regime, i.e. for t ≪ t_d, our shear flow can be treated as unbounded, since the spreading of particles is still unaffected by confinement. By substituting the expression for the transverse shear rate into the solution for the mean square displacement of Brownian particles in an unbounded linear shear <cit.> we obtain

σ_y^2 = 2Dt [1 + (1/3)(2U_sy t/H)^2], t ≪ t_d.

This expression defines a second time scale, t_s = H/(2U_sy), which is associated with the transverse shear. We note that, depending on the value of the Peclet number, Pe = U_0 H/D, the ratio t_d/t_s = Pe U_sy/U_0 can vary over a large interval. Two limits can now be discussed depending on the ratio t_d/t_s. When t_d/t_s ≪ 1, which is equivalent to Pe ≪ U_0/U_sy, the normal Brownian diffusion of particles provides their efficient spreading in the channel, since D is large. However, if t_d/t_s ≫ 1, which corresponds to Pe ≫ U_0/U_sy (or small D), normal diffusion is slow. Therefore, below we discuss this larger-Pe limit in more detail. We note that in this situation, if t ≪ t_s, the mean square displacement of particles scales as σ^2_y ∝ Dt, indicating normal Brownian diffusion. However, for t ≫ t_s, we deduce from Eq.(<ref>) σ^2_y ∝ D t^3 U^2_sy H^-2, which suggests a superdiffusion of particles in the superballistic regime.

In the long-time regime, t ≫ t_d, multiple rebounds of particles from the channel walls should inevitably lead to a random variation of the transverse velocity even in our directed shear flow. This situation is similar to that considered in prior work <cit.> on Brownian particles in a (bulk) random velocity field, which predicted σ^2_y ∝ t^3/2. The characteristic velocity and length are set by U_sy and H, so that dimensional analysis immediately leads to

σ^2_y ∝ D^-1/2 U^2_sy H t^3/2, t ≫ t_d.

We now summarize the different scaling expressions for σ^2_y, which determine several diffusion-advection regimes when Pe ≫ U_0/U_sy, and turn to dimensionless parameters:

σ^2_y/H^2 ∝ t/t_d, t ≪ t_s,
σ^2_y/H^2 ∝ (U_sy/U_0)^2 Pe^2 (t/t_d)^3, t_s ≪ t ≪ t_d,
σ^2_y/H^2 ∝ (U_sy/U_0)^2 Pe^2 (t/t_d)^3/2, t_d ≪ t.

Eqs.(5) include the ratio U_sy/U_0, which depends on the superhydrophobic texture topology only. We focus here on microfluidic applications, and therefore it is not the time but the channel length, x = λH, that serves as the main independent parameter of the problem. So we have to reformulate Eq.(<ref>) in terms of λ.
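Before turning to λ, these limiting behaviors in time are easy to tabulate. A minimal sketch (prefactors of order unity are omitted, and the crossover time t_s/t_d = (U_0/U_sy)/Pe follows from the definitions above):

import numpy as np

# Piecewise scaling of Eqs.(5), valid for Pe >> U0/Usy; prefactors omitted.
def sigma_y2_over_H2(tau, Pe, r):
    """tau = t/t_d, r = Usy/U0. Note that t_s/t_d = 1/(r*Pe)."""
    tau_s = 1.0 / (r * Pe)
    if tau < tau_s:
        return tau                          # normal diffusion
    elif tau < 1.0:
        return (r * Pe) ** 2 * tau ** 3     # superballistic, sigma^2 ~ t^3
    else:
        return (r * Pe) ** 2 * tau ** 1.5   # subballistic, sigma^2 ~ t^(3/2)

for tau in (1e-3, 1e-1, 10.0):
    print(tau, sigma_y2_over_H2(tau, Pe=100.0, r=0.1))

The three branches match continuously at tau = t_s/t_d and tau = 1, as can be checked by direct substitution.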
The time required for particles to migrate along the channel is t = λH/U_m, where U_m is the mean forward flow velocity at the locus of the particle assembly. At t ≪ t_d it is equal to the velocity at the midplane of the channel, U_m = U_0, but at t ≥ t_d this will be the mean forward velocity in the channel, U_m = 2U_0/3. Note that in both cases the relationship between λH and t is linear, t/t_d ∝ λ/Pe. Therefore, Eq.(<ref>) can be rewritten as

σ^2_y/H^2 ∝ Pe^-1 λ, λ ≪ U_0/U_sy,
σ^2_y/H^2 ∝ (U_sy/U_0)^2 Pe^-1 λ^3, U_0/U_sy ≪ λ ≪ Pe,
σ^2_y/H^2 ∝ (U_sy/U_0)^2 Pe^1/2 λ^3/2, λ ≫ Pe.

In the other limit, Pe ≪ U_0/U_sy, normal diffusion is expected, as discussed above, so that in this case we should also get σ^2_y/H^2 ∝ Pe^-1 λ. For a given diffusion coefficient we obtain for the superballistic regime σ^2_y ∝ U_0^2 t^3 ∝ U_0^-1 (when the flow is too fast the migration time is too small), while for the subballistic regime we get σ^2_y ∝ U_0^2 t^3/2 ∝ U_0^1/2. These two scalings imply the existence of an optimal U_0 and a corresponding Pe_max.

To illustrate this effect it is useful to divide the (λ, Pe) space into three regions of normal, subballistic, and superballistic diffusion, where the above scaling expressions for σ^2_y/H^2 approximately hold. Such a diagram is plotted in Fig. <ref>. The crossover loci here simply indicate that the limiting solutions for σ^2_y/H^2 given by Eqs.(<ref>) coincide for different regimes of diffusion. Apart from the curve λ = Pe^-3 (U_0/U_sy)^4 (separating the normal and subballistic diffusion), the other crossover loci, λ = Pe (separating the subballistic and superballistic regions) and λ = U_0/U_sy (between the normal and superballistic diffusion), are straight lines. Of course, in reality the limiting solutions for σ^2_y/H^2 cross over smoothly from one diffusion regime to another at those curves. We can now conclude that when λ is below U_0/U_sy only normal diffusion is expected. In other words, advective superdiffusion cannot be generated without a large slip at the SH walls. When λ is above U_0/U_sy, three regimes can be attained depending on the value of Pe. A very small Peclet number will lead to normal diffusion, but at larger Pe one can induce a subballistic, and at very large Pe a superballistic, regime. Fig. <ref> also immediately shows that the maximal spreading is attained at the crossover between the subballistic and superballistic regimes. This means that it happens when the time required for particles to migrate forward to a given cross-section and the time to diffuse to the channel walls are comparable.

§ SIMULATION METHOD

We model the motion of Brownian particles, i.e. a situation when inertia is neglected and the Langevin equations are reduced to first order,

𝐱̇ = 𝐮(𝐱) + 𝐫(t),

where 𝐱 is the particle position, 𝐮(𝐱) is the velocity field of the fluid, and 𝐫(t) is a random velocity component with a correlation time much smaller than the other time scales in the system. To discretize the equation we introduce a time grid {t_k} with step Δt and keep the random component 𝐫_k = (r_x, r_y, r_z) constant over the time step [t_k, t_k + Δt], with random variables r_x,y,z taken from a Gaussian distribution with zero average and dispersion u_r.

To validate that we model the dispersion of particles correctly, we first measure their diffusion coefficient. We integrate the equation of particle motion for an ensemble of 500 particles released at 𝐱_0 = (0,0,0) in a velocity field involving a uniform component 𝐮 = (U_0, 0, 0) and a random component 𝐫_k = (0, r_y, r_z), and then measure the dispersion of particle positions σ_y,z(t) at t ≤ T. We then fit the dispersion curves using the standard scaling σ_y,z^2(t) = 2D_y,z t.
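A minimal sketch of this validation run (here u_r is treated as the per-step random displacement amplitude, consistent with the relation D = u_r^2/(2Δt) quoted below; the random seed is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)

# Validation run: uniform flow (U0, 0, 0) plus random kicks in y and z.
U0, u_r, dt, T, N = 1.0, 0.2, 0.01, 10.0, 500
pos = np.zeros((N, 3))              # ensemble of N particles released at the origin

for _ in range(int(T / dt)):
    pos[:, 0] += U0 * dt                              # uniform forward advection
    pos[:, 1:] += u_r * rng.standard_normal((N, 2))   # Gaussian kicks, dispersion u_r

# Fit sigma_{y,z}^2 = 2 D t at t = T; the expected value is D = u_r^2/(2 dt),
# i.e. a renormalization coefficient D* = 1.
D_meas = pos[:, 1:].var(axis=0) / (2.0 * T)
print(D_meas, u_r**2 / (2.0 * dt))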
The diffusion coefficient is the same in all directions, D_z = D_y = D, and depends on the time step Δt and the dispersion u_r as D = D^* u_r^2/(2Δt), where D^* is a renormalization coefficient. We have computed the values of D^* using U_0 = 1, T = 10, several u_r in the range from 0.05 to 0.5, and Δt varying from 0.01 to 0.1. In all cases the simulations give D^* = 1, which confirms that the scaling holds in the whole range of our parameters.

The velocity field in the SH channel is calculated using the solution of the Stokes equations valid at H/L ≃ 1 <cit.>:

𝐮 = ⟨𝐮⟩ + 𝐮_1 + 𝐮_2.

Here ⟨𝐮⟩ = (U_x, U_y, 0) is the averaged flow profile defined by Eq.(<ref>), and 𝐮_1, 𝐮_2 are perturbations with zero mean over the cell volume due to the heterogeneous slippage at the lower and upper walls, respectively. The perturbation fields 𝐮_1 and 𝐮_2 are obtained using Fourier series with 50 harmonics <cit.>.

In simulations of superdiffusive regimes we also use an ensemble of 500 particles. To calculate the contribution of the fluid velocity field, 𝐮(𝐱), the equation of motion, Eq.(<ref>), is solved using the fourth-order Runge-Kutta method with the time step Δt = 0.01 L/U_0. The random component, of amplitude u_r = √(2DΔt), is kept constant over the time step. In these simulations bounce-back boundary conditions are applied at the channel walls.
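A minimal sketch of one such integration step, assuming a generic callable velocity(x) standing in for the Fourier-series field above (which is not reproduced here), and assuming the random kick is applied after the deterministic substep:

import numpy as np

# One advection-diffusion step: RK4 for the deterministic field plus a random
# kick of amplitude u_r = sqrt(2 D dt), with bounce-back at z = +/- H/2.
def step(x, velocity, D, dt, H, rng):
    k1 = velocity(x)
    k2 = velocity(x + 0.5 * dt * k1)
    k3 = velocity(x + 0.5 * dt * k2)
    k4 = velocity(x + dt * k3)
    x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(3)
    # Bounce-back: reflect excursions beyond the walls (small steps assumed).
    if x[2] > 0.5 * H:
        x[2] = H - x[2]
    elif x[2] < -0.5 * H:
        x[2] = -H - x[2]
    return x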
§ RESULTS AND DISCUSSION

In order to assess the validity of the above scaling approach we now model a situation where the transverse shear is created by SH walls <cit.>, as sketched in Fig. <ref>. Specifically, we consider a pressure-driven flow between two parallel SH surfaces separated by the distance H, which are decorated with identical periodic stripes of period L and gas-area fraction ϕ. We assume the SH surfaces to be flat with no meniscus curvature, so that the gas area is characterized by a local slip length b only, and the solid area has no slip. The lower- and upper-wall textures are misaligned by an angle π/2, and we align the x-axis and the pressure gradient with the bisector of this angle. The Reynolds number, Re = ρU_0H/μ, where ρ is the fluid density, is considered to be small, i.e. Re ≪ 1, so that the flow satisfies the Stokes equations.

The velocity field in such a channel has been calculated following the method described before <cit.>, and a typical cross-section is shown in Fig. <ref>, which illustrates that the transverse velocity is strongly inhomogeneous. The ratio U_0/U_sy, which controls the hydrodynamic dispersion, can be obtained by averaging the 3D velocity field 𝐮(x,y,z) over the periodic cell in the x,y-plane. If H = O(L) or larger, it can be evaluated by using the simple expression U_0/U_sy ≃ (1+2β_+)/(4β_-), where β_± = (b_eff^∥ ± b_eff^⊥)/(2H), with b_eff^∥,⊥ the eigenvalues of the slip-length tensor, 𝐛_eff, for a channel of finite H/L with one SH and one no-slip hydrophilic wall <cit.>. These eigenvalues have been calculated before <cit.>, and they depend on b, L/H, and ϕ. In our simulations below we use b = ∞, since it maximizes the effective slip. We employ L/H = 2, since this provides a significant transverse shear <cit.>. Finally, we consider several textures, with ϕ varying in the interval from 0.25 to 0.9, which with the prescribed parameters gives a variation of U_0/U_sy from ≃11.5 to 1.2. With these values, various regimes of superdiffusion can be expected at moderate Peclet numbers and an appropriate value of λ.

We now inject a large number of Brownian particles into the channel, track their instantaneous positions, and evaluate the dispersion σ_y at a given time. We have first plotted in Fig. <ref> the simulation results for σ_y/(Pe H) as a function of t/t_d obtained at ϕ = 0.5 and several Pe. A general conclusion from this plot is that the above scaling predictions given by Eq.(<ref>) are in good agreement with the simulation results. Thus, we see that all curves indeed overlap at long times. For relatively large Peclet numbers, Pe ≥ 200, the simulation data confirm the superballistic scaling (t/t_d)^3/2, while at smaller Peclet numbers, Pe ≤ 100, our results fully validate the predicted subballistic scaling (t/t_d)^3/4.

To examine the difference between the advective superdiffusion regimes we now vary Pe at fixed ϕ = 0.5 and determine the positions of individual particles at a given cross-section, λ = 50. For this gas-area fraction U_0/U_sy ≃ 7.4; therefore, according to Fig. <ref>, the values of Pe = 10, 50, and 300 should lead to a subballistic regime of superdiffusion, a crossover between the subballistic and superballistic regimes, and a superballistic superdiffusion, respectively. The simulation results are shown in Fig. <ref>(a), (b), and (c). We see that the crossover between the subballistic and superballistic regimes does lead to a homogeneous distribution of particles. The subballistic regime results in a rather homogeneous distribution of particles over the height of the channel, but σ_y remains small, so that particles are still focused near the midplane of the channel, y = 0. In contrast, in the superballistic regime the particle spreading in the y-direction is large, but due to the small σ_z the distribution of particles over the cross-section is highly inhomogeneous.

Finally, we explore in more detail the situation of maximal transverse hydrodynamic dispersion, σ_max/H, which occurs when the scaling law

Pe_max ∝ λ

is valid. Fig. <ref>(a) shows σ_y/H vs. Pe, calculated at fixed ϕ = 0.75 and several λ. It can be seen that σ_y/H increases with λ, and that for a given λ there indeed exists a Peclet number, Pe_max, which maximizes the dispersion. Note that the transverse dispersion induced by superdiffusion is large: already at a moderate λ = 50 it can be several times larger than the channel thickness, provided, of course, that Pe is optimal.
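The magnitude of this effect can be estimated directly from the crossover condition λ ≃ Pe in Eqs.(<ref>): at the optimum, σ_max/H ≃ λ U_sy/U_0 up to a prefactor of order unity. A minimal sketch, using the λ = 50 and U_0/U_sy ≃ 7.4 of the ϕ = 0.5 example above:

# Order-of-magnitude estimate at the optimum, Pe_max ~ lambda (prefactor ~ 1).
def optimum(lam, U0_over_Usy):
    Pe_max = lam                          # crossover lambda ~ Pe
    sigma_max_over_H = lam / U0_over_Usy  # from Eqs.(6) at the crossover
    return Pe_max, sigma_max_over_H

print(optimum(50, 7.4))   # -> (50, ~6.8): several channel thicknesses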
The scaling law, Eq.(<ref>), predicts that Pe_max grows linearly with λ. This is indeed the tendency shown by the simulation results. We now reproduce the data set from Fig. <ref>(a) in Fig. <ref>(b), but scale both coordinates by λ. Remarkably, and in agreement with our scaling analysis, the simulation data obtained for several λ do collapse into a single curve. This plot allows us to obtain the scaling prefactor in Eq.(<ref>), which for ϕ = 0.75 is found to be ≃ 1 (see Appendix <ref>). Similar curves, σ_y/(Hλ) vs. Pe/λ, have been calculated for several gas-area fractions, and we have again found that at a given ϕ they nearly coincide (see Appendix <ref>). We have then obtained from these simulation data the values of σ_max/(Hλ) and Pe_max/λ (which gives us exactly the scaling prefactor), and the results are plotted in Fig. <ref> as a function of ϕ. A first conclusion emerging from this plot is that ϕ is one of the key parameters determining the maximal value of the transverse dispersion, σ_max/(Hλ). Fig. <ref>(a) shows that at low ϕ the transverse hydrodynamic dispersion is very small. It grows with the gas-area fraction and reaches its maximum at ϕ ≃ 0.75 (see Appendix <ref> for an interpretation of this result). We also note that the scaling prefactor in Eq.(<ref>) decays with ϕ, as seen in Fig. <ref>(b). Altogether, the above simulation results do confirm our simple scaling laws. We can therefore conclude that these expressions provide a correct picture of the superdiffusive behavior of particles in the flow, even though they overlook many details.

§ FINAL REMARKS

In conclusion, we believe we have provided a satisfactory answer to the questions posed at the beginning of this paper. We have shown that by using wide microchannels with misaligned striped SH walls it is possible to induce an advective superdiffusion of Brownian particles, which could not be achieved in standard microfluidic devices with smooth walls or in devices currently used to enhance mixing at the microscale. We have developed scaling laws for the regimes of advective superdiffusion in such a microchannel, providing explicit expressions for the mean square displacement of particles as a function of channel thickness and length, Peclet number, and slip velocity at the walls. These scaling results have been validated by means of computer simulations. It has also been shown that advective superdiffusion can be used to efficiently mix Brownian particles, which is important in a variety of applications.

Certain aspects of our work warrant further comments. A striking conclusion from our work is that the surface textures which optimize σ_max/(Hλ) differ from those optimizing the effective (forward) slip. It is well known that the effective slip of SH surfaces is maximized by increasing the gas-liquid area fraction <cit.>. In contrast, we have shown that the dispersion of Brownian particles in SH microchannels is maximized by stripes with a smaller gas fraction, ϕ ≃ 0.75. In this situation the effective slip is relatively small, and yet the dispersion of particles is very strong. We should also stress that in bulk or near a single interface the optimal spreading of Brownian particles would obviously occur in the superballistic regime. Our work has shown that, contrary to the bulk situation, the maximal cross-sectional spreading at finite H corresponds to a crossover between the subballistic and superballistic regimes of superdiffusion. Therefore, one can conclude that the superdiffusive behaviour of particles in a confined flow is indeed very different from that expected for unbounded systems.

Prior work on passive micromixing <cit.> has exploited microchannels of W/H = O(1) and very large Peclet numbers, Pe = 10^3-10^6. This implies that previous methods have been designed to efficiently mix particles of a micron size or slightly smaller. We have addressed a different flow configuration, W/H ≫ 1, and have argued that advective superdiffusion can be used for boosting the dispersion of particles at much smaller Pe compared to known concepts of passive microfluidic mixing. Our optimal Pe has been found to be of the order of 100 or smaller. This implies that at the same mean flow velocities we could provide mixing of particles of size 10-50 nm (i.e. of truly nanoscopic particles, including proteins, viruses, etc.). Alternatively, our concept allows one to mix particles of size 0.1-1 μm at the same flow rate, Q = U_0HW, but in channels of much smaller H.
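These particle-size estimates can be checked with the Stokes-Einstein relation, D = k_BT/(6πμa), combined with Pe = U_0H/D. A minimal sketch, assuming water at room temperature and illustrative values of U_0 and H (neither is specified in the text):

import numpy as np

# Order-of-magnitude check of the quoted particle sizes. U0 and H below are
# illustrative assumptions; kT and mu correspond to water at room temperature.
kT = 4.1e-21    # J
mu = 1.0e-3     # Pa s
U0 = 1.0e-4     # m/s (assumed mean flow velocity)
H = 1.0e-5      # m (assumed channel thickness)

def particle_radius(Pe):
    """Radius a such that Pe = U0*H/D with D = kT/(6*pi*mu*a)."""
    D = U0 * H / Pe
    return kT / (6.0 * np.pi * mu * D)

for Pe in (10, 100, 1000):
    print(Pe, particle_radius(Pe))  # roughly 2 nm, 22 nm, 0.2 um

For Pe ≃ 100 this gives a radius of a few tens of nanometers, consistent with the 10-50 nm range quoted above; Pe ≃ 10^3 corresponds to sub-micron particles.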
This research was partly supported by the Russian Foundation for Basic Research (grant 15-01-03069).

§ CALCULATIONS OF OPTIMAL PECLET NUMBER AND DISPERSION

To estimate the optimal Peclet number and its dependence on the texture parameters we run simulations for several λ = 25, 50, 100 and several ϕ = 0.25, 0.5, 0.75, 0.9. The σ_y/(Hλ) vs. Pe/λ curves are plotted in Fig. <ref>. We see that at a given ϕ the results for different λ collapse into a single curve. To obtain the position of the maximum we fit these curves to the function

f_σ = (p_1 x^2 + p_2 x + p_3)/(x^2 + p_4 x + p_5),

where p_i, i = 1-5, are the fitting coefficients. To find the best fit for p_i we use the combined data set for all three values of λ. The values of Pe_max/λ and σ_max/(Hλ) are then calculated from the fitting function for each ϕ. The error bars in Fig. 7 for the maximal dispersion, Δσ_max, are found by estimating the root-mean-square deviation between the fit and the data. The error in Pe_max is estimated by expanding the fit function around its maximum, ΔPe ∼ √(Δσ_max/f''_σ(Pe_max/λ)).

§ THE EFFECT OF LARGE FORWARD SLIP ON THE DISPERSION

The scaling relations, Eq.(<ref>), have been obtained assuming that the slip velocities at the walls are small compared to that of the Poiseuille flow, U_sx, U_sy ≪ U_0, so that they do not include the forward slip U_sx. In this Appendix we estimate the contribution of a large forward slip, U_sx, to σ_y and Pe_max. When U_sx, U_sy ≃ U_0 the mean forward flow velocity is

U_m = U_0 + U_sx, t ≪ t_d,
U_m = 2U_0/3 + U_sx, t ≫ t_d.

Since t = λH/U_m, Eqs.(<ref>) and (<ref>) can be modified to give

σ_y^2/H^2 ∝ (U_sy/U_0)^2 λ^3/[(1+U_sx/U_0)^3 Pe], U_0/U_sy ≪ λ ≪ Pe,
σ_y^2/H^2 ∝ (U_sy/U_0)^2 Pe^1/2 λ^3/2/(2/3+U_sx/U_0)^3/2, λ ≫ Pe,

and

σ_y^2/(Hλ)^2 ∝ (U_sy/U_0)^2 λ/[(1+U_sx/U_0)^3 Pe], U_0/U_sy ≪ λ ≪ Pe,
σ_y^2/(Hλ)^2 ∝ (U_sy/U_0)^2 (Pe/λ)^1/2/(2/3+U_sx/U_0)^3/2, λ ≫ Pe.

When ϕ → 1, both U_sy and U_sx become large, and the dispersion in the first regime decays with ϕ, since σ_y^2/H^2 ∝ U_0 U_sy^2 U_sx^-3. However, it grows in the second regime, σ_y^2/H^2 ∝ U_0^1/2 U_sy^2 U_sx^-3/2. The values of Pe_max and σ_max can be deduced from the crossover between the two regimes. Straightforward calculations lead to

Pe_max ≃ λ (2/3+U_sx/U_0)/(1+U_sx/U_0)^2,

and

σ_max/H ≃ λ (U_sy/U_0)/√((2/3+U_sx/U_0)(1+U_sx/U_0)).

Therefore, we can conclude that both Pe_max and σ_max decay when ϕ → 1, and the maximal dispersion is attained at smaller ϕ.
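Returning to the fit of Appendix <ref>: a minimal sketch of the fitting step, where the data arrays are placeholders standing in for the simulated σ_y/(Hλ) vs. Pe/λ curves (which are not tabulated here):

import numpy as np
from scipy.optimize import curve_fit

# Rational fit used above: f_sigma = (p1 x^2 + p2 x + p3)/(x^2 + p4 x + p5),
# with x = Pe/lambda and f_sigma approximating sigma_y/(H lambda).
def f_sigma(x, p1, p2, p3, p4, p5):
    return (p1 * x**2 + p2 * x + p3) / (x**2 + p4 * x + p5)

# Placeholder data mimicking a collapse curve with an interior maximum.
x = np.linspace(0.1, 3.0, 40)
y = f_sigma(x, 0.3, 2.0, 0.0, 0.5, 1.0)
y += 0.01 * np.random.default_rng(1).standard_normal(x.size)

p_opt, _ = curve_fit(f_sigma, x, y, p0=np.ones(5))
xs = np.linspace(x.min(), x.max(), 1000)
i_max = np.argmax(f_sigma(xs, *p_opt))
print(xs[i_max], f_sigma(xs[i_max], *p_opt))  # -> Pe_max/lambda, sigma_max/(H lambda)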
http://arxiv.org/abs/1704.08272v2
{ "authors": [ "Tatiana V. Nizkaya", "Evgeny S. Asmolov", "Olga I. Vinogradova" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170426180614", "title": "Advective superdiffusion in superhydrophobic microchannels" }
http://arxiv.org/abs/1704.08268v1
{ "authors": [ "Katie Chamberlain", "Nicolas Yunes" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170426180126", "title": "Theoretical Physics Implications of Gravitational Wave Observation with Future Detectors" }